Upvoting to encourage discussion of these differentiators:
"Apertus is a 70B and 8B parameter language model designed to push the boundaries of fully-open multilingual and transparent models. The model supports over 1000 languages and long context, it uses only fully compliant and open training data, and achieves comparable performance to models trained behind closed doors."
"pretrained on 15T tokens with a staged curriculum of web, code and math data"
"open weights + open data + full training details including all data and training recipes"
"Apertus is trained while respecting opt-out consent of data owners (even retrospectively), and avoiding memorization of training data"
At least not "open source"
> "open weights + open data + full training details including all data and training recipes"
Is it reproducible?
> respecting opt-out consent of data owners (even retrospectively)
Were they notified and given an option to opt out? Owners and authors are not the same people, and data owners aren't necessarily the copyright owners either.
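For what it's worth, "opt-out consent" in this space usually means honoring robots.txt-style crawler directives rather than notifying anyone individually. A minimal sketch of what such a check looks like, using Python's stdlib parser (the crawler name `SwissAI-Crawler` here is hypothetical, not Apertus's actual user agent):

```python
import urllib.robotparser

# Example robots.txt a site might publish to opt out of AI training crawls.
# The user-agent token is illustrative; real opt-outs target specific bots.
ROBOTS_TXT = """\
User-agent: SwissAI-Crawler
Disallow: /

User-agent: *
Allow: /
"""

def allows_crawler(robots_txt: str, user_agent: str, url: str) -> bool:
    """Return True if robots_txt permits user_agent to fetch url."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(user_agent, url)
```

Retroactive opt-out would mean re-running this check against each site's *current* robots.txt and dropping already-collected pages that now disallow the crawler.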
> avoiding memorization of training data
Not convincing.
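The claim is at least testable: the standard extraction probe prompts the model with a prefix from a training document and measures how much of the true continuation comes back verbatim. A minimal sketch of the scoring side, with the actual model call left abstract (token lists here are illustrative):

```python
def verbatim_prefix_len(reference: list[str], generated: list[str]) -> int:
    """Count how many leading tokens of the model's continuation
    match the held-out training continuation exactly."""
    n = 0
    for ref_tok, gen_tok in zip(reference, generated):
        if ref_tok != gen_tok:
            break
        n += 1
    return n

def looks_memorized(reference: list[str], generated: list[str],
                    threshold: int = 50) -> bool:
    """Flag a sample as memorized if the model reproduces at least
    `threshold` consecutive tokens of the true continuation.
    (50 is a common choice in extraction papers, not Apertus's number.)"""
    return verbatim_prefix_len(reference, generated) >= threshold
```

Running this over a large sample of training prefixes and reporting the memorization rate would be far more convincing than an unquantified claim.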
I saw some of the pretraining code on GitHub, but not the post-training code.
The post-training codebase is here: https://github.com/swiss-ai/posttraining