So what changed? They're surely not getting new data to train with, so what change in architecture caused this? Do we know anything about this model? My fear is that Anthropic cannot be the only one to have achieved it; OpenAI, Google, and even the Chinese labs see this and have probably achieved it too, at which point not releasing becomes moot.
For a year and a half now, the gains have come from post-training RL in a harnessed loop. That doesn't require new data, just compute cycles.
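To make the "no new data, just cycles" point concrete, here is a toy sketch of the general recipe (this is an assumption about the broad shape of RL post-training with verifiable rewards, not Anthropic's actual pipeline): the model samples candidate programs, a harness executes them against tests, and the pass/fail signal is the only supervision.

```python
import random

# Hypothetical candidate "patches" the policy can emit. In a real system
# these would be model-generated; here they're a fixed toy set.
CANDIDATES = [
    "lambda x: x + 1",   # fails the harness
    "lambda x: x * 2",   # passes the harness
    "lambda x: x - 1",   # fails the harness
]

def harness_reward(src: str) -> float:
    """Verifiable reward: execute the candidate against unit tests."""
    try:
        fn = eval(src)  # toy stand-in for running a generated patch
        return 1.0 if fn(3) == 6 and fn(0) == 0 else 0.0
    except Exception:
        return 0.0

def train(steps: int = 200, lr: float = 0.5, seed: int = 0) -> list[float]:
    """Policy-gradient-flavoured loop: sample, score with the harness,
    reinforce whatever passed. No human labels anywhere."""
    rng = random.Random(seed)
    weights = [1.0] * len(CANDIDATES)  # unnormalised preferences
    for _ in range(steps):
        total = sum(weights)
        probs = [w / total for w in weights]
        i = rng.choices(range(len(CANDIDATES)), weights=probs)[0]
        r = harness_reward(CANDIDATES[i])
        # multiplicative update: reward pushes up, failure pushes down
        weights[i] *= (1.0 + lr * (r - 0.5))
    total = sum(weights)
    return [w / total for w in weights]

probs = train()
best = max(range(len(probs)), key=lambda i: probs[i])
print(CANDIDATES[best])  # the policy concentrates on the passing candidate
```

The only ingredient consumed per step is compute (rollouts plus harness execution), which is why this scaling axis doesn't run into a data wall the way pre-training does.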
If that doesn’t worry you, it should.
Chinese companies have consistently been many months behind. I don't think they are hiding anything; they just don't have the compute capacity to match Anthropic's training runs. As for OpenAI, they are known to have nonpublic models; I agree that it's possible they are preparing a major release too. (It's also possible that they aren't, in which case it's quite a fumble for them.)
Well, the important thing is that they have a lot more data from people actually using their models. They have read billions more lines of private repos and implemented millions of patches, all of which feeds into the newer models.
More importantly, the model learns what behaviour people tend to appreciate and which changes are more likely to get approved. This real-world usage data is invaluable.
Exactly. As Claude grows in popularity, their available training data grows with it. I'd guess Anthropic has the most expansive SWE training data right now, or close to it. Considering how quickly Claude is gaining adoption, I expect their lead to grow quickly.
Assuming it's primarily a bigger model (given that it is slower), I'm sure there are a variety of improvements, but they probably mostly come down to: scaling keeps working. Are there fundamental improvements, though? I don't see signs of any.
A new pre-training run?