Llama models pushed the envelope for a while, and having them "open-weight" allowed a lot of tinkering. I would say that most fine-tuning work evolved from building on top of Llama models.
Llama wasn’t Yann LeCun’s work and he was openly critical of LLMs, so it’s not very relevant in this context.
Source: himself https://x.com/ylecun/status/1993840625142436160 (“I never worked on any Llama.”) and a million previous reports and tweets from him.
He founded FAIR and the team in Paris that ultimately worked on the early Llama versions.
FAIR was founded in 2013 and Llama's first release was in 2023. Musk co-founded OpenAI in 2015, but no reasonable person credits him with ChatGPT in 2022.
> My only contribution was to push for Llama 2 to be open sourced.
Quite a big contribution in practice.
Sure, but I don't think that's relevant to a startup with $1B in VC money either. Meta can afford to (attempt to) commoditize their complement.