I had lunch with Yann last August, about a week after Alex Wang became his "boss." I asked him how he felt about that, and at the time he told me he would give it a month or two to see how it went, and then figure out whether he should stay or find employment elsewhere. I told him he ought to just create his own company if he decides to leave Meta, and chase his own dream rather than work on the dreams of others.
That said, while I 100% agree with him that LLMs won't lead to human-like intelligence (I think AGI is now an overloaded term, but Yann uses it in its original sense), I'm not fully on board with his world model strategy as the path forward.
> I'm not fully on board with his world model strategy as the path forward
can you please elaborate on your strategy as the path forward?
You have to understand the strategy of all the other players:
Build attention-grabbing, monetizable models that subsidize (at least in part) the run-up to AGI.
Nobody is trying to one-shot AGI. They're grinding and leveling up while (1) developing core competencies around every aspect of the problem domain and (2) winning users.
I don't know if Meta is doing a good job of this, but Google, Anthropic, and OpenAI are.
Trying to go straight for the goal is risky. If the first results aren't economically viable or extremely exciting, the lab risks falling apart.
This is the exact point that Musk was publicly attacking Yann on, and it's likely the same one that Zuck pressed.
There are two points here. The first is that a strategy of monetizing models to fund the pursuit of AGI is indistinguishable from simply running a business selling LLM access. You don't actually need to be trying to reach AGI; you can just run an LLM company, and that is probably what these companies are largely doing. The AGI talk is just a recruiting/marketing strategy.
Secondly, it's not clear that the current LLMs are a run-up to AGI. That's what LeCun is betting on - that the LLM labs are chasing a local maximum.
I mean, Sutskever and Carmack are trying to one-shot AGI. We just don't talk about them as much as the product-focused labs because they aren't selling anything.
On recent podcasts, Ilya has said he's no longer assuming they can jump straight there.
> Trying to go straight for the goal is risky.
That's the point. You need to take more risk to pursue a different approach - the same bet OpenAI made initially.