I feel like I'm the only one not getting the world models hype. We've been talking about them for decades now, and all of it is still theoretical. Meanwhile LLMs and text foundation models showed up, proved to be insanely effective, took over the industry, and people are still going "nah LLMs aren't it, world models will be the gold standard, just wait."

I bet LLMs and world models will merge. World models essentially try to predict the future, with or without actions taken. LLMs with tokenized image input can also be made to predict future image tokens. It's a very valuable supervised learning signal alongside text pre-training and various forms of RL.
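
To make this concrete, here is a minimal sketch of that shared objective. Everything in it is a placeholder - the vocabulary size, the pre-tokenized frames, and the trivial embedding+linear model standing in for a real transformer - but the loss is exactly LLM-style next-token cross-entropy, pointed at future frame tokens:

    import torch
    import torch.nn as nn

    # Fake tokenized video: frames quantized to discrete tokens (e.g. by a
    # VQ-VAE). Vocabulary size and sequence length are arbitrary.
    VOCAB, SEQ_LEN = 1024, 64
    frame_tokens = torch.randint(0, VOCAB, (1, SEQ_LEN))

    # Trivial stand-in for an autoregressive model (a real one would
    # condition on the whole prefix, not just the previous token).
    embed = nn.Embedding(VOCAB, 128)
    head = nn.Linear(128, VOCAB)

    # Next-token prediction: score the model's guess for token t+1 given
    # token t. The ordinary LLM pre-training loss, aimed at the future.
    logits = head(embed(frame_tokens[:, :-1]))
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, VOCAB), frame_tokens[:, 1:].reshape(-1))
    loss.backward()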

I think "world models" is the wrong thing to focus on when contrasting the "animal intelligence" approach (which is what LeCun is striving for) with LLMs, especially since "world model" means different things to different people. Some people would call the internal abstractions/representations that an LLM learns during training a "world model" (of sorts).

The fundamental problem with today's LLMs that will prevent them from achieving human-level intelligence and creativity is that they are trained to predict training set continuations, which creates two major limitations:

1) They are fundamentally a COPYING technology, not a learning or creative one. Of course, as we can see, copying in this fashion will get you an extremely long way, especially since it's deep patterns (not surface-level text) being copied and recombined in novel ways. But, not all the way to AGI.

2) They are not grounded, and are therefore going to hallucinate.

The animal intelligence approach, the path to AGI, is also predictive, but what you predict is the external world, the future, not training set continuations. When your predictions are wrong (per perceptual feedback) you take this as a learning signal to update your predictions to do better next time a similar situation arises. This is fundamentally a LEARNING architecture, not a COPYING one. You are learning about the real world, not auto-regressively copying the actions that someone else took (training set continuations).
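
For what it's worth, here is a cartoon of that loop, with the world, the internal model and the update rule all shrunk to toy placeholders (a linear predictor and hand-picked dynamics; nothing here is a serious implementation):

    import numpy as np

    rng = np.random.default_rng(0)
    w = np.zeros(2)  # the agent's internal model: a trivial linear predictor

    def world(state, action):
        # Hidden dynamics the agent never sees directly, only their outcomes.
        return 3.0 * state - 1.0 * action + rng.normal(scale=0.1)

    for _ in range(1000):
        state, action = rng.normal(), rng.normal()
        x = np.array([state, action])
        predicted = w @ x                  # predict the external world
        observed = world(state, action)    # perceptual feedback arrives
        surprise = observed - predicted    # prediction error
        # The error itself is the learning signal: do better next time.
        w += 0.05 * surprise * x

    print(w)   # converges toward the true dynamics [3.0, -1.0]

The point is the shape of the loop: the learning signal is generated by the agent's own prediction failures against reality, not supplied as a corpus of someone else's behavior.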

Since the animal is also acting in the external world that it is predicting, and learning about, this means that it is learning the external effects of its own actions, i.e. it is learning how to DO things - how to achieve given outcomes. When put together with reasoning/planning, this allows it to plan a sequence of actions that should achieve a given external result ("goal").

Since the animal is predicting the real world, based on perceptual inputs from the real world, this means that its predictions are grounded in reality, which is necessary to prevent hallucinations.

So, to come back to "world models": yes, an animal intelligence/AGI built this way will learn a model of how the world works - how it evolves, and how it reacts (how to control it) - but this behavioral model has little in common with the internal generative abstractions that an LLM will have learnt, and it is confusing to use the same name "world model" to refer to them both.

RL on LLMs has changed things. LLMs are not stuck in continuation-predicting territory any more.

Models build up this big knowledge base by predicting continuations. But then their RL stage gives rewards for completing problems successfully. This requires learning and generalisation to do well, and indeed RL marked a turning point in LLM performance.
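
As a toy illustration of that second stage (real RLVR pipelines use PPO/GRPO-style updates over full transformers; this only shows the reward-for-verified-answers shape, with every detail invented):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "task": answer 2 + 3. The policy is a categorical distribution
    # over answer tokens 0..9 - a stand-in for an LLM's output distribution.
    logits = np.zeros(10)

    def verify(answer):
        return 1.0 if answer == 5 else 0.0   # verifiable reward, no labels

    for _ in range(500):
        probs = np.exp(logits) / np.exp(logits).sum()
        answer = rng.choice(10, p=probs)     # sample a "continuation"
        reward = verify(answer)              # check it against ground truth
        # REINFORCE: raise the log-prob of the sampled answer when rewarded.
        grad = -probs
        grad[answer] += 1.0
        logits += 0.1 * reward * grad

    print(int(np.argmax(logits)))   # -> 5: the rewarded answer wins out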

A year after RL was made to work, LLMs can now operate in agent harnesses over 100s of tool calls to complete non-trivial tasks. They can recover from their own mistakes. They can write 1000s of lines of code that works. I think it’s no longer fair to categorise LLMs as just continuation-predictors.

Thanks for saying this. It never ceases to amaze me how many people still talk about LLMs like it’s 2023, completely ignoring the RLVR revolution that gave us models like Opus that can one-shot large chunks of code that work first time for novel use cases. Modern LLMs aren’t just trained to guess the next token, they are trained to solve tasks.

Forget 2023 - the advances in coding ability in just the last two months are amazing. But they are still not AGI, and it is almost certainly going to take more than just a new training regime such as RL to get there. Demis Hassabis estimates we need another 2-3 "transformer-level" discoveries before we do.

RL adds a lot of capability in the areas where it can be applied, but I don't think it really changes the fundamental nature of LLMs - they are still predicting training set continuations, but now trying to predict/select continuations that amount to reasoning steps, steering the output in directions that were rewarded during training.

At the end of the day it's still copying, not learning.

RL seems to mostly only generalize in-domain. The RL-trained model may be able to generate a working C compiler, but the "logical reasoning" that was baked into it to achieve this still doesn't stop it from telling you to walk to the car wash, leaving your car at home.

There may still be more surprises coming from LLMs - ways to wring more capability out of them, as RL did, without fundamentally changing the approach, but I think we'll eventually need to adopt the animal intelligence approach of predicting the world rather than predicting training samples to achieve human-like, human-level intelligence (AGI).

You can’t really say it is just predicting continuations when it is learning to write proofs for Erdos problems, formalise significant math results, or perform automated AI research. Those are far beyond what you get from just being a copying and re-forming machine; a lot of these problems require sophisticated application of logic.

I don’t know if this can reach AGI, or if that term makes any sense to begin with. But to say these models have not learnt from their RL seems a bit ludicrous. What do you think training to predict when to use different continuations is other than learning?

I would say LLMs’ failure cases, like failing at riddles, are more akin to our own optical illusions and blind spots than indicative of the nature of LLMs as a whole.

I think you're conflating mechanism with function/capability.

I'm not sure what I wrote that made you conclude that I thought these models are not learning anything from their RL training?! Let me say it again: they are learning to steer towards reasoning steps that during training led to rewards.

The capabilities of LLMs, both with and without RL, are a bit counter-intuitive, and I think that, at least in part, comes down to the massive size of the training sets and the even more massive number of novel combinations of learnt patterns they can therefore potentially generate...

In a way it's surprising how FEW new mathematical results they've been coaxed into generating, given that they've probably encountered a huge portion of mankind's mathematical knowledge, and can potentially recombine all of these pieces in at least somewhat arbitrary ways. You might have thought that there are results A, B and C hiding away in obscure mathematical papers that no human has thought to put together before (just because of the vast number of such potential combinations), and that might lead to some interesting result.

If you are unsure yourself about whether LLMs are sufficient to reach AGI (meaning full human-level intelligence), then why not listen to someone like Demis Hassabis, one of the brightest and best placed people in the field to have considered this, who says the answer is "no", and that a number of major new "transformer-level" discoveries/inventions will be needed to get there.

> What do you think training to predict when to use different continuations is other than learning?

Sure, training = learning, but the problem with LLMs is that that is where it stops, other than a limited amount of ephemeral in-context learning/extrapolation.

With an LLM, learning stops post-training when it is "born" and deployed, while with an animal that's when it starts! The intelligence of an animal is a direct result of its lifelong learning, whether that's imitation learning from parents and peers (and subsequent experimentation to refine the observed skill), or the never-ending process of observation/prediction/surprise/exploration/discovery which is what allows humans to be truly creative - not just behaving in ways that are endless mashups of things they have seen and read about other humans doing (cf. training set), but generating truly novel behaviors (such as creating scientific theories) based on their own directed exploration of gaps in mankind's knowledge.

Application of AGI to science and new discovery is a large part of why Hassabis defines AGI as human-equivalent intelligence, and understands what is missing, while others like Sam Altman are content to define AGI as "whatever makes us lots of money".

>The fundamental problem with today's LLMs that will prevent them from achieving human level intelligence, and creativity, is that they are trained to predict training set continuations, which creates two very major limitations:

I am of the opinion that imagination and creativity come from emotion, hence a machine that cannot "feel" will never be truly intelligent.

One can go ahead and object: but you are just a lump of meat, and if you can feel, then a computer of similar structure can too.

If we assume that physical reality is fundamental, then that might make sense. But what if consciousness is fundamental and reality plays on consciousness?

Then randomness, and in turn ideas, come from the attributes of the fundamental reality that we are in.

I'll try to simplify it. Imagine you have an idea that extends your life by a day. Then, from all the possible worlds, in some worlds you find yourself alive the next day (in others you are dead). But this "idea" you had was just one among the infinite sea of possibilities, and your consciousness inside one such world observes you having that idea and surviving for a day!

If you want to create a machine that can do that, it implies that you would have to be a consciousness inside a world within it (because the machine cannot pick valid worlds from infinite samples, but merely enables consciousness to exist in such suitable worlds). So it cannot be done in our reality!

Mayyyybe "Quantum Darwinism" is what I am trying to describe here...

> I am of the opinion that imagination and creativity comes from emotion

How do you see emotion as being necessary for creativity?

It sure seems that things like surprise-driven (prediction-failure-driven) "curiosity" and exploration ("I can't predict what will happen if I do X, so let me try") are what's behind creativity - pushing the boundaries of knowledge and discovering something new.
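
That surprise-as-motivation idea has a standard toy formalisation in RL (curiosity-style intrinsic reward); a minimal sketch, with all the numbers made up:

    import numpy as np

    rng = np.random.default_rng(0)

    # Two places the agent can poke at: one it already predicts well,
    # one it cannot predict yet. The error values are purely illustrative.
    error = {"familiar": 0.01, "novel": 0.90}   # current prediction error
    visits = {"familiar": 0, "novel": 0}

    for _ in range(200):
        # Intrinsic reward = expected surprise: go where prediction fails.
        regions = list(error)
        p = np.array([error[r] for r in regions])
        region = rng.choice(regions, p=p / p.sum())
        visits[region] += 1
        error[region] *= 0.97   # exploring there improves the model

    print(visits)   # exploration concentrates where the model was surprised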

Perhaps you mean artistic creativity rather than scientific, in which case we're talking about different things, but I'd agree with you since the goal of much art is to elicit an emotional response in those engaging with it.

I don't think there is anything stopping us from implementing emotions, every bit as real as our own, in some form of artificial life if we want to, though. At the end of the day emotion comes down to our primitive brain releasing chemicals like adrenaline, dopamine, etc. as a result of certain stimuli, the functioning of our brain/body being affected by those chemicals, and the feedback loop of us then recognizing how our brain/body is operating differently ("I feel sad/excited/afraid" etc). It's all very mechanical.

FWIW I think consciousness is also very mechanical, but it seems somewhat irrelevant to the discussion of intelligence/AGI.