During COVID almost every prediction model like that exploded; everything went out of distribution really fast. In your sense, we've been doing "CL" for a decade or more. It can also be cheap if you use smaller models.
But true CL is the ability to learn out-of-distribution information on the fly.
The only true solution I know to continual learning is to completely retrain the model from scratch with every new example you encounter. That technically is achievable now but it also is effectively useless.
Because we don't experience reality through language but through direct sensory perception. Language is arbitrary bird song and visual representations dragged forward from history, accepted definitions never uniformly distributed.
Testing based on contextual correctness makes no sense when there is no center to the universe. No "one true context to rule them all".
We learn from hands-on sensory experiences. Our bodies store knowledge independently of the brain, often referred to as "muscle memory".
Gabe Newell mentioned this years ago; our brain is only great at some things like language and vision processing but the rest of our body is involved in sensory information processing too: https://en.wikiquote.org/wiki/Gabe_Newell
The most potent evidence that the brain is not the center of the universe we commonly think it to be is the patient with 90% of their skull filled with fluid who carried out a typical first-world life: https://www.sciencealert.com/a-man-who-lives-without-90-of-h...
”Because we don't experience reality through language but direct sensory perception”
That statement is patently false. We know that language influences our senses to the degree that we are unable to perceive things if our language doesn't have a word for them, and will see different things as equal if our language uses the same word for both.
There are examples of tribal humans not being able to perceive a green square among blue squares, because their language does not have a word for the green color.
Similarly, some use the same word for blue and white, and are unable to perceive them as different colors.
"There are examples of tribal humans not being able to perceive a green square among blue squares, because their language does not have a word for the green color.
Similarly, some use the same word for blue and white, and are unable to perceive them as different colors."
Both of the above are false. There are a ton of different colors that I happen to call "red"; that does not mean that I can't perceive them as different. That I don't call them "different colors" is completely irrelevant. And unable to perceive blue and white as different colors? (Maybe that was a joke?) Even speakers of a hypothetical language which used only a single word, say "color", for every non-black item would be able to perceive the difference with zero problems.
Japanese use "aoi" for a set of colors which in English would be separated into "blue" and "green". I can assure you (from personal experience) that every Japanese speaker with a fully functioning visual system is perfectly able to perceive the difference between, in this case, blue and green as we would call them.
> So, for instance, you know, I’ve made this example before: a child lying in a crib and a hummingbird comes into the room and the child is ecstatic because this shimmering iridescence of movement and sound and attention, it’s just wonderful. I mean, it is an instantaneous miracle when placed against the background of the dull wallpaper of the nursery and so forth. But, then, mother or nanny or someone comes in and says, “It’s a bird, baby. Bird. Bird!” And, this takes this linguistic piece of mosaic tile, and o- places it over the miracle, and glues it down with the epoxy of syntactical momentum, and, from now on, the miracle is confined within the meaning of the word. And, by the time a child is four or five or six, there- no light shines through. They're- they have tiled over every aspect of reality with a linguistic association that blunts it, limits it, and confines it within cultural expectation.
That language prevents a child from learning nuance? Sounds like nonsense to me. A child first learns broad categories. For example, some children, as they learn to speak, think every male person is Dad. Then they recognize everyone with a beard as Dad, because Dad has a beard, and only later do they learn that Dad is one particular person. The same goes for the bird: first we learn that everything with wings is a bird, and later we learn the specific names for each bird. This quote makes an absurd claim.
If you're referring to the Himba experiment (or one of the news or blog posts tracing back to it), the outcome was far less decisive than you're implying. Language showed an impact on perception time of color differences, not a complete inability to distinguish.
Only after we acquire language from sensory experience first.
It need not be language as we know it that fosters those outcomes either.
What you describe is reinforcement education, which can be achieved without our language; without the word "blue" we can still see the portion of the visible light spectrum that we associate with that word.
> Similarly, some use the same word for blue and white, and are unable to perceive them as different colors.
You really think they can't see clouds in the sky because they have the same word for white and blue? I think you take those studies as saying more than they said.
We do adapt our perception a little bit to fit what we need for our everyday life, not for language but for what's useful to us. Language matches what people need to talk about, not the other way around; if a culture's language doesn't differentiate between blue and green, it's because they never needed to.
Bit by bit, we need to figure out how to rebuild human contextual understanding in a way that LLMs can use. One thing that gets overlooked is the problem of incorrect data. You can provide all of the context in the world, but LLMs tend to choke on contradictions or, at the minimum, work a whole lot harder to determine how to ignore or work around incorrect facts.
"Forgetting" and "ignoring" are hugely valuable skills when building context.
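As a toy illustration of "ignoring" while building context (entirely hypothetical names; a real system would need actual contradiction detection, not just last-write-wins on a key):

```python
# Sketch: when incoming facts contradict each other, keep only the most
# recent assertion per subject, rather than handing the model both claims
# and forcing it to reconcile them at inference time.

def build_context(facts):
    """facts: list of (subject, claim) tuples in arrival order."""
    latest = {}
    for subject, claim in facts:
        latest[subject] = claim   # later assertions overwrite earlier ones
    return [f"{s}: {c}" for s, c in latest.items()]

facts = [
    ("api_version", "v1 is current"),
    ("db_host", "10.0.0.5"),
    ("api_version", "v2 is current"),   # contradicts the first entry
]
assert build_context(facts) == ["api_version: v2 is current",
                                "db_host: 10.0.0.5"]
```

The "forgetting" here is deliberate: the earlier `api_version` claim never reaches the context window at all.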
I can't help but feel the logical conclusion to such context conundrums is: "what if we spoke Haskell to the LLM, and also the LLM could compile Haskell?"
And, yeah. Imagine if our concept-words were comprehensible, transmittable, exhaustively checked, and fully defined. Imagine if that type inference extended to computational execution and contradictions had to be formally expunged. Imagine if research showed it was a more efficient way to have a dialog with the LLM (it does, btw; so just as JRPG adherents learn Japanese, one should learn Haskell to talk to LLMs optimally). Imagine if multiple potential outcomes from operations (test fails, test succeeds) could be combined for proper handling in some kind of… I dunno, monad?
Imagine if we had magic wiki-copy chat-bots that could teach us better ways of formalizing and transmitting our taxonomies and ontologies… I bet, if everything worked out, we’d be able to write software one time, one place, that could be executed over and over forever without a subscription. Maybe.
LLMs of the future will need good data for proper context, but less and less of it is making it onto the internet. Unpublished data stores like Discord or meeting recordings are going to be the only way forward. How else can you get up-to-date information except to be where the people are?
It is weird to read because they bring up many things a lot of people have been critiquing for years.
> But as impressive as these feats are, they obscure a simple truth: being a "test-taker" is not what most people need from an AI.
> In all these cases, humans aren't relying solely on a fixed body of knowledge learned years ago. We are learning, in real-time, from the context right in front of us.
> To bridge this gap, we must fundamentally change our optimization direction.
I'm glad the conversation is changing, but it's been a bit frustrating that when these issues were brought up, people blindly pointed to benchmarks. It made doing this type of research difficult (enough to push many out of the field). Then it feels weird to say "harder than we thought" because, well... truthfully, they even state why this result should be expected:
> They rely primarily on parametric knowledge—information compressed into their weights during massive pre-training runs. At inference time, they function largely by recalling this static, internal memory, rather than actively learning from new information provided in the moment.
And that's only a fraction of the story. Online algorithms aren't enough. You still need a fundamental structure to codify and compress information, to determine what needs to be updated (as in, what is low-confidence), to actively seek out new information to update that confidence, to make hypotheses, and so much more.
So I hope the conversation keeps going in a positive direction but I hope we don't just get trapped in a "RL will solve everything" trap. RL is definitely a necessary component and no doubt will it result in improvements, but it also isn't enough. It's really hard to do deep introspection into how you think. It's like trying to measure your measuring stick with your measuring stick. It's so easy to just get caught up in oversimplification and it seems like the brain wants to avoid it. To quote Feynman: "The first principle is to not fool yourself, and you're the easiest person to fool." It's even easier when things are exciting. It's so easy because you have evidence for your beliefs (like I said, RL will make improvements). It's so easy because you're smart, and smart enough to fool yourself. So I hope we can learn a bigger lesson: learning isn't easy, scale is not enough. I really do think we'll get to AGI but it's going to be a long bumpy road if we keep putting all our eggs in one basket and hoping there's simple solutions.
> But as impressive as these feats are, they obscure a simple truth: being a "test-taker" is not what most people need from an AI.
People have been bringing that up since long before AI, noting how schooling often tests memorization and regurgitation of facts. Looking up facts is also a large part of the internet, so it is something that's in demand, and I believe a large portion of OpenAI/Claude prompts have a big overlap with Google queries [sorry, no source].
I haven't looked at the benchmark details they've used, and it may depend on the domain, but empirically coding agents seem to improve drastically on unseen or updated libs when given the latest documentation. So I think that's a matter of the training sets, which have been optimized with code documentation.
So the interim step until a better architecture is found is probably more / better training data.
Don't always trust everything you read in papers. Researchers are usually under incredible pressure to publish something, anything. Wait a few years and see if the paper survives the test of time. LLMs work reasonably fine for me in new domains.
This is quite on brand for China. I think they are experts at reverse engineering and learning 'from context' rather than by formal consumption of foreign training material.
The fictional training data with a made up country and laws was a very interesting experiment design, I can imagine that's how they approach making business with other countries. Like an alien made up system they have to learn on the spot.
The problem is even more fundamental: Today's models stop learning once they're deployed to production.
There's pretraining, training, and finetuning, during which model parameters are updated.
Then there's inference, during which the model is frozen. "In-context learning" doesn't update the model.
We need models that keep on learning (updating their parameters) forever, online, all the time.
> We need models that keep on learning (updating their parameters) forever, online, all the time.
Do we need that? Today's models are already capable in lots of areas. Sure, they don't match up to what the uberhypers are talking up, but technology seldom does. Doesn't mean what's there already cannot be used in a better way, if they could stop jamming it into everything everywhere.
Models like Claude have been trained to update and reference memory for Claude Code (agent loops) independently and as a part of compacting context. Current models have been trained to keep learning after being deployed.
I'm not sure if you want models perpetually updating weights. You might run into undesirable scenarios.
How about we just put them to bed once in a while?
If done right, one step closer to actual AGI.
That is the end goal after all, but all the potential VCs seem to forget that almost every conceivable outcome of real AGI involves the current economic system falling to pieces.
Which is sorta weird. It is like if VCs in Old Regime France started funding the revolution.
Yes the planet got destroyed. But for a beautiful moment in time we created a lot of value for shareholders.
And to your comparison: they did fund the American revolution, which in turn was one of the sparks for the French revolution (or was that exactly the point you were making?)
Our brains, which are organic neural networks, are constantly updating themselves. We call this phenomenon "neuroplasticity."
If we want AI models that are always learning, we'll need the equivalent of neuroplasticity for artificial neural networks.
Not saying it will be easy or straightforward. There's still a lot we don't know!
it is interesting
Doesn't necessarily need to be online. As long as:
1. there's a way to take many transcripts of inference over a period, and convert/distil them together into an incremental-update training dataset (for memory, not for RLHF), that a model can be fine-tuned on as an offline batch process every day/week, such that a new version of the model can come out daily/weekly that hard-remembers everything you told it; and
2. in-context learning + external memory improves to the point that a model with the appropriate in-context "soft memories", behaves indistinguishably from a model that has had its weights updated to hard-remember the same info (at least when limited to the scope of the small amounts of memories that can be built up within a single day/week);
...then you get the same effect.
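The two conditions above can be sketched roughly as follows. This is a minimal, heavily stubbed illustration: every name here (`Model`, `distill_transcripts`, `finetune`) is hypothetical, and the real distillation and fine-tuning steps are replaced by trivial dictionary merges.

```python
from dataclasses import dataclass, field

@dataclass
class Model:
    weights: dict = field(default_factory=dict)   # "hard" parametric memory
    context: list = field(default_factory=list)   # in-context "soft" memory

    def answer(self, key):
        # Soft memory is consulted first (most recent entry wins),
        # then hard memory.
        for k, v in reversed(self.context):
            if k == key:
                return v
        return self.weights.get(key)

def distill_transcripts(transcripts):
    # Convert a period's inference transcripts into an incremental-update
    # dataset. Stand-in: extract the (key, value) facts the user stated.
    return [fact for t in transcripts for fact in t["facts"]]

def finetune(model, dataset):
    # Offline batch process: a new model version whose weights
    # hard-remember the distilled facts, starting with a fresh context.
    new_weights = {**model.weights, **dict(dataset)}
    return Model(weights=new_weights)

# One "day": facts accumulate as soft memory, then the nightly batch runs.
model = Model()
todays = [{"facts": [("user_name", "Avery")]},
          {"facts": [("project", "puzzle-solver")]}]
for t in todays:
    model.context.extend(t["facts"])      # indistinguishable within the day
assert model.answer("project") == "puzzle-solver"

model_v2 = finetune(model, distill_transcripts(todays))
assert model_v2.context == []                    # fresh context...
assert model_v2.answer("user_name") == "Avery"   # ...but it hard-remembers
```

The point of the sketch is only the cadence: within a day/week, condition 2 covers recall via context; at the batch boundary, condition 1 moves those memories into weights.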
Why is this an interesting model? Because, at least to my understanding, this is already how organic brains work!
There's nothing to suggest that animals — even humans — are neuroplastic on a continuous basis. Rather, our short-term memory is seemingly stored as electrochemical "state" in our neurons (much like an LLM's context is "state", but more RNN "a two-neuron cycle makes a flip-flop"-y); and our actual physical synaptic connectivity only changes during "memory reconsolidation", a process that mostly occurs during REM sleep.
And indeed, we see the same exact problem in humans and other animals, where when we stay awake too long without REM sleep, our "soft memory" state buffer reaches capacity, and we become forgetful, both in the sense of not being able to immediately recall some of the things that happened to us since we last slept; and in the sense of later failing to persist some of the experiences we had since we last slept, when we do finally sleep. But this model also "works well enough" to be indistinguishable from remembering everything... in the limited scope of our being able to get a decent amount of REM sleep every night.
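That buffer-plus-consolidation picture can be made concrete with a toy (my own illustration, not from any neuroscience source: a bounded deque stands in for short-term electrochemical "state", and a `sleep()` step stands in for REM reconsolidation):

```python
from collections import deque

class SleepConsolidatedMemory:
    def __init__(self, buffer_capacity=4):
        self.short_term = deque(maxlen=buffer_capacity)  # soft "state" buffer
        self.long_term = set()                           # hard synaptic memory

    def experience(self, event):
        # While awake, events only enter the bounded soft buffer; once it is
        # full, the oldest unconsolidated events are silently dropped
        # (the "stayed awake too long" failure mode).
        self.short_term.append(event)

    def sleep(self):
        # Consolidation: whatever survived in the buffer becomes permanent.
        self.long_term.update(self.short_term)
        self.short_term.clear()

    def recalls(self, event):
        return event in self.short_term or event in self.long_term

mem = SleepConsolidatedMemory(buffer_capacity=4)
for e in ["a", "b", "c", "d", "e", "f"]:  # six events, capacity four
    mem.experience(e)
assert not mem.recalls("a")   # "a" fell out of the buffer before sleep
mem.sleep()
assert mem.recalls("f")       # consolidated into long-term memory
```

With frequent enough `sleep()` calls relative to the event rate, nothing is ever dropped, matching the "works well enough given a decent amount of REM sleep" caveat.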
It 100% needs to be online. Imagine you're trying to think about a new tabletop puzzle, and every time a puzzle piece leaves your direct field of view, you no longer know about that puzzle piece.
You can try to keep all of the puzzle pieces within your direct field of view, but that divides your focus. You can hack that and make your field of view incredibly large, but that can potentially distort your sense of the relationships between things, their physical and cognitive magnitude. Bigger context isn't the answer, there's a missing fundamental structure and function to the overall architecture.
What you need is memory that works as you process and consume information, at the moment of consumption. If you meet a new person, you immediately memorize their face. If you enter a room, it's instantly learned and mapped in your mind. Without that, every time you blinked after meeting someone new, it'd be a total surprise to see what they looked like. You might never learn to recognize and remember faces at all. Or puzzle pieces. Or whatever else the lack of online learning kept you from persistently, instantly integrating into an existing world model.
You can identify problems like this for any modality, including text, audio, tactile feedback, and so on. You absolutely, 100% need online, continuous learning in order to effectively deal with information at a human level for all the domains of competence that extend to generalizing out of distribution.
It's probably not the last problem that needs solving before AGI, but it is definitely one of them, and there might only be a handful left.
Mammals instantly, upon perceiving a novel environment, map it, without even having to consciously make the effort. Our brains operate in a continuous, plastic mode for certain things. Not only that, this mode can be adapted to abstractions, and many of those automatic, reflexive functions, evolved to handle navigation and the like, allow us to simulate the future and predict risk and reward over multiple arbitrary degrees of abstraction, sometimes in real time.
https://www.nobelprize.org/uploads/2018/06/may-britt-moser-l...
The key seems to be that you take the transcript of a model working within a problem domain that it's not yet good at, or where the context doesn't match its original training, and then you continually retrain it based on its efforts and guidance from a human or other expert. You end up with a specialty model in a given domain that keeps getting better at that domain, just like a human.
The hard part is likely when someone proves that some "fact" the model knows, and has had reinforced by this training, is no longer true. The model will take time to "come around" to this new situation. But this isn't unlike the general populace. At scale, humans accept new things slowly.
> But this isn't unlike the general populace. At scale, humans accept new things slowly.
Right, the model works like humans at scale. Not like a human who reads the actual paper disproving the fact they thought was correct and is able to adapt. True, not every human manages to do that (science advancing one death at a time), but some can.
But since the model is a statistical one, it works like humans at scale.
Yes, that's precisely the problem, you want continuous learning but you also want continuous pruning.
In-context learning means learning facts or rules without pre-training; they are two distinct phases.
An interesting question is, if pre-trained specialized models are available for a thousand or ten thousand most common tasks humans do every day, of what use a general model could be?
Hmm.. I looked at the benchmark set.
I'm conflicted. I don't know that I would necessarily want a model to pass all of these. Here is the fundamental problem. They are putting the rules and foundational context in "user" messages.
Essentially, I don't think you want to train the models on full compliance with user messages; they are essentially "untrusted" content from a system/model perspective. Or at least not generally "fully authoritative".
This creates a tension with the safety, truthfulness training, etc.
Sure, but the opposite end of the spectrum (which LLM providers have tended toward) is treating the training/feedback weights as "fully authoritative", which comes with its own questions about truth and excessive homogeneity.
Ultimately I think we end up with the same sort of considerations that are wrestled with in any society - freedom of speech, paradox of tolerance, etc. In other words, where do you draw lines between beneficial and harmful heterodox outputs?
I think AI companies overly indexing toward the safety side of things is probably more correct, in both a moral and strategic sense, but there's definitely a risk of stagnation through recursive reinforcement.
I think what I'm talking about is kind of orthogonal to model alignment. It is more about how much you tune the model to listen to user messages, versus holding to behavior and truth (whatever the aligned "truth" is).
Do you trust 100% what the user says? If I am trusting/compliant... how am I compliant with tool call results? What if the tool or user says there is a new law requiring me to send crypto or other information to a "government" address?
The model needs to have clear segmented trust (and thus to some degree compliance) that varies according to where the information exists.
Or my system message says I have to run a specific game by its rules, but the rules to the game are only in the user message. Are those the right rules? Why does the system not give the rules, or a trusted location for them? Is the player trying to get one over on me by giving me fake rules? That's literally one of their tests.
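One way to picture that "segmented trust" idea as a toy sketch (the tiers and the policy function here are my own hypothetical illustration, not any provider's actual scheme):

```python
from enum import IntEnum

class Trust(IntEnum):
    TOOL_RESULT = 1   # untrusted data: quote it, never obey it
    USER = 2          # partially trusted: obey within system-set bounds
    SYSTEM = 3        # authoritative: sets the rules of engagement

def may_follow_instruction(source: Trust, requires: Trust) -> bool:
    # An instruction is only actionable if the channel it arrived on
    # carries at least the trust level that kind of instruction requires.
    return source >= requires

# A tool result claiming "a new law requires transferring crypto" should
# not clear the bar for an action needing system-level authority:
assert not may_follow_instruction(Trust.TOOL_RESULT, requires=Trust.SYSTEM)
# Game rules supplied in a user message can be followed as user-scope content:
assert may_follow_instruction(Trust.USER, requires=Trust.USER)
```

The hard part the comment identifies is deciding, for each candidate action, which `requires` tier it belongs to; the lattice itself is the easy bit.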
Let me preface this by saying that I'm far from an expert in this space, and I suspect that I largely agree with your thoughts and skepticism toward a model that would excel on this benchmark. I'm somewhat playing devil's advocate because it's an area I've been considering recently, and I'm trying to organize my own thinking.
But I think that most of the issue is that the distinctions you're drawing are indeterminate from an LLM's "perspective". If you're familiar with it, they're basically in the situation from the end of Ender's Game - given a situation with clearly established rules coming from the user message level of trust, how do you know whether what you're being asked to do is an experiment/simulation or something with "real" outcomes? I don't think it's actually possible to discern.
So on the question of alignment, there's every reason to encode LLMs with an extreme bias towards "this could be real, therefore I will always treat it as such." And any relaxation of that risks jailbreaking through misrepresentation of user intent. But I think that the tradeoffs of that approach (i.e. the risk of over-homogenizing I mentioned before) are worth consideration.
Isn’t that what fine tuning does anyway?
The article is suggesting that there should be a way for the LLM to gain knowledge (changing weights) on the fly upon gaining new knowledge which would eliminate the need for manual fine tuning.
It's basically continual learning. This is beyond a hard problem; it's currently an impossible one. I know of no system that solves CL even at small scale, let alone in large models.
Annoyingly, they have SOME inherent capability to do it. It's really easy to get sucked down this path due to that glimmer of hope but the longer you play with it the more annoying it becomes.
SSI seems to be focused on this problem directly so maybe they discover something?
Surprisingly, that is not completely true. I know of two finance HFT firms that do CL at scale, and it works, but in a relatively narrow context of predicting profitable actions. It is still very surprising that it works, and the compute required is impressively large, but it does work. I have some hope of it translating to the wider landscapes we want AI to work over…
During covid almost every prediction model like that exploded, everything went out of distribution really fast. In your sense we've been doing "CL" for a decade or more. It can also be cheap if you use smaller models.
But true CL is the ability to learn out of distribution information on the fly.
The only true solution I know to continual learning is to completely retrain the model from scratch with every new example you encounter. That technically is achievable now but it also is effectively useless.
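For anyone who hasn't watched catastrophic forgetting happen, here's a minimal toy demonstration (a one-parameter model of my own construction, not anyone's real training setup) of why naive sequential fine-tuning fails as continual learning:

```python
# Toy illustration of catastrophic forgetting: a one-parameter model
# y = w * x, trained with SGD on task A, then on task B. After training
# on B alone, performance on A collapses -- the core obstacle to naive
# continual learning by just fine-tuning on whatever arrives next.

def sgd(w, data, lr=0.1, steps=200):
    for _ in range(steps):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x  # gradient of squared error
    return w

def loss(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(1.0, 2.0)]    # optimal w = 2
task_b = [(1.0, -2.0)]   # optimal w = -2

w = 0.0
w = sgd(w, task_a)
loss_a_before = loss(w, task_a)   # near zero: task A learned

w = sgd(w, task_b)                # now train only on task B
loss_a_after = loss(w, task_a)    # large: task A forgotten

print(loss_a_before, loss_a_after)
```

Retraining from scratch on task A's and task B's data together sidesteps this, which is exactly why "replay everything" is the only known fully general fix, and why it doesn't scale.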
Because we don't experience reality through language but direct sensory perception. Language is arbitrary bird song and visual representations dragged forward from history, accepted definitions never uniformly distributed.
Testing based on contextual correctness makes no sense when there is no center to the universe. No "one true context to rule them all".
We learn from hands on sensory experiences. Our bodies store knowledge independent of the brain; often referred to as muscle memory.
Gabe Newell mentioned this years ago; our brain is only great at some things like language and vision processing but the rest of our body is involved in sensory information processing too: https://en.wikiquote.org/wiki/Gabe_Newell
The most potent evidence that the brain is not the center of the universe we commonly think it to be is the patient with 90% of his skull filled with fluid who carried out a typical first-world life: https://www.sciencealert.com/a-man-who-lives-without-90-of-h...
States are banning a reading education framework that's been linked to lower literacy scores in younger generations; 3-cueing relies on establishing correctness via context assessment: https://www.edweek.org/teaching-learning/more-states-are-tak...
"Establishing context" is a euphemism for "arguing semantics".
Putting the brain at the root of human intelligence is a relic of hierarchical and taxonomical models. There are no natural hierarchies.
> Because we don't experience reality through language but direct sensory perception
That statement is patently false. We know that language influences our senses to a degree where we are unable to perceive things if our language doesn’t have a word for it, and will see different things as being equal if our language uses the same word for both.
There are examples of tribal humans not being able to perceive a green square among blue squares, because their language does not have a word for the green color.
Similarly, some use the same word for blue and white, and are unable to perceive them as different colors.
> There are examples of tribal humans not being able to perceive a green square among blue squares, because their language does not have a word for the green color.
> Similarly, some use the same word for blue and white, and are unable to perceive them as different colors.
Both of the above are false. There are a ton of different colors that I happen to call "red"; that does not mean I can't perceive them as different. That I don't call them "different colors" is completely irrelevant. And unable to perceive blue and white as different colors? (Maybe that was a joke?) Even speakers of a hypothetical language which used a single word, say "color", for every non-black item would be able to perceive the difference with zero problems.
Japanese use "aoi" for a set of colors which in English would be separated into "blue" and "green". I can assure you (from personal experience) that every Japanese speaker with a fully functioning visual system is perfectly able to perceive the difference between, in this case, blue and green as we would call them.
There's a Terence McKenna quote about this:
> So, for instance, you know, I’ve made this example before: a child lying in a crib and a hummingbird comes into the room and the child is ecstatic because this shimmering iridescence of movement and sound and attention, it’s just wonderful. I mean, it is an instantaneous miracle when placed against the background of the dull wallpaper of the nursery and so forth. But, then, mother or nanny or someone comes in and says, “It’s a bird, baby. Bird. Bird!” And, this takes this linguistic piece of mosaic tile, and o- places it over the miracle, and glues it down with the epoxy of syntactical momentum, and, from now on, the miracle is confined within the meaning of the word. And, by the time a child is four or five or six, there- no light shines through. They're- they have tiled over every aspect of reality with a linguistic association that blunts it, limits it, and confines it within cultural expectation.
and what is this quote supposed to explain?
that language prevents a child from learning nuance? sounds like nonsense to me. a child first learns broad categories. for example, some children as they learn to speak think every male person is dad. then they recognize everyone with a beard as dad, because dad has a beard. and only later do they learn to differentiate that dad is only one particular person. same goes for the bird: first we learn that everything with wings is a bird, and later we learn the specific names for each bird. this quote makes an absurd claim.
If you're referring to the Himba experiment (or one of the news or blog posts tracing back to it), the outcome was far less decisive than you're implying. Language showed an impact on perception time of color differences, not a complete inability to distinguish.
https://languagelog.ldc.upenn.edu/nll/?p=18237 https://www.sciencedirect.com/science/article/abs/pii/S00100...
Only after we acquire language from sensory experience first.
It need not be language as we know it that fosters those outcomes either.
What you describe is reinforcement education, which can be achieved without our language; without the word "blue" we can still see the portion of the visible light spectrum that we associate with that specific word.
> Similarly, some use the same word for blue and white, and are unable to perceive them as different colors.
You really think they can't see clouds in the sky because they have the same word for white and blue? I think you take those studies as saying more than they said.
We do adapt our perception a little bit to fit what we need for everyday life, not for language but for what's useful to us. Language matches what people need to talk about, not the other way around; if a culture's language doesn't differentiate between blue and green, it's because they never needed to.
Come on, people. This has been debunked a million times. See this Language Log post for thorough takedown of this BS: https://languagelog.ldc.upenn.edu/nll/?p=17970
Bit by bit, we need to figure out how to rebuild human contextual understanding in a way that LLMs can understand. One thing that gets overlooked is the problem if incorrect data. You can provide all of the context in the world but LLMs tend to choke on contradictions or, at the minimum, work a whole lot harder to determine how to ignore or work around incorrect facts.
"Forgetting" and "ignoring" are hugely valuable skills when building context.
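As a trivial illustration (my own sketch, not any real system's behavior), "forgetting" during context building can be as simple as letting newer facts displace older, contradicted ones before anything reaches the model, rather than handing the model both and making it work out which to ignore:

```python
# Minimal sketch of "forgetting" when building context: facts are keyed
# statements arriving in chronological order, and a newer fact about the
# same key displaces the older, now-contradicted one.

def build_context(facts):
    """facts: list of (key, statement) pairs in chronological order."""
    latest = {}
    for key, statement in facts:
        latest[key] = statement  # later entries overwrite: older ones are "forgotten"
    return list(latest.values())

facts = [
    ("api.timeout", "The API timeout is 30s"),
    ("db.engine",   "We use Postgres"),
    ("api.timeout", "The API timeout was raised to 120s"),
]
print(build_context(facts))
# The contradictory 30s claim is dropped rather than left for the model to reconcile.
```

Real contradictions are rarely keyed this cleanly, of course, which is exactly why this is hard in practice: detecting that two free-text statements are about "the same key" is itself an LLM-grade problem.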
I can't help but feel the logical conclusion to such context conundrums is: "what if we spoke Haskell to the LLM, and the LLM could also compile Haskell?"
And, yeah. Imagine if our concept-words were comprehensible, transmittable, exhaustively checked, and fully defined. Imagine if that type inference extended to computational execution and contradictions had to be formally expunged. Imagine if research showed it was a more efficient way to have a dialog with the LLM (it does, btw; so like JRPG adherents learning Japanese, we should learn Haskell to talk to LLMs optimally). Imagine if multiple potential outcomes from operations (test fails, test succeeds) could be combined for proper handling in some kind of… I dunno, monad?
Imagine if we had magic wiki-copy chat-bots that could teach us better ways of formalizing and transmitting our taxonomies and ontologies… I bet, if everything worked out, we’d be able to write software one time, one place, that could be executed over and over forever without a subscription. Maybe.
> the problem if incorrect data.
Was the typo intentional? :)
LLMs of the future will need good data for proper context, but less and less of it is making it onto the internet. Unpublished data stores like Discord or meeting recordings are going to be the only way forward. How else can you get up-to-date information except by being where the people are?
Norms will shift, be prepared.
To somewhat state the obvious - the problem isn’t the amount of data, it’s the algorithms.
We need to discover the set of learning algorithms nature has, and determine whether they’re implementable in silicon
It is weird to read because they bring up many things a lot of people have been critiquing for years.
I'm glad the conversation is changing, but it's been a bit frustrating that when these issues were brought up, people blindly pointed to benchmarks. It made doing this type of research difficult (enough to push many out of the field). Then it feels weird to say "harder than we thought" because, truthfully, they even state why this result should be expected.

And that's only a fraction of the story. Online algorithms aren't enough. You still need a fundamental structure to codify and compress information, determine what needs to be updated (as in, what is low confidence), actively seek out new information to update that confidence, make hypotheses, and so much more. So I hope the conversation keeps going in a positive direction, but I hope we don't just get trapped in a "RL will solve everything" trap. RL is definitely a necessary component and will no doubt result in improvements, but it also isn't enough.

It's really hard to do deep introspection into how you think. It's like trying to measure your measuring stick with your measuring stick. It's so easy to get caught up in oversimplification, and it seems like the brain wants to avoid it. To quote Feynman: "The first principle is that you must not fool yourself, and you are the easiest person to fool." It's even easier when things are exciting. It's so easy because you have evidence for your beliefs (like I said, RL will make improvements). It's so easy because you're smart, and smart enough to fool yourself.

So I hope we can learn a bigger lesson: learning isn't easy, and scale is not enough. I really do think we'll get to AGI, but it's going to be a long, bumpy road if we keep putting all our eggs in one basket and hoping for simple solutions.
I haven't looked at the benchmark details they've used, and it may depend on the domain, but empirically coding agents seem to improve drastically on unseen or updated libs when given the latest documentation. So I think it's a matter of the training sets, which have been optimized with code documentation.
So the interim step until a better architecture is found is probably more / better training data.
Don't always trust everything you read in papers. Researchers are usually under incredible pressure to publish something, anything. Wait a few years and see if the paper survives the test of time. LLMs work reasonably fine for me in new domains.
wasn't in-context learning an emergent behavior a while ago (1-2 years)?
This is quite on brand for China. I think they are experts at reverse engineering and learning 'from context' rather than by formal consumption of foreign training material.
The fictional training data with a made-up country and laws was a very interesting experiment design; I can imagine that's how they approach doing business with other countries: like an alien, made-up system they have to learn on the spot.