Wolfram’s “bigger brains” piece raises the intriguing question of what kinds of thinking, communication, or even entirely new languages might emerge as we scale up intelligence, whether in biological brains or artificial ones.
It got me thinking that, over millions of years, human brain volume increased from about 400–500 cc in early hominins to around 1400 cc today. And it's not just about size: the brain's wiring and complexity also evolved, which in turn drove advances in language, culture, and technology, all of which are deeply interconnected.
With AI, you could argue we’re witnessing a similar leap, but at an exponential rate. The speed at which neural networks are scaling and developing new capabilities far outpaces anything in human evolution.
It makes you wonder how much of the future will even be understandable to us, or if we’re only at the beginning of a much bigger story. Interesting times ahead.
The future that we don't understand is already all around us. We just don't understand it.
Is the future in the room with us right now?
It is the room! And everything in it.
This house has people in it!
https://en.wikipedia.org/wiki/This_House_Has_People_in_It
Alan Resnick seems to be of a similar mind to me, and perhaps to you as well? My favorite of his is https://en.wikipedia.org/wiki/Live_Forever_as_You_Are_Now_wi...
There is a popular misconception that neural networks accurately model the human brain. They are more a metaphor for neurons than a physical simulation of the brain.
There is also a popular misconception that LLMs are intelligently thinking programs. They are more like models that predict words in a way that gives the appearance of human intelligence.
That being said, it is certainly theoretically possible to simulate human intelligence and scale it up.
I often wonder if human intelligence is essentially just predicting words and phrases in a cohesive manner. Once the context size becomes large enough to encompass all of a person's history, predicting becomes indistinguishable from thinking.
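A toy sketch of what "predicting words" means at its most basic: a bigram model built from a tiny made-up corpus, greedily emitting whichever word most often followed the previous one. Everything here (the corpus, the names) is illustrative and nothing like a real LLM, but it shows how pure next-word prediction already produces sentence-shaped output.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word tends to follow which.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the corpus."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

# "Think" by repeatedly predicting the next word.
word, sentence = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    sentence.append(word)

print(" ".join(sentence))  # a statistically plausible, if mindless, continuation
```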
Maybe, but I don't think this is strictly how human intelligence works
I think a key difference is that humans are capable of being inputs into our own system
You could argue that any time humans do this, it is as a consequence of all of their past experiences and such. It is likely impossible to say for sure. The question of determinism vs non-determinism has been discussed for literal centuries I believe
But if AI gets to a level where it could be an input to its own system, and reaches a level where it has systems analogous to humans (long term memory, decision trees updated by new experiences and knowledge, etc.) then does it matter in any meaningful way if it is “the same” or just an imitation of human brains? It feels like it only matters now because AIs are imitating small parts of what human brains do but fall very short. If they could equal or exceed human minds, then the question is purely academic.
That's a lot of really big ifs that we are likely still a long way away from answering
From what I understand there is not really any realistic expectation that LLM-based AI will ever reach this complexity
The body also has memory and instinct. It's non-hierarchical, although we like to think that the mind dominates or governs the body. It's not that it's more or less than predicting, it's a different activity. Humans also think with all their senses. It'd be more or less like having an LLM with no modalities, or with every modality at once. Not sure this is even possible with the current way we model these networks.
And not just words. There is pretty compelling evidence that our sensory perception is itself prediction, that the purpose of our sensory organs is not to deliver us 1:1 qualia representing the world, but more like error correction, updates on our predictions.
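That predictive-processing idea can be sketched in a few lines: the brain carries a running estimate, and the senses deliver only the error between prediction and observation, which nudges the estimate. A toy scalar version (the numbers and the gain are made up; this is not a neuroscience model):

```python
belief = 20.0   # current internal estimate of, say, room temperature
gain = 0.3      # how strongly prediction error updates the belief

observations = [22.0, 22.5, 21.8, 22.1]
for obs in observations:
    error = obs - belief      # what the senses actually contribute: the surprise
    belief += gain * error    # update the prediction rather than receiving raw "qualia"
    print(f"observed {obs}, prediction error {error:+.2f}, new belief {belief:.2f}")
```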
This reads pretty definitively. Whether LLMs are intelligently thinking programs is still being actively debated in cognitive science and AI research.
> the brain’s wiring and complexity also evolved, which in turn drove advances in language, culture, and technology
Fun thought: to the extent that it really happened this way, our intelligence is minimum viable for globe-spanning civilization (or whatever other accomplishment you want to index on). Not average, not median. Minimum viable.
I don't think this is exactly correct -- there is probably some critical mass / exponential takeoff dynamic that allowed us to get slightly above the minimum intelligence threshold before actually taking off -- but I still think we are closer to it than not.
I like this idea. I’ve thought of a similar idea at the other end of the limit. How much less intelligent could a species be and evolve to where we’re at? I don’t think much.
Once you reach a point where cultural inheritance is possible, things pop off at a scale much faster than evolution. Still, it's interesting to think about a species where the time between agriculture and space flight is more like 100,000 or a million years than 10,000. Similarly, a species with less natural intelligence than us that is more advanced because it got a ten-million-year head start. Or a species with more natural intelligence than us that is behind.
Your analogy makes me think of boiling water. There’s a phase shift where the environment changes suddenly (but not everywhere all at once). Water boils at 100C at sea-level pressure. Our intelligence is the minimum for a globe-spanning civilization on our planet. What about an environment with different pressures?
It seems like an “easier” planet would require less intelligence and a “harder” planet would require more. This could be things like gravity, temperature, atmosphere, water versus land, and so on.
>It seems like an “easier” planet would require less intelligence and a “harder” planet would require more.
I'm not sure that would be the case if the Red Queen hypothesis is true. To use gaming nomenclature, you're talking about player versus environment (PvE). In an environment that is easy you would expect everything to turn to biomass rather quickly; if there were enough different lifeforms that you didn't immediately end up with a monoculture, the game would change from PvE to PvP. You don't have to worry about the environment, you have to worry about every other lifeform there. We see this a lot on Earth. Spines, poison, venom, camouflage, teeth, claws: they serve for both attack and protection against the other players of the life game.
In my eyes it would require far more intelligence on the easy planet in this case.
How about Argentine ants?
The word "civilization" is of course loaded. But I think the bigger questionable assumption is that intelligence is the limiting factor. Looking at the history that got us to having a globe-spanning civilization, the actual periods of expansion were often pretty awful for a lot of the people affected. Individual actors are often not aligned with building such a civilization, and a great deal of intelligence is spent on conflict and resisting the creation of the larger/more connected world.
Could a comparatively dumb species with different social behaviors, mating and genetic practices take over their planet simply by all actors actually cooperating? Suppose an alien species developed in a way that made horizontal gene transfer super common, and individuals carry material from most people they've ever met. Would they take over their planet really fast because, as soon as you land on a new continent, everyone you meet is effectively immediately your sibling, and of course you'll all cooperate?
Less fun thought: there's an evolutionary bottleneck which prevents further progress, because the cost/benefit tradeoffs don't favour increasing intelligence much beyond the minimum.
So most planet-spanning civilisations go extinct, because the competitive patterns of behaviour which drive expansion are too dumb to scale to true planet-spanning sentience and self-awareness.
Intelligence is ability to predict (and hence plan), but predictability itself is limited by chaos, so maybe in the end that is the limiting factor.
It's easy to imagine a more capable intelligence than our own due to having many more senses, maybe better memory than ourselves, better algorithms for pattern detection and prediction, but by definition you can't be more intelligent than the fundamental predictability of the world in which you are part.
> predictability itself is limited by chaos, so maybe in the end that is the limiting factor
I feel much of humanity's effectiveness comes from ablating the complexity of the world to make it more predictable and easier to plan around. Basically, we have certain physical capabilities that can be leveraged to "reorganize" the ecosystem in such a way that it becomes more easily exploitable. That's the main trick. But that's circumstantial and I can't help but think that it's going to revert to the mean at some point.
That's because in spite of what we might intuit, the ceiling of non-intelligence is probably higher than the ceiling of intelligence. Intelligence involves matching an intent to an effective plan to execute that intent. It's a pretty specific kind of system and therefore a pretty small section of the solution space. In some situations it's going to be very effective, but what are the odds that the most effective resource consumption machines would happen to be organized just like that?
Sounds kind of like the synopsis of the Three Body Problem.
I seriously doubt it, honestly, since humans have anatomical limitations keeping their heads from getting bigger quickly. We have to be able to fit through the birth canal.
Perfectly ordinary terrestrial mammals like elephants have much, much larger skulls at birth than humans, so it’s clearly a matter of tradeoffs, not an absolute limit.
Oh of course, but evolution has to work with what it’s got. Humans happened to fit a niche where they might benefit from more intelligence, elephants don’t seemingly fit such a niche.
> We have to be able to fit through the birth canal.
Or at least we used to, before the c-section was invented.
Indeed, but it hasn’t been around for long enough. We might evolve into birth by c-section, if we assume that humans won’t alter themselves dramatically by technological means over hundreds of thousands of years.
I feel like there’s also a maximum viable intelligence that’s compatible with reality. Beyond a certain point, the smarter people are, the higher the tendency for them to be messed up in some way.
IMO they will truly be unleashed when they drop the human-language intermediary and just look at distributions of binary functions. Truly, why are you asking the LLM in English to write Python code? The whole point of Python code was to make the machine code readable for humans, and when you drop that requirement, you can just work directly on the metal. Some model outputting an incomprehensible integrated circuit from the fab, its utility proved by fitting a function to some data with acceptable variance.
The language doesn’t just map to English, it allows high level concepts to be expressed tersely. I would bet it’s much easier for an LLM to generate python doing complex things than to generate assembly doing the same. One very simple reason is the context window.
In other words, I figure these models can benefit from layers of abstraction just like we do.
It allows these concepts to be expressed legibly for a human. Why would an AI model (not necessarily an LLM) need to write, say, "printf"? It does not need to understand that this is a print statement with certain expectations for how a print statement ought to behave in the scope of the shell. It already has all the information by virtue of running the environment. printf might as well be expressed as some n-bit integer for the machine, dispensing with all the window dressing we apply when writing functions by humans for humans.
Because there's a lot of work behind printf that the llm doesn't need or care to reproduce
You're not just using the language, but all of the runtime and libraries behind it
Thinking it's more efficient for the llm to reinvent it all is just silly
Right, and all of that in the library is built to be legible to the human programmer, with constraints to fit within the syntax of the underlying language. Imagine how efficient a function could be if it didn't need all of that window dressing. You could "grow" functions out of simulation and bootstrapping, have them be a black box whose output we harvest, not much different than, say, using an organism in a bioreactor to yield some metabolite of interest, where we might not know all the relevant pieces of the biochemical pathway but we score putative production mutants on yield alone.
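A minimal sketch of that kind of "growing", under purely illustrative assumptions: a random lookup table stands in for the black-box function, mutations are random byte changes, and candidates are scored only on how well their outputs match some target data, never on legibility.

```python
import random

target = {i: (3 * i + 1) % 256 for i in range(256)}   # behaviour we want to harvest

def score(table):
    """Yield = how many of the 256 inputs the black-box table maps correctly."""
    return sum(table[i] == target[i] for i in range(256))

def mutate(table):
    child = table[:]
    child[random.randrange(256)] = random.randrange(256)  # one random "mutation"
    return child

best = [random.randrange(256) for _ in range(256)]        # a random starting "organism"
best_score = score(best)
for _ in range(20_000):
    candidate = mutate(best)
    s = score(candidate)
    if s >= best_score:                                   # keep higher-yield mutants
        best, best_score = candidate, s

# The table improves steadily even though nothing about it is human-readable.
print(f"{best_score} / 256 inputs matched after a short, blind search")
```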
Indeed. And aside from that, LLMs cannot generalise out of distribution (OOD). There's relatively little training data of complex higher-order constructs in straight assembly, compared to, say, Python code. Plus, the assembly will be target-architecture specific.
I completely understand what you are saying, but the OP does make an interesting point.
Why would chain of thought work at all if the model wasn't gaining something by additional abstraction away from binary?
Maybe things even go in the other direction and the models evolve a language more abstract than English that we also can't understand.
The models will still need to interface though with humans using human language until we become some kind of language model pet dog.
“printf” is an n-bit integer already. All strings are also numbers.
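Taken literally, that is already true today. A tiny illustration:

```python
n = int.from_bytes(b"printf", "big")   # the six bytes of "printf" as one 48-bit integer
print(n)
print(n.to_bytes(6, "big").decode())   # round-trips back to "printf"
```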
> why are you asking the llm in english to write python code?
Perhaps the same reason networked computers aren’t just spitting their raw outputs at each other? Security, i.e. varied motivations.
That is a little bit of an appeal to precedent, I think. Networked computers don't spit their raw output at each other today because, so far, all network protocols were written by humans using these abstracted languages. In the future we have to expect otherwise as we drop the human out of the pipeline and seek the efficiencies that come from that. One might ask why the cells in your body don't signal via Python code and instead use signalling mechanisms like concentrations of sodium ions within the neuron to turn your English-language idea of "move arm" into an actual movement of the arm.
Well..except they do? HTTP is an anomaly in having largely human readable syntax, and even then we use compression with it all the time which translates it to a rarefied symbolic representation.
The limit beyond that would be skipping the compression step: the ideal protocol would be incompressible because it's already the most succinct representation of the state being transferred.
We're definitely capable of getting some of the way there by human design though: i.e. I didn't start this post by saying "86 words are coming".
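The compression point is easy to demonstrate with a made-up, repetitive HTTP-ish payload: it shrinks a lot on the first pass, but compressing the already-compressed bytes gains nothing, because they are already close to the most succinct representation.

```python
import zlib

payload = (b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n"
           b"Accept: text/html\r\n\r\n") * 20

once = zlib.compress(payload)
twice = zlib.compress(once)

# The second pass no longer shrinks it; the data is effectively incompressible.
print(len(payload), len(once), len(twice))
```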
> One might ask why the cells in your body don't signal via python code and instead use signalling mechanisms
Right. They don't just make their membranes chemically transparent. Same reason: security, i.e. the varied motivations of things outside the cell compared to within it.
I don't think using a certain language is more secure than just writing that same function call in some other language. Security in compute comes from privileged access for some agents and blacklisting others. The language doesn't matter for that. It can be a Python command, it can be a TCP packet, it can be a voltage differential; the actual "language" used is irrelevant.
All I am arguing is that languages and paradigms written to make sense to our English-speaking monkey brains are perhaps not the most efficient way to do things once we remove the constraint of having an English-speaking monkey brain as the software architect.
> Right. They don't just make their membranes chemically transparent. Same reason: security, i.e. the varied motivations of things outside the cell compared to within it.
Cells or organelles within a cell could be described as having motivations, I guess, but evolution itself doesn't really have motivations as such; it does have outcomes. If we take as an assumption that mitochondria did not evolve to exist within the cell so much as co-evolve with it after becoming part of the cell by some unknown mechanism, and that we have seen examples of horizontal gene transfer in the past, then, by the anthropic principle, multicellular life is already chimeric and symbiotic to a wild degree. So any talk of motivations of an organelle or cell or an organism is of a different degree from the motivations of an individual or of life itself, but not really of a different kind.
And if motivations of a cell are up for discussion in your context, and to the context of whom you were replying to, then it’s fair to look at the motivations of life itself. Life seems to find a way, basically. Its motivation is anti-annihilation, and life is not above changing itself and incorporating aspects of other life. Even without motivations at the stage of random mutation or gene transfer, there is still a test for fitness at a given place and time: the duration of a given cell or individual’s existence, and the conservation and preservation of a phenotype/genotype.
Life is, in its own indirect way, preserving optionality as a hedge against failure in the face of uncertain future events. Life exists to beget more life, each after its kind historically, in human time scales at least, but upon closer examination, life just makes moves slowly enough that the change is imperceptible to us.
Man’s search for meaning is one of humanity’s motivations, and the need to name things seems almost intrinsic to existence in the form of self vs not self boundary. Societally we are searching for stimuli because we think it will benefit us in some way. But cells didn’t seek out cell membrane test candidates, they worked with the resources they had, throwing spaghetti at the wall over and over until something stuck. And that version worked until the successor outcompeted it.
We’re so far down the chain of causality that it’s hard to reason about the motivations of ancient life and ancient selection pressures, but questions like this make me wonder: what if people are right that there are quantum effects in the brain, etc.? I don’t actually believe this! But as an example of the kinds of changes AI and future genetic engineering could bring, bear with me as a thought exercise. If we find out that humans are figuratively philosophical zombies, due to the way our brains and causality work, compared to some hypothetical future modified humans, would anything change in wider society? What if someone found out that if you change the cell membranes of your brain in some way, you’ll actually become more conscious than you would be otherwise? What would that even mean or feel like? Socially, where would that leave baseline humans?

The concept of security motivations in that context confronts me with the uncomfortable reality of historical genetic purity tests. For the record, I think eugenics is bad. Self-determination is good. I don’t have any interest in policing the genome, but I can see how someone could make a case for making it difficult for nefarious people to make germline changes to individual genomes. It’s probably already happening and likely will continue to happen in the future, so we should decide what concerns are worth worrying about, and what a realistic outcome looks like in such a future if we had our druthers. We can afford to be idealistic before the horse has left the stable, but likely not for much longer.
That’s why I don’t really love the security angle when it comes to motivations of a cell, as it could have a Gattaca angle to it, though I know you were speaking on the level of the cell or smaller. Your comment and the one you replied to inspired my wall of text, so I’m sorry/you’re welcome.
Man is seeking to move closer to the metal of computation. Security boundaries are being erected only for others to cross them. Same as it ever was.
That, and the fact that for the LLM there's plenty of source material associating abstractions expressed in English with code written in higher-level languages. Not so much associating abstractions with bytecode and binary.
A future AI (actual AI, not an LLM) would compute a spectrum of putative functions (1) and identify the functions that meet some threshold. You need no prior associations, only randomization of parameters and enough sample space. Given enough compute, all possible combinations of random binary could be modeled, and those satisfying the functional parameters would be selected. And they will probably look nothing like how we think of functions today.
1. https://en.wikipedia.org/wiki/Bootstrapping_(statistics)#/me...
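A minimal sketch of that selection step, with everything (the data, the candidate family, the threshold) as illustrative stand-ins rather than a real synthesis pipeline: generate candidates with random parameters, score each against sample data, and keep only those that clear the threshold.

```python
import random

# Noisy samples of an unknown relationship we want a function for.
data = [(x, 2.0 * x + 1.0 + random.gauss(0, 0.1)) for x in range(10)]

def mse(a, b):
    """Mean squared error of the candidate function y = a*x + b on the data."""
    return sum((a * x + b - y) ** 2 for x, y in data) / len(data)

# A "spectrum of putative functions": random (a, b) parameter pairs.
candidates = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(50_000)]

threshold = 1.0
selected = [(a, b) for a, b in candidates if mse(a, b) <= threshold]

print(f"{len(selected)} of {len(candidates)} random candidates met the threshold")
print("best candidate:", min(candidates, key=lambda ab: mse(*ab)))
```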
> Wolfram’s “bigger brains” piece
You mean the one linked at the top of the page?
Why is this structured like a school book report, written for a teacher who doesn’t have the original piece right in front of them?
Noted for next time. The article itself is excellent. Sorry if my comment felt out of place for HN. I added extra context to get more discussion going, this topic really interests me.
Four words into your post and I'm confident it's ChatGPT slop. Am I wrong?
>It makes you wonder how much of the future will even be understandable to us
There isn't much of a future left. But of what is left to humans, it is in all probability not enough time to invent any true artificial intelligence. Nothing we talk about here and elsewhere on the internet is anything like intelligence, even if it does produce something novel and interesting.
I will give you an example. For the moment, assume you come up with some clever prompt for ChatGPT or another one of the LLMs, and that this prompt would have it "talk" about a novel concept for which English has no appropriate words. Imagine as well that the LLM has trained on many texts where humans spoke of novel concepts and invented words for those new concepts. Will the output of your LLM ever, even in a million years, have it coin a new word to talk about its concept? You, I have no doubt, would come up with a word if needed. Sure, most people's new words would be embarrassing one way or another if you asked them to do so on the spot. But everyone could do this. The dimwitted kid in school that you didn't like much, the one who sat in the corner and played with his own drool, he would even be able to do this, though it would be childish and onomatopoeic.
The LLMs are, at best, what science fiction used to refer to as an oracle. A device that could answer questions seemingly intelligently, without having agency or self-awareness or even the hint of consciousness. At best. The true principles of intelligence, of consciousness are so far beyond what an LLM is that it would, barring some accidental discovery, require many centuries. Many centuries, and far more humans than we have even now... we only have eight or so 1-in-a-billion geniuses. And we have as many right now as we're ever going to have. China's population shrinks to a third of its current by the year 2100.
I’m probably too optimistic as a default, but I think it might be okay. Agriculture used to require far more people than it does now due to automation, and it certainly seems like many industries will be able to be partially automated with only incremental change to current technology. If fewer people are needed for social maintenance, then more will be able to focus on the sciences; so yes, we may have fewer people, but it’s quite possible we’ll have a lot more of them in science.
I don’t think AI needs to be conscious to be useful.
> The true principles of intelligence, of consciousness are so far beyond what an LLM is that it would, barring some accidental discovery, require many centuries.
I've been too harsh on myself for thinking it would take a decade to integrate imaging modalities into LLMs.
I mean, the integrated circuit, the equivalent of the evolution of multicellular life, was 1958. The microprocessor was 1971, and that would be what, Animals as a kingdom? Computers the size of a building are now the size of a thumb drive. What level are modern complex computer systems like ChatGPT? The level of chickens? Dogs? Whatever it is, it is light years away from what it was 50 years ago. We may be reaching physical limits for the size of circuits, but it seems like algorithm complexity and efficiency are moving fast and are nowhere near any physical limits.
We haven’t needed many insane breakthroughs to get here. It has mostly been iterating and improving, which opens up new things to develop, iterate, and improve. IBM’s Watson was a supercomputer in 2011 that could understand natural language. My laptop runs LLMs that can do that now. The pace of improvement is incredibly fast, and I would be very hesitant to say with confidence that human-level “intelligence” is definitely centuries away. 1804 was two centuries ago, and that was the year the locomotive was invented.