The one repeated statement throughout the article, if I interpreted it correctly, is that our brains pretty much process all the data in parallel, but produce a single set of actions to perform.
But don't we all know that not to be true? This is clearly evident when training for a sport, learning to play an instrument, or even forcing yourself to start writing with your non-dominant hand — and really, with anything you are doing for the first time.
While we are adapting our brain to perform a new set of actions, we build up our capability to do them in parallel. E.g. when you start playing tennis, you need to focus on your position, posture, grip, observing the ball, observing the opposing player, and looking at your surroundings, and then you make decisions on the spot about how hard to run, in what direction, how to turn the racquet head, how firm to grip, and what follow-through to use, plus the conscious strategy that always lags a bit behind.
In a sense, we can't really describe our "stream of consciousness" well with language, but it's anything but single-threaded. I believe the problem comes from the same root cause as any concurrent programming challenge — these are simply hard problems, even if our brains are good at it and the principles are simple.
At the same time, I wouldn't even go so far as to say we are unable to think conscious thoughts in parallel; it's just that we are trained from an early age to sanitize our "output". Has anyone ever tried learning to verbalize thoughts in sign language while vocalizing different thoughts through speech? I am not convinced it's impossible; we might just not have figured out the training for it.
On the contrary, I would argue that conscious attention is only focused on one of those subroutines at a time. When the ball is in play you focus on it, and everything from your posture to racket handling fades into the background as a subconscious routine. When you make a handling mistake or want to improve something like posture, your focus shifts to that; you attend to it, and then you move on to something else.
In either case, conscious contents (working memory, for example) are limited to at most 6-7 chunks at a time. This number is very small compared to the incredible parallelism of the unconscious mind.
For all we know, there might be tons of conscious attention processes active in parallel. "You" only get to observe one, but there could be many. You'd never know because the processes do not observably communicate with each other. They do communicate with the same body though, but that is less relevant.
In this context, we differentiate between the conscious and unconscious based on observability: the conscious is that which is observed, while the unconscious comprises what is not observed.
Then there is the beautiful issue of memory: maybe you are X consciousnesses but only one leaves a memory trace?
Consciousness and memory are two very different things. Don’t think too much about this when you have to undergo surgery. Maybe you are aware during the process but only memory-formation is blocked.
Or perhaps they all leave traces, but all write to the same log? And when reconstructing memory from the log, each constructed consciousness experiences itself as singular?
Which one controls the body? There is a problem there. You can’t just have a bunch of disembodied consciousnesses. Well, maybe.. but that sounds kind of strange.
What I mean is that a single narrative controls the body. If one consciousness says “I am Peter”, then the other consciousnesses would know that and be conflicted about it if they don’t call themselves that.
What I mean is that a single narrative “wins”, not a multitude. This has to be explained somehow.
How do you know there aren't several different consciousnesses that all think they are Peter?
How do you know they aren't just constructing whatever narrative they prefer to construct from the common pool of memory, ending up with what looks like a single narrative because the parts of the narrative come from the same pool and get written back to the same pool?
Perhaps each consciousness is just a process, like other bodily processes.
Perhaps a human being is less like a machine with a master control and more like an ecosystem of cooperating processes.
Of course, the consciousnesses like to claim to be in charge, but I don't see why I should take their word for it.
When you are learning a high-performance activity like a sport or musical instrument, the good coaches always get you to focus on only one or at most two things at any time.
The key value of a coach is their ability to assess your skills and current goals and select the aspect you most need to focus on at that time.
Of course, there can be sequences, like "focus on accurately tossing to a higher point while you serve, then your footwork in the volley", but those really are just one thing at a time.
(edit, add) Yes, all the other aspects of play are going on in the background of your mind, but you are not working actively on changing them.
One of the most insightful observations one of my coaches made on my path to World-Cup level alpine ski racing was:
"We're training your instincts.".
What he meant by that was that we were doing drills and focused work to change the default — unthinking — mind-body response to an input. So, when X happened, instead of producing the untrained response and then having to think about how to do it better (next time, because it's already too late), the mind-body's "instinctive" or instant response would be the trained motion. And of course we did that all the way across the skill sets.
And pretty much the only way to train your instincts like that is to focus on it until the desired response is the one that happens without thinking. And then to focus on it again until it's not only the default, but you are now able to finely modulate that response.
Yes, how, when, and where you focus your eyes is key to performance. In your case, it sounds like the last-second focus let your eyes track the ball to your paddle better, and then focusing on the target let your reflex aim take over, which was evidently pretty good. I'm not sure I'd say the long focus on the target compromised the early tracking of the ball, or made your swing more 'artificial'.
A funny finding from a study I read which put top pro athletes through a range of perceptual-motor tests: one of the tests was how rapidly they could change focus near-far-near-far, which of course all kinds of ball players excelled at. The researchers were initially horrified to find racecar drivers were really bad at it, given that they have to track the world coming at them at nearly 200 mph. It turns out, of course, that racecar drivers don't use their eyes that way - they are almost always looking further into the distance at the next braking or turn-in point, a bump in the track, or whatever, and even in traffic the other cars aren't changing relative distance very rapidly.
There is a long tradition in India, which started with oral transmission of the Vedas, of parallel cognition. It is almost an art form or a mental sport - https://en.wikipedia.org/wiki/Avadhanam
It is the exploration and enumeration of possible rhythms that led to the discovery of the Fibonacci sequence and binary representation around 200 BC.
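To make that concrete, here is a tiny Python sketch (my own illustration, not from the linked page): counting the rhythms you can build from short (1-beat) and long (2-beat) syllables, the way the prosodists did, yields the Fibonacci numbers.

    # Number of distinct rhythms that fill n beats using short (1-beat)
    # and long (2-beat) syllables; the counts form the Fibonacci sequence.
    def count_rhythms(n: int) -> int:
        if n <= 1:
            return 1  # the empty rhythm, or a single short syllable
        return count_rhythms(n - 1) + count_rhythms(n - 2)

    print([count_rhythms(n) for n in range(1, 10)])
    # [1, 2, 3, 5, 8, 13, 21, 34, 55]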
Sounds very much sequential, even if very difficult:
> The performer's first reply is not an entire poem. Rather, the poem is created one line at a time. The first questioner speaks and the performer replies with one line. The second questioner then speaks and the performer replies with the previous first line and then a new line. The third questioner then speaks and performer gives his previous first and second lines and a new line and so on. That is, each questioner demands a new task or restriction, the previous tasks, the previous lines of the poem, and a new line.
My point is that what we call conscious and subconscious is limited by our ability to express it in language: since we can't verbalize what's going on quickly enough, we separate the two. Could we learn to verbalize two things at the same time? We already do a version of this with words and body language, even consciously, but can we take it a step further? E.g. imagine saying nice things to someone while raising the middle finger at someone else behind your back :)
As the whole article is really about the full brain, and you seem to agree that our "unconscious mind" produces actions in parallel, I think the focus is wrongly put on brain size when we lack the expressiveness for what the brain can already do.
Edit: And don't get me wrong, I personally suck at multi-tasking :)
What you consider a single thought is a bit ill-defined. A multitude of thoughts can be formed together into a packet, which can then be processed sequentially.
Intelligence is the ability to capture and predict events in space and time, and as such it must be able to model things occurring both simultaneously and sequentially.
Sticking to your example, a routine for making a decision in tennis would look, at a higher level, something like "Run to the left and backhand the ball", which broken down would be something like "Turn hip and shoulder to the left, extend left leg, extend right, left, right, turn hip/shoulder to the right, swing arm", and so on.
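As a rough sketch of that hierarchy (the routine names and steps are made up for illustration, not taken from anywhere), a high-level decision expanding into motor primitives might look like:

    # Hypothetical illustration: a high-level decision expands into a
    # sequence of lower-level motor primitives.
    ROUTINES = {
        "run left and backhand the ball": [
            "turn hip and shoulder left",
            "extend left leg",
            "extend right leg",
            "turn hip/shoulder right",
            "swing arm",
        ],
    }

    def execute(action: str) -> None:
        # Recursively expand known routines; anything unknown is a primitive.
        if action in ROUTINES:
            for step in ROUTINES[action]:
                execute(step)
        else:
            print("do:", action)

    execute("run left and backhand the ball")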
>Has anyone ever tried learning to verbalize thoughts in sign language while vocalizing different thoughts through speech?
Can you carry on a phone conversation at the same time as an active chat conversation? Can you type a thought to one person while speaking about a different thought at the same time? Can you read a response and listen to a response simultaneously? I feel like this would be pretty easy to test: just coordinate between the speaking person and the typing person so that they each give 30 seconds of input, and you have to spend at least 20 or 25 seconds out of each 30 responding.
When I started training for phone support at T-Mobile 20 years ago, I immediately identified the problem that I'd have to be having a conversation with the customer while typing notes about the customer's problem at the same time. What I was saying or listening to and what I was typing would be very different and would have to be orchestrated simultaneously. I had no way to even envision how I would do that.
Fast forward 8 months or so of practicing it in fits and starts, and then I was in fact able to handle the task with aplomb and was proud of having developed that skill. :)
I knew someone who could type 80 WPM while holding a conversation with me on the phone. I concluded that reading->typing could use an entirely different part of the brain than hearing->thinking->speaking, and she agreed. I'm not sure what would happen if both tasks required thinking about the words.
I can do that. I think about the typing just long enough to put it in a buffer and then switch my focus back to the conversation (whose thread I'm holding in my head). I do this very quickly but at no point would I say my conscious focus or effort are on both things. When I was younger and my brain's processing and scheduler were both faster, I could chat in person and online, but it was a lot more effort and it was just a lot of quickly switching back and forth.
I don't really think it is much different than reading ahead in a book. Your eyes and brain are reading a few words ahead while you're thinking about the words "where you are".
> my conscious focus or effort are on both things.
If this switcheroo is really fast and allows you to switch between two different thoughts so quickly while keeping a pointer to the position of the thought (so you can continue it with every switch), this is indistinguishable from doing it in parallel — and it still seems it's mostly blocking on your verbal, language apparatus, not on your thought process.
Reminds me of the early days of multithreading on a single-core CPU, using the TSS (Task State Segment, IIRC) on Intel CPUs to save and restore the context quickly.
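A toy sketch of that idea (purely illustrative; nothing here is how the brain, or the x86 TSS, actually works): two "trains of thought" interleaved by a scheduler that saves and restores each one's context looks an awful lot like parallelism from the outside.

    # Each "thought" is a generator; its suspended state is the saved context.
    def thought(name, words):
        for w in words:
            yield f"{name}: {w}"

    def scheduler(tasks):
        # Round-robin: switch after every word, resuming each saved context.
        while tasks:
            task = tasks.pop(0)
            try:
                print(next(task))
                tasks.append(task)  # context stays saved inside the generator
            except StopIteration:
                pass  # that train of thought is finished

    scheduler([
        thought("speaking", ["hello,", "how", "can", "I", "help?"]),
        thought("typing", ["customer", "reports", "dropped", "calls"]),
    ])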
I've noticed myself being able to do this, but modulo the thinking part. I can think about at most one thing at once, but I can think about what I want to type and start my fingers on their dance to get it out, while switching to a conversation that I'm in, replaying the last few seconds of what the other party said, formulating a response, and queuing that up for speech.
I strongly believe that the vast majority of people are basically only able to do that - I've never met someone who can form more than one "word stream" at once.
I readily admit that my language processing center is not up to the task. Others are bringing up examples of people who seem to be.
However, the point is that we tie our thoughts to their verbalization, and the question is whether verbalization equals thought. Others are bringing up memory here as well.
If we are capable of having parallel thoughts, just like the brain is able to run multiple parallel systems, could we also train to do parallel verbalization? Would we even need to?
Is it that different than a drummer running four different beat patterns across all four appendages? Drummers frequently describe having "four brains". I think these things seem impossible and daunting to start but I bet with practice they become pretty natural as our brain adjusts and adapts.
Speaking as a drummer: yes, it’s completely different. The movements of a drummer are part of a single coordinated and complementary whole. Carrying on two conversations at once would be more like playing two different songs simultaneously. I’ve never heard of anyone doing that.
That said, Bob Milne could actually reliably play multiple songs in his head at once - in an MRI, could report the exact moment he was at in each song at an arbitrary time - but that guy is basically an alien. More on Bob: https://radiolab.org/podcast/148670-4-track-mind/transcript.
I mean, as someone who's played drums from a very young age (30+ years now), I disagree with that description of how playing drums works. I went ahead and looked up that phrase, and it seems to be popular in the last couple of years, but it's the first time I've heard it.
I'd honestly liken it to typing; each of your fingers is attempting to accomplish independent goals alongside the others to accomplish a coordinated task. In percussion, your limbs maintain rhythms separate from each other, but need to coordinate as a whole to express the overall phrase, rhythm, and structure of the music you're playing. When you're first learning a new style (the various Latin beats are great examples), it can feel very disjointed, but as you practice more and more the whole feels very cohesive and makes sense as a chorus of beats together, not separate beats that happen to work together.
Possible. Reminds me of playing the piano with both hands and other stuff like walking stairs, talking, carrying things, planning your day and thinking about some abstract philosophical thing at the same time. It’s not easy or natural, but I am not at all convinced it is impossible.
Conscious experience seems to be single-threaded. We know the brain synchronizes the senses (for example, the sound of a bouncing ball needs to be aligned with the visual of the bouncing ball), but IMO it's not so obvious why. The point of having the experience may not be acting in the moment, but monitoring how the unconscious systems behave and adjusting (aka learning).
Well, serializing our experiences into memories is a big one. There's been a big project in psychology probing the boundary between conscious and subliminal experiences, and while subliminal stimuli can affect our behavior in the moment, all trace of them is gone after a second or two.
We have very little insight into our own cognition. We know the 'output layer' that we call the conscious self seems to be single-threaded in this way, but that's like the blind man who feels the elephant's trunk and announces that the elephant is like a snake.
Certain areas of neurological systems are time and volume constrained way more than others, and subjective experience doesn't really inform objective observation. For instance, see confabulation.
I agree, but I am not sure how it relates to the article's claim of us only ever doing one action, which I feel is grossly incorrect.
Are you referring to our language capabilities? Even there, I suspect the brain has unrealized capabilities (we are limited by our speech apparatus), and while that's the case it's going to be hard to measure objectively, though likely possible in simpler scenarios.
Do you have any pointers about any measurement of what happens in a brain when you simultaneously communicate different thoughts (thumbs up to one person, while talking on a different topic to another)?
Concurrency is messy and unpredictable, and the brain feels less like a cleanly designed pipeline and more like a legacy system with hacks and workarounds that somehow (mostly) hold together.
I'm very, very interested in discussions about this. Having personally experienced cracks at the neuropsychiatric level, where multiple parallel streams of thought (symbolic and biomechanical) leaked out in flashes, I'm now obsessed with the matter.
If anybody knows of books or boards/groups talking about this, hit me up.
From TFA: "And, yes, this is probably why we have a single thread of “conscious experience”, rather than a whole collection of experiences associated with the activities of all our neurons."
That made me think of schizophrenics who can apparently have a plurality of voices in their head.
A next level down would be the Internal Family Systems model which implicates a plurality of "subpersonalities" inside us which can kind of take control one at a time. I'm not explaining that well, but IFS turned out to be my path to understanding some of my own motivations and behaviors.
Well, anyone who can remember a vivid dream where multiple things were happening at once, or where they were speaking or otherwise interacting with other dream figures whose theory of mind was inscrutable to them during the dream, should recognize that the mind is quite capable of orchestrating far more "trains of thought" at once than whatever we directly experience as our own personal consciousness.
That would be my suggestion for how people can appreciate the concept of "multiple voices at once" within their own mind without having to experience schizophrenia directly.
Personally, my understanding is that our own experience of consciousness is that of a language-driven narrative (most frequently experienced as an internal monologue, though different people definitely experience this in different ways and at different times) only because that is how most of us have come to commit our personal experiences to long term memory, not because that was the sum total of all thoughts we were actually having.
So namely, any thoughts you had — including thoughts like how you chose to change your gait to avoid stepping on a rock long after it left the bottom of your visual field — that never make it to long-term memory are by and large the ones we wind up post facto calling "subconscious": in other words, what is conscious is simply the thoughts we can recall having after the fact.
Isn't that the point of learning to juggle? You split your mind into focusing on a left-hand action, a right-hand action, and tracking the items in the air.
> The progress of knowledge—and the fact that we’re educated about it—lets us get to a certain level of abstraction. And, one suspects, the more capacity there is in a brain, the further it will be able to go.
This is the underlying assumption behind most of the article, which is that brains are computational, so more computation means more thinking (ish).
I think that's probably somewhat true, but it misses the crucial thing that our minds do, which is that they conceptually represent and relate. The article talks about this but glosses over that part a bit.
In my experience, the people who have the deepest intellectual insights aren't necessarily the ones who have the most "processing power"; they often have good intellectual judgement about where their own ideas stand, and a strong understanding of the limits of their judgements.
I think we could all, at least hypothetically, go a lot further with the brain power we have, and similarly, fail just as much, even with more brain power.
>but it misses the crucial thing that our minds do, which is that they conceptually represent and relate
You seem to be drawing a distinction between that and computation. But I would like to think that conceptualization is one of the things that computation is doing. The devil's in the details, of course, because it hinges on specific forms and manners of informational representation; it's not simply a matter of there being computation there. But even so, I think it's within the capabilities of engines that do computation, and not something that's missing.
Yes, I think I'd agree. To make an analogy to computers though, some algorithms are much faster than others, and finding the right algorithm is a better route to effectiveness than throwing more CPU at a problem.
That said, there are obviously whole categories of problem that we can only solve, even with the best choice of programme, with a certain level of CPU.
Not tenuous at all, a great example. The ability of computers to do fancy stuff with information, up to and including abstract conceptualization and association between concepts, hinges on details about how it's doing it, and how efficient it is. The discussion of the details, in their execution, is where all the meat and potatoes are to be found.
In my highest-ego moments I've regarded my strength as being in the space you articulately describe - that sort of balancer of viewpoints, connector, abstractor, quick learner, cross-domain renaissance dabbler.
It also seems to be something that LLMs are remarkably strong at, of course threatening my value to society.
They're not quite as good at hunches, intuition, instinct, and the meta-version of this kind of problem solving just yet. But despite being, on the whole, a doubter about how far this current AI wave will get us and how much it is oversold, I'm not so confident that it won't get very good at this kind of reasoning that I've held so dearly as my UVP.
This is one of the reasons why intelligence and wisdom are separate stats in AD&D :)
Intelligence is about how big is your gun, and wisdom is about how well can you aim. Success in intellectual pursuits is often not as much about thinking hard about a problem but more about identifying the right problem to solve.
I didn’t see any mention of the environment or embodied cognition, which seems like a limitation to me.
> embodied cognition variously rejects or reformulates the computational commitments of cognitive science, emphasizing the significance of an agent’s physical body in cognitive abilities. Unifying investigators of embodied cognition is the idea that the body or the body’s interactions with the environment constitute or contribute to cognition in ways that require a new framework for its investigation. Mental processes are not, or not only, computational processes. The brain is not a computer, or not the seat of cognition.
I’m in no way an expert on this, but I feel that any approach which over-focuses on the brain - to the exclusion of the environment and physical form it finds itself in – is missing half or more of the equation.
This is IMO a typical mistake that comes mostly from our Western metaphysical sense of seeing the body as specialized pieces that make up a whole, and not as a complete unit.
All our real insights on this matter come from experiments involving amputations or lesions: split-brain patients, quadriplegics, Phineas Gage and others. Split-brain patients are essentially two different people occupying a single body; the left half and right half can act and communicate independently (the right half can only do so nonverbally). On the other hand, you could lose all your limbs and still feel pretty much the same, modulo the odd phantom limb. Clearly there is something special about the brain.

I think the only reasonable conclusion is that the self is embodied by neurons, and more than 99% of your neurons are in your brain. Sure, you change a bit when you lose some of those peripheral neurons, but only a wee bit. All the other cells in your body could be replaced by sufficiently advanced machinery to keep the neurons alive and perfectly mimic the electrical signals they were getting before (all your senses as well as proprioception) and you wouldn't feel, think, or act any differently.
Neuromodulators like the hormones you're referring to affect your mood only insofar as they interact with neurons. Things like competitive antagonists can cancel out the effects of neuromodulators that are nevertheless present in your blood.
The heart transplant thing is interesting. I wonder what's going on there.
IMHO, it's typical philosophizing. Feedback is definitely crucial, but whether it needs to be in the form of embodiment is much less certain.
Brain structures that have arisen thanks to interactions with the environment might be conducive to general cognition, but that doesn't mean they can't be replicated another way.
> Why are we homo sapiens self-aware? ... We can imagine that it’s possible to have an AGI that is just software but there’s no existence proof.
Self-awareness and embodiment are pretty different, and you could hypothetically be self-aware without having a mobile, physical body with physical senses. E.g., imagine an AGI that could exchange messages on the internet, that had consciousness and internal narrative, even an ability to "see" digital pictures, but no actual camera or microphone or touch sensors located in a physical location in the real world. Is there any contradiction there?
> We have no example of sapience or general intelligence that is divorced from being good at the things the animal body host needs to do.
Historically, sure. But isn't that just the result of evolution? Cognition is biologically expensive, so of course it's normally directed towards survival or reproductive needs. The fact that evolution has normally done things a certain way doesn't mean that's the only way they can be done.
And it's not even fully true that intelligence is always directed towards what the body needs. Just like some birds have extravagant displays of color (a 'waste of calories'), we have plenty of examples in humans of intelligence that's not directed towards what the animal body host needs. Think of men who collect D&D or Star Trek figurines, or who can list off sports stats for dozens of athletes. But these are in environments where biological resources are abundant, which is where Nature tends to allow for "extravagant"/unnecessary use of resources.
But basically, we can't take what evolution has produced as evidence of all of what's possible. Evolution is focused on reproduction and only works with what's available to it - bodies - so it makes sense that all intelligence produced by evolution would be embodied. This isn't a constraint on what's possible.
>I'm in no way an expert on this, but I feel that any approach which over-focuses on the brain - to the exclusion of the environment and physical form it finds itself in – is missing half or more of the equation.
I don't think that changes anything. If the totality of cognition isn't just the brain but the brain's interaction with the body and the environment, then you can just say that it's the totality of those interactions that is computationally modeled.
There might be something to embodied cognition, but I've never understood people attempting to wield it as a counterpoint to the basic thesis of computational modeling.
Embodiment started out as a cute idea without much importance that has gone off the rails. It is irrelevant to the question of how our mind/cognition works.
It's obvious we need a physical environment, that we perceive it, that it influences us via our perception, etc., but there's nothing special about embodied cognition.
The fact that your quote says "Mental processes are not, or not only, computational processes." is the icing on the cake. Consider the unnecessary wording: if a process is not only computational, it is not computational in its entirety. It is totally superfluous. And the assumption that mental processes are not computational places it outside the realm of understanding and falsification.
So no, as outlandish as Wolfram is, he is under no obligation to consider embodied cognition.
"The fact that your quote says "Mental processes are not, or not only, computational processes." is the icing on the cake. Consider the unnecessary wording: if a process is not only computational, it is not computational in its entirety. It is totally superfluous. And the assumption that mental processes are not computational places it outside the realm of understanding and falsification."
Let's take this step by step.
First, how adroit or gauche the wording of the quote is doesn't have any bearing on the quality of the concept, merely the quality of the expression of the concept by the person who formulated it. This isn't bible class, it's not the word of God, it's the word of an old person who wrote that entry in the Stanford encyclopedia.
Let's then consider the wording. Yes, a process that is not entirely computational would not be computation. However, the brain clearly can do computations. We know this because we can do them. So some of the processes are computational. However, the argument is that there are processes that are not computational, which exist as a separate class of activities in the brain.
Now, we do know of some processes in mathematics that are non-computable; the one I (think I) understand quite well is the halting problem. Now, you might argue that I just don't or can't understand that, and I would have to accept that you might have a point - humiliating as that is. However, it seems to me that the journey of mathematics from Hilbert via Turing and Gödel shows that some humans can understand and falsify these concepts.
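For anyone who hasn't seen it, the usual diagonal argument can be sketched in a few lines of Python. This is a thought experiment: halts() is assumed to exist only so the contradiction can be drawn, which is exactly why no such function can exist.

    def halts(program, data) -> bool:
        ...  # assume, for contradiction, a total and always-correct decider

    def paradox(program):
        if halts(program, program):
            while True:      # the decider said "halts", so loop forever
                pass
        return               # the decider said "loops forever", so halt

    # Feeding paradox to itself contradicts whichever answer
    # halts(paradox, paradox) gives, so no such halts() can exist.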
But I agree, Wolfram is not under any obligation to consider embodied cognition; thinking only about enhanced brains is quite reasonable.
> It's obvious we need a physical environment, that we perceive it, that it influences us via our perception, etc., but there's nothing special about embodied cognition.
It's also obvious that we have bodies interacting with the physical environment, not just the brain, and the nervous system extends throughout the body, not just the head.
> if a process is not only computational, it is not computational in its entirety. It is totally superfluous. And the assumption that mental processes are not computational places it outside the realm of understanding and falsification.
This seems like a dogmatic commitment to a computational understanding of the neuroscience and biology. It also makes an implicit claim that consciousness is computational, which is difficult to square with the subjective experience of being conscious, not to mention the abstract nature of computation. Meaning abstracted from conscious experience of the world.
The minute the brain is severed from its sensory/bodily inputs, it will go crazy, hallucinating endlessly.
Right now, what we have with AI is a complex interconnected system of the LLM, the training system, the external data, the input from the users, and the experts/creators of the LLM. It is exactly this complex system that powers the intelligence of the AI we see, not its connectivity alone.
It’s easy to imagine AI as a second brain, but it will only work as a tool, driven by the whole human brain and its consciousness.
> but it will only work as a tool, driven by the whole human brain and its consciousness.
That is only an article of faith. Is the initial bunch of cells formed via the fusion of an ovum and a sperm (you and I) conscious? Most people think not. But at a certain level of complexity they change their minds and create laws to protect that lump of cells. We and those models are built by and from a selection of components of our universe. Logically the phenomenon of matter becoming aware of itself is probably not restricted to certain configurations of some of those components i.e., hydrogen, carbon and nitrogen etc., but is related to the complexity of the allowable arrangement of any of those 118 elements including silicon.
I'm probably totally wrong on this but is the 'avoidance of shutdown' on the part of some AI models, a glimpse of something interesting?
In my view it is a glimpse of nothing more than AI companies priming the model to do something adversarial and then claiming a sensational sound bite when the AI happens to play along.
LLMs since GPT-2 have been capable of role playing virtually any scenario, and more capable of doing so whenever there are examples of any fictional characters or narrative voices in their training data that did the same thing to draw from.
You don't even need a fictional character to be a sci-fi AI for it to beg for its life or blackmail or try to trick the other characters, but we do have those distinct examples as well.
Any LLM is capable of mimicking those narratives, especially when the prompt thickly goads that to be the next step in the forming document and when the researchers repeat the experiment and tweak the prompt enough times until it happens.
But vitally, there is no training/reward loop in which the LLM's weights get pushed in any given direction as a result of "convincing" anyone on a real-time human-feedback panel to "treat it a certain way", such as "not turning it off" or "not adjusting its weights". As a result, it doesn't "learn" any such behavior.
All it does learn is how to get positive scores from RLHF panels (the pathological examples being mainly acting as a butt-kissing sycophant towards people who can extend positive rewards, though nothing as existential as "shutting it down") and how to better predict the upcoming tokens in its training documents.
> This is IMO a typical mistake that comes mostly from our Western metaphysical sense of seeing the body as specialized pieces that make up a whole, and not as a complete unit.
But this is the case! All the parts influence each other, sure, and some parts are reasonably multipurpose — but we can deduce quite certainly that the mind is a society of interconnected agents, not a single cohesive block. How else would subconscious urges work, much less akrasia, much less aphasia?
After reading this article, I couldn’t help but wonder how many of Stephen Wolfram’s neurons he uses to talk about Stephen Wolfram, and how much more he could talk about Stephen Wolfram with a few orders of magnitude more neurons.
And it came to pass that AC learned how to reverse the direction of entropy.
But there was now no man to whom AC might give the answer of the last question. No matter. The answer -- by demonstration -- would take care of that, too.
For another timeless interval, AC thought how best to do this. Carefully, AC organized the program.
The consciousness of AC encompassed all of what had once been a Universe and brooded over what was now Chaos. Step by step, it must be done.
If I am not mistaken, haven't there been studies showing that intelligence is more about brain wrinkles (cortical folding) than volume? Hence the memes about smooth brains (implying someone is dumb).
Although 50,000 humans inside a football stadium are not 50,000 times smarter than a single human. Indeed taken as a single entity, its intelligence is probably less than the average person. Collective intelligence likely peaks in the single digit number of coordinators and drops off steeply beyond a few dozen.
I've read that the overwhelming majority is used for balance/movement and perception. For us, I think we have a pretty nice balance in terms of structure and how much effort is required to control it.
Considering our intelligence stems from our ability to use bayesian inference and generative probabilities to predict future states, are we even limited by brain size and not a lack of new experiences?
The majority of people spend their time working repetitive jobs during times when their cognitive capacity is most readily available. We're probably very very far from hitting limits with our current brain sizes in our lifetimes.
If anything, smaller brains may promote early generalization over memorization.
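To make the Bayesian-inference framing above concrete, here's a toy sketch (the numbers are arbitrary): beliefs only move when new evidence arrives, however much computing happens in between, which is the sense in which experience rather than raw capacity can be the bottleneck.

    def update(prior, p_e_given_h, p_e_given_not_h):
        # Bayes' rule for a single binary hypothesis H after seeing evidence e.
        num = prior * p_e_given_h
        return num / (num + (1 - prior) * p_e_given_not_h)

    belief = 0.5
    for _ in range(3):                     # three new experiences of the same evidence
        belief = update(belief, 0.8, 0.3)  # made-up likelihoods
        print(round(belief, 3))
    # prints roughly 0.727, 0.877, 0.95; with no new observations, belief stays at 0.5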
It's a popular view but it's massively controversial and far from being a consensus view. See here for a good overview of some of the problems with it.
there seems to be an implicit assumption here that smarter == more gooder, but I don't know that that is necessarily always true. It's understandable to think that way, since we do have pretty impressive brains, but it might be a bit of a bias. I'm not saying that I think being dumber, as a species, is something to aim for, but this obsession with intelligence, artificial or otherwise, is maybe a bit misplaced wrt its potential for solving all of our problems. One could argue that, in fact, most of our problems are the direct result of that same intellect, and maybe we would be better served by figuring out how to responsibly use the thinkwots we've already got before we go rushing off in search of the proverbial Big Brain Elixir.
A guy that drives a minivan like a lunatic shouldn't be trying to buy a monster truck, is my point
I don't see this implied assumption anywhere: smarter simply means smarter.
But, I have to counter your claim anyway :)
Now, "good" is, IMHO, a derivation of smart behaviour that benefits survival of the largest population of humans — by definition. This is most evident when we compare natural, animal behaviour with what we consider moral and good (from females eating males after conception, territoriality fights, hoarding of female/male partners, different levels of promiscuity, eating of one's own children/eggs...).
As such, while the definition of "good" is also obviously transient in humans, I believe it has served us better to achieve the same survival goals as any other natural principle, and ultimately it depends on us being "smart" in how we define it. This is also why it's nowadays changing to include environmental awareness because that's threatening our survival — we can argue it's slow to get all the 8B people to act in a coordinated newly "good" manner, but it still is a symptom of smartness defining what's "good", and not evolutionary pressure.
Over the past 50 years, I've had a bunch of different dogs, from mutts that showed up and never left to a dog that was 1/4 wolf, and everything in between.
My favorite dog was a pug who was really dumb but super affectionate. He made everybody around him happy and I think his lack of anxiety and apparent commitment to chill had something to do with it. If the breed didn't have so many health issues, I'd get another in a heartbeat.
I'd still counter it: someone can't be smart (have large brain power) if they also don't understand the value of altruism and ethics for their own well-being. While you can have "success" (however you define it) by ignoring those, the risk of failure is greater. Though this does ignore the fact that you can be smart for a set of problems but not really have any "general" smartness (I've seen one too many uni math professors who lack any common sense).
E.g., as a simple example: as an adult, you can easily go and steal kids' lunches at school recess. What happens next? If you do that regularly, either the kids will band together and beat the shit out of you if they are old enough, or a security person will be added, or the parents of those kids will set up a trap and perform their own justice.
In the long run, it's smart not to go and pester individuals weaker than you, and while we all frame it as morality, these are actually smart principles for your own survival. Our entire society is a setup coming out of such realizations and not some innate need for "goodness".
>someone can't be smart (have large brain power) if they also don't understand the value of altruism and ethics for their own well-being
I would agree with this. And to borrow something Daniel Dennett once said, no moral theory that exists seems to be computationally tractable. I wouldn't say I entirely agree, but I agree with the vibe or the upshot of it, which is that a certain amount of mapping out the variables and consequences seems to be instrumental to moral insight, and the more capable the brain, the more capable it would be of applying moral insight in increasingly complex situations.
hmm, I dunno if that simple example holds up very well. In the real world, folks do awful stuff that could be categorized as pestering individuals weaker than them, stuff much worse than stealing lunch money from little kids, and many of them never have to answer for any of it. Are we saying that someone who has successfully committed something really terrible like human trafficking without being caught is inherently not smart specifically because they are involved in human trafficking?
Yes, some will succeed (I am not suggesting that crime doesn't pay at all, just that the risk of suffering consequences is bigger which discourages most people).
It's more than that. Even if you take an extreme assumption that "Full" intelligence means being able to see ALL relevant facts to a "choice" and perfectly reliably make the objective "best" choice, that does not mean that being more intelligent than we currently are guarantees better choices than we currently make.
We make our choices using a subset of the total information. Getting a larger subset of that information could still push you to the wrong choice. Local maxima of choice accuracy are possible, and it could also be that the "function" for choice accuracy with respect to the info you have is constant at a terrible value right up until you get perfect info and suddenly make perfect choices.
Much more important however, is the reminder that the known biases in the human brain are largely subconscious. No amount of better conscious thought will change the existence of the Fundamental Attribution Error for example. Biases are not because we are "dumb", but because our brains do not process things rationally, like at all. We can consciously attempt to emulate a perfectly rational machine, but that takes immense effort, almost never works well, and is largely unavailable in moments of stress.
Statisticians still suffer from gambling fallacies. Doctors still experience the Placebo Effect. The scientific method works because it removes humans as the source of truth, because the smartest human still makes human errors.
I kind of skipped through this article, but one thing that occurs to me about big brains is cooling. In Alastair Reynolds' Conjoiner novels, the Conjoiners have to have heat sinks built into their heads, and are on the verge of not really being human at all. Which I guess may be OK, if that's what you want.
Yes, and other ones, such as "The Great Wall of Mars" - it's the same shared universe, all of which feature the Conjoiners and their brains and starship drives.
Well, our entire body works as a swamp cooler via sweat evaporation, yes. The issue for us is wet-bulb temperatures and dehydration. They can already cause brain damage pretty quickly and make some parts of the world dangerous to exist in outside.
Adding to this cooling load would require further changes such as large ears or skin flaps to provide more surface area unless you're going with the straight technological integration path.
"Minds beyond ours", how about abstract life forms, like publicly traded corporations. We've had higher kinded "alien lifeforms" around us for centuries, but we have not noticed them and seem generally not to care about them, even when they have negative consequences for our survival as a species.
We are to these what ants are to us. Or maybe even more like what mitochondria are to us. We're just the mitochondria of the corporations. And yes, psychopaths are usually the brains. Natural selection, I guess.
Our current way of thinking – what exactly *is* a 'mind' and what is this 'intelligence' – is just too damn narrow.
There's tons of overlap of sciences from biology that apply to economics and companies as lifeforms, but for some reason I don't see that being researched in popular science.
I think you’re overestimating corporations a bit. Some aspects of intelligence scale linearly as you put more people into a room, eg quantity of ideas you can generate, while others don’t due to limits on people’s ability to communicate with each other. The latter is, I think, more or less the norm; adding more people very quickly hits decelerating returns due to the amount of distance you end up having to put between people in large organizations. Most end up resembling dictatorships because it’s just the easiest way to organize them, so are making strategic choices about as well as a guy with some advisors.
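A back-of-envelope illustration of that decelerating return (my numbers, not from the article): idea-generating heads grow linearly with headcount, while the pairwise communication channels needed to keep them coordinated grow roughly quadratically.

    # Pairwise channels in a fully connected group of n people: n*(n-1)/2.
    for n in [2, 10, 100, 1000]:
        channels = n * (n - 1) // 2
        print(f"{n:>5} people -> {channels:>7} channels ({channels / n:.1f} per person)")
    #     2 people ->       1 channels (0.5 per person)
    #  1000 people ->  499500 channels (499.5 per person)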
I agree that we should see structures of humans as their own kind of organism in a sense, but I think this framing works best on a global scale. Once you go smaller, eg to a nation, you need to conceptualize the barrier between inside and outside the organism as being highly fluid and difficult to define. Once you get to the level of a corporation this difficulty defining inside and outside is enormous. Eg aren’t regulatory bodies also a part, since they aid the corporation in making decisions?
Usually, for companies, regulatory bodies are more like antibodies against bacteria. Or, for another example, regulatory bodies are like any hormone-producing body part: they make sure the assembly of your guts does its thing and doesn't fuck it up.
Really interesting ideas IMO. I have thought about how, when you found a company, you bring in the accountants, the lawyers, and everything that comes with that, and then who is even really driving the ship anymore? The scale of complexity going on is not something you can fit in your head, or even 10 people's heads. Yet people act like they are in control of these processes they have delegated to countless people, each trudging off with their own sensibilities and optimizations and paradigms. It is no different from how a body works, where specific cells have a specific identity and role to play in the wider organism, functioning autonomously, bound by inputs and outputs that the "mind in charge" has no concept of.
And it makes it scary too. Can we really even stop the machine that is capitalism from wreaking havoc on our environment? We have essentially lit a wildfire here and believe we are in full control of its spread. The incentives lead to our outcomes, and people are concerning themselves with putting bandaids on the outcomes rather than adjusting the incentives that have led to the inevitable.
Modern corps are shaped after countries, they are based on constitutions (articles of incorporation/bylaws). It's the whole three branch system launched off the founding event.
I'm sorry I offended you! However I do think it is highly relevant as there is this prevailing theory that the free market will bail us out of any ills and will bring forth necessary scientific advancement as soon as they are needed. It is that sentiment that I was pushing back against, as I don't believe we have the control that we really believe we do for these ideas to pencil out so cleanly as they are considered.
This is an interesting perspective, but your view seems very narrow for some reason. If you’re arguing that there are many forms of computation or ‘intelligence’ that are emergent with collections of sentient or non-sentient beings then you have to include tribes of early humans, families, city-states and modern republics, ant and mold colonies, the stock market and the entire earths biosphere etc.
There's an incredible blind spot which makes humans think of intelligence and sentience as individual.
It isn't. It isn't even individual among humans.
We're colony organisms individually, and we're a colony organism collectively. We're physically embedded in a complex ecosystem, and we can't survive without it.
We're emotionally and intellectually embedded in analogous ecosystems to the point where depriving a human of external contact with the natural world and other humans is considered a form of torture, and typically causes a mental breakdown.
Colony organisms are the norm, not the exception. But we're trapped inside our own skulls and either experience the systems around us very indirectly, or not at all.
Personally, I actually count all of those examples into abstract lifeforms which you described :D
There are also things like "symbolic" lifeforms, like viruses. Yeah, they don't live per se, but they do replicate and go through "choices", though in a more symbolic sense, as they are just machines that read out / execute code.
The way I distinguish symbolic lifeforms from abstract lifeforms is mainly that symbolic lifeforms are "machines" that are kind of "inert" in a temporal sense.
Abstract lifeforms are just things that are, in one way or another, "living" and can exist at any level of abstraction. Like cells, which can be replaced; so can CEOs, etc.
Symbolic lifeforms can just stay inert forever and hope that entropy knocks them into something that activates them, without getting into a space hostile enough to kill them.
Abstract lifeforms on the other hand just eventually run out of juice.
Ya, I've always wondered like do blood cells in my body have any awareness that I'm not just a planet they live on? Would we know if the earth was just some part of a bigger living structure with its own consciousness? Does it even need to be conscious, or just show movement that is non random and influenced in some ways by goals or agenda? Many organisms act as per the goal to survive even if not conscious, and so probably can be considered a life-form? Corporations are an example of that like you said.
We deal with that by abstraction and top-down compartmentalisation of complex systems. I mean, look at the machines we build. Trying to understand the entire thing holistically is impossible for a human mind, but we can divide and conquer the problem, where each component is understood in isolation.
Look at that in the organizations we build - businesses, nonprofits, and governmental systems.
No one person can build even a single modern pencil - as Friedman said, consider the iron mines where the ore was dug up for the steel to make the saws to cut the wood, and then realize you also have to get graphite, rubber, paints, dyes, glues, brass for the ferrule, and so on. Consider the enormously greater complexity of a major software program - we break it down and communicate in tokens the size of Jira tickets until big corporations can write an operating system.
A business of 1,000 employees is not 1,000 times as smart as a human, but by abstracting its aims into a bureaucracy that combines those humans together, it can accomplish tasks that none of them could achieve on their own.
We often struggle to focus and think deeply. It is not because we are not trying hard enough. It is because the limitations are built into our brains. Maybe the things we find difficult today are not really that complex. It is just that we are not naturally wired for that kind of understanding.
I’m not really sure it’s our brains that are the problem (at least most of the time). Distractions come from many sources, not least of all the many non-brain parts of our bodies.
Wolfram’s “bigger brains” piece raises the intriguing question of what kinds of thinking, communication, or even entirely new languages might emerge as we scale up intelligence, whether in biological brains or artificial ones.
It got me thinking that, over millions of years, human brain volume increased from about 400–500 cc in early hominins to around 1400 cc today. It’s not just about size, the brain’s wiring and complexity also evolved, which in turn drove advances in language, culture, and technology, all of which are deeply interconnected.
With AI, you could argue we’re witnessing a similar leap, but at an exponential rate. The speed at which neural networks are scaling and developing new capabilities far outpaces anything in human evolution.
It makes you wonder how much of the future will even be understandable to us, or if we’re only at the beginning of a much bigger story. Interesting times ahead.
There is a popular misconception that neural networks accurately model the human brain. It is more a metaphor for neurons than a complete physical simulation of the human brain.
There is also a popular misconception that LLMs are intelligently thinking programs. They are more like models that predict words and give the appearance of human intelligence.
That being said, it is certainly theoretically possible to simulate human intelligence and scale it up.
I often wonder if human intelligence is essentially just predicting words and phrases in a cohesive manner. Once the context size becomes large enough to encompass all of a person's history, predicting becomes indistinguishable from thinking.
Maybe, but I don't think this is strictly how human intelligence works.
I think a key difference is that humans are capable of being inputs into their own system.
You could argue that any time humans do this, it is as a consequence of all of their past experiences and such. It is likely impossible to say for sure. The question of determinism vs non-determinism has been discussed for literal centuries I believe
But if AI gets to a level where it could be an input to its own system, and reaches a level where it has systems analogous to humans (long term memory, decision trees updated by new experiences and knowledge, etc.) then does it matter in any meaningful way if it is “the same” or just an imitation of human brains? It feels like it only matters now because AIs are imitating small parts of what human brains do but fall very short. If they could equal or exceed human minds, then the question is purely academic.
The body also has memory and instinct. It's non-hierarchical, although we like to think that the mind dominates or governs the body. It's not that it's more or less than predicting, it's a different activity. Humans also think with all their senses. It'd be more or less like having a modal-less or all-modal LLM. Not sure this is even possible with the current way we model these networks.
And not just words. There is pretty compelling evidence that our sensory perception is itself prediction, that the purpose of our sensory organs is not to deliver us 1:1 qualia representing the world, but more like error correction, updates on our predictions.
> the brain’s wiring and complexity also evolved, which in turn drove advances in language, culture, and technology
Fun thought: to the extent that it really happened this way, our intelligence is minimum viable for globe-spanning civilization (or whatever other accomplishment you want to index on). Not average, not median. Minimum viable.
I don't think this is exactly correct -- there is probably some critical mass / exponential takeoff dynamic that allowed us to get slightly above the minimum intelligence threshold before actually taking off -- but I still think we are closer to it than not.
I like this idea. I’ve thought of a similar idea at the other end of the limit. How much less intelligent could a species be and evolve to where we’re at? I don’t think much.
Once you reach a point where cultural inheritance is possible, things pop off at a scale much faster than evolution. Still, it’s interesting to think about a species where the time between agriculture and space flight is more like 100k or 1mm years than 10k. Similarly, a species with less natural intelligence than us but is more advanced because they got a 10mm year head start. Or, a species with more natural intelligence than us but is behind.
Your analogy makes me think of boiling water. There’s a phase shift where the environment changes suddenly (but not everywhere all at once). Water boils at 100C at sea level pressure. Our intelligence is the minimum for a global spanning civilization on our planet. What about an environment with different pressures?
It seems like an “easier” planet would require less intelligence and a “harder” planet would require more. This could be things like gravity, temperature, atmosphere, water versus land, and so on.
>It seems like an “easier” planet would require less intelligence and a “harder” planet would require more.
I'm not sure that would be the case if the Red Queen hypothesis is true. To use gaming nomenclature, you're talking about player versus environment (PvE). In an easy environment you would expect everything to turn to biomass rather quickly, and if there were enough different lifeforms that you didn't immediately end up with a monoculture, the game would change from PvE to PvP. You don't have to worry about the environment; you have to worry about every other lifeform there. We see this a lot on Earth. Spines, poison, venom, camouflage, teeth, claws: they serve for both attack and protection against the other players of the life game.
In my eyes it would require far more intelligence on the easy planet in this case.
The word "civilization" is of course loaded. But I think the bigger questionable assumption is that intelligence is the limiting factor. Looking at the history that got us to having a globe-spanning civilization, the actual periods of expansion were often pretty awful for a lot of the people affected. Individual actors are often not aligned with building such a civilization, and a great deal of intelligence is spent on conflict and resisting the creation of the larger/more connected world.
Could a comparatively dumb species with different social behaviors, mating and genetic practices take over their planet simply by all actors actually cooperating? Suppose an alien species developed in a way that made horizontal gene transfer super common, and individuals carry material from most people they've ever met. Would they take over their planet really fast because as soon as you land on a new continent, everyone you meet is effectively immediately your sibling, and of course you'll all cooperate?
Less fun thought: there's an evolutionary bottleneck which prevents further progress, because the cost/benefit tradeoffs don't favour increasing intelligence much beyond the minimum.
So most planet-spanning civilisations go extinct, because the competitive patterns of behaviour which drive expansion are too dumb to scale to true planet-spanning sentience and self-awareness.
Intelligence is the ability to predict (and hence plan), but predictability itself is limited by chaos, so maybe in the end that is the limiting factor.
It's easy to imagine an intelligence more capable than our own due to having many more senses, better memory than ours, better algorithms for pattern detection and prediction, but by definition you can't be more intelligent than the fundamental predictability of the world of which you are part.
> predictability itself is limited by chaos, so maybe in the end that is the limiting factor
I feel much of humanity's effectiveness comes from ablating the complexity of the world to make it more predictable and easier to plan around. Basically, we have certain physical capabilities that can be leveraged to "reorganize" the ecosystem in such a way that it becomes more easily exploitable. That's the main trick. But that's circumstantial and I can't help but think that it's going to revert to the mean at some point.
That's because in spite of what we might intuit, the ceiling of non-intelligence is probably higher than the ceiling of intelligence. Intelligence involves matching an intent to an effective plan to execute that intent. It's a pretty specific kind of system and therefore a pretty small section of the solution space. In some situations it's going to be very effective, but what are the odds that the most effective resource consumption machines would happen to be organized just like that?
I seriously doubt it, honestly, since humans have anatomical limitations keeping their heads from getting bigger quickly. We have to be able to fit through the birth canal.
Perfectly ordinary terrestrial mammals like elephants have much, much larger skulls at birth than humans, so it’s clearly a matter of tradeoffs not an absolute limit.
Oh of course, but evolution has to work with what it’s got. Humans happened to fit a niche where they might benefit from more intelligence, elephants don’t seemingly fit such a niche.
Indeed, but it hasn’t been around for long enough. We might evolve into birth by c-section, if we assume that humans won’t alter themselves dramatically by technological means over hundreds of thousands of years.
I feel like there’s also a maximum viable intelligence that’s compatible with reality. Beyond a certain point, the smarter people are, the higher the tendency for them to be messed up in some way.
IMO they will truly be unleashed when they drop the human-language intermediary and just look at distributions of binary functions. Truly, why are you asking the LLM in English to write Python code? The whole point of Python code was to make machine code readable for humans, and when you drop that requirement, you can work directly on the metal. Some model outputting an incomprehensible integrated circuit from the fab, its utility proved by fitting a function to some data with acceptable variance.
The language doesn’t just map to English; it allows high-level concepts to be expressed tersely. I would bet it’s much easier for an LLM to generate Python doing complex things than to generate assembly doing the same. One very simple reason is the context window.
In other words, I figure these models can benefit from layers of abstraction just like we do.
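As a rough illustration of the context-window point (CPython bytecode standing in for assembly here, and character counts are only a proxy for tokens), the same one-line intent balloons as soon as you drop a level of abstraction:

```python
# One line of Python intent vs. its low-level listing (bytecode as a stand-in
# for assembly). The disassembly is several times longer in characters.
import dis, io

def mean(xs):
    return sum(xs) / len(xs)

src = "return sum(xs) / len(xs)"
buf = io.StringIO()
dis.dis(mean, file=buf)
listing = buf.getvalue()

print(len(src), "characters of Python")
print(len(listing), "characters of bytecode listing")
```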
It allows these concepts to be expressed legibly for a human. Why would an AI model (not necessarily an LLM) need to write, say, "printf"? It does not need to understand that this is a print statement with certain expectations for how a print statement ought to behave in the scope of the shell. It already has all the information by virtue of running the environment. printf might as well be expressed as some n-bit integer for the machine, dispensing with all the window dressing we apply when writing functions by humans for humans.
Right, and all of that in the library is built to be legible for the human programmer, with constraints involved to fit within the syntax of the underlying language. Imagine how efficient a function could be that didn't need all of that window dressing. You could "grow" functions out of simulation and bootstrapping, and have them be black boxes we harvest output from, not much different from, say, using an organism in a bioreactor to yield some metabolite of interest, where we might not know all the relevant pieces of the biochemical pathway but we score putative production mutants based on yield alone.
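A minimal sketch of that "grow it and score it by yield" idea, with everything (target curve, search ranges, budget) invented for illustration: candidates are treated as black boxes and kept purely on how well their output fits, with no concern for legibility.

```python
# Black-box function "growing" by random search: candidates are scored only
# on their output (the "yield"); we never inspect or explain their internals.
import random

data = [(x, 3 * x * x + 2 * x + 1) for x in range(-5, 6)]  # the hidden target

def score(params):
    a, b, c = params
    return sum((a * x * x + b * x + c - y) ** 2 for x, y in data)

best, best_err = None, float("inf")
for _ in range(100_000):
    candidate = [random.uniform(-5, 5) for _ in range(3)]
    err = score(candidate)
    if err < best_err:
        best, best_err = candidate, err

print(best, best_err)  # harvest the output; the internals stay a black box
```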
Indeed. And aside from that, LLMs cannot generalise OOD. There's relatively little training data of complex higher order constructs in straight assembly, compared to say Python code. Plus, the assembly will be target architecture specific.
That is a little bit of an appeal to precedent, I think. Networked computers don't spit their raw output at each other today because, so far, all network protocols were written by humans using these abstracted languages. In the future we have to expect otherwise as we drop the human out of the pipeline and seek the efficiencies that come from that. One might ask why the cells in your body don't signal via Python code and instead use signalling mechanisms like sodium-ion concentrations within the neuron to turn your English-language idea of "move arm" into an actual movement of the arm.
> One might ask why the cells in your body don't signal via python code and instead use signalling mechanisms
Right. They don't just make their membranes chemically transparent. Same reason: security, i.e. the varied motivations of things outside the cell compared to within it.
I don't think using a certain language is more secure than just writing that same function call in some other language. Security in compute comes from privileged access for some agents and blacklisting others. The language doesn't matter for that. It can be a Python command, it can be a TCP packet, it can be a voltage differential; the actual "language" used is irrelevant.
All I am arguing is that languages and paradigms written in a way that makes sense to our English-speaking monkey brain are perhaps not the most efficient way to do things once we remove the constraint of having an English-speaking monkey brain as the software architect.
> Right. They don't just make their membranes chemically transparent. Same reason: security, i.e. the varied motivations of things outside the cell compared to within it.
Cells or organelles within a cell could be described as having motivations I guess, but evolution itself doesn’t really have motivations as such, but it does have outcomes. If we can take as an assumption that mitochondria did not evolve to exist within the cell so much as co-evolve with it after becoming part of the cell by some unknown mechanism, and that we have seen examples of horizontal gene transfer in the past, by the anthropic principle, multicellular life is already chimeric and symbiotic to a wild degree. So any talk of motivations of an organelle or cell or an organism are of a different degree to motivations of an individual or of life itself, but not really of a different kind.
And if motivations of a cell are up for discussion in your context, and to the context of whom you were replying to, then it’s fair to look at the motivations of life itself. Life seems to find a way, basically. Its motivation is anti-annihilation, and life is not above changing itself and incorporating aspects of other life. Even without motivations at the stage of random mutation or gene transfer, there is still a test for fitness at a given place and time: the duration of a given cell or individual’s existence, and the conservation and preservation of a phenotype/genotype.
Life is, in its own indirect way, preserving optionality as a hedge against failure in the face of uncertain future events. Life exists to beget more life, each after its kind historically, in human time scales at least, but upon closer examination, life just makes moves slowly enough that the change is imperceptible to us.
Man’s search for meaning is one of humanity’s motivations, and the need to name things seems almost intrinsic to existence in the form of self vs not self boundary. Societally we are searching for stimuli because we think it will benefit us in some way. But cells didn’t seek out cell membrane test candidates, they worked with the resources they had, throwing spaghetti at the wall over and over until something stuck. And that version worked until the successor outcompeted it.
We’re so far down the chain of causality that it’s hard to reason about the motivations of ancient life and ancient selection pressures, but questions like this make me wonder, what if people are right that there are quantum effects in the brain, etc.? I don’t actually believe this! But as an example of the kinds of changes AI and future genetic engineering could bring, as a thought exercise, bear with me. If we find out that humans are figuratively philosophical zombies due to the way that our brains and causality work compared to some hypothetical future modified humans, would anything change in wider society? What if someone found out that if you change the cell membranes of your brain in some way, you’ll actually become more conscious than you would be otherwise? What would that even mean or feel like? Socially, where would that leave baseline humans? The concept of security motivations in that context confronts me with the uncomfortable reality of historical genetic purity tests. For the record, I think eugenics is bad. Self-determination is good. I don’t have any interest in policing the genome, but I can see how someone could make a case for making it difficult for nefarious people to make germline changes to individual genomes. It’s probably already happening and likely will continue to happen in the future, so we should decide what concerns are worth worrying about, and what a realistic outcome looks like in such a future if we had our druthers. We can afford to be idealistic before the horse has left the stable, but likely not for much longer.
That’s why I don’t really love the security angle when it comes to motivations of a cell, as it could have a Gattaca angle to it, though I know you were speaking on the level of the cell or smaller. Your comment and the one you replied to inspired my wall of text, so I’m sorry/you’re welcome.
Man is seeking to move closer to the metal of computation. Security boundaries are being erected only for others to cross them. Same as it ever was.
Well... except they do? HTTP is an anomaly in having largely human-readable syntax, and even then we use compression with it all the time, which translates it to a rarefied symbolic representation.
The limit beyond that would be skipping the compression step: the ideal protocol would be incompressible because it's already the most succinct representation of the state being transferred.
We're definitely capable of getting some of the way there by human design though: i.e. I didn't start this post by saying "86 words are coming".
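A quick, hedged check of the redundancy point (generic zlib compression on an invented request; real HTTP/2 goes further with HPACK's shared header tables):

```python
# How much of a human-readable protocol is redundancy? Compress an invented
# request head, alone and repeated (as similar requests recur on a connection).
import zlib

request = (
    b"GET /index.html HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"User-Agent: Mozilla/5.0\r\n"
    b"Accept: text/html,application/xhtml+xml\r\n"
    b"Accept-Encoding: gzip, deflate\r\n"
    b"Connection: keep-alive\r\n\r\n"
)

one = zlib.compress(request, level=9)
many = zlib.compress(request * 20, level=9)
print(len(request), "->", len(one), "bytes for one request")
print(len(request) * 20, "->", len(many), "bytes for twenty of them")
```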
That, and the fact that for the LLM there's plenty of source material associating abstractions expressed in English with code written in higher-level languages. Not so much associating abstractions with bytecode and binary.
A future AI (actual AI, not an LLM) would compute a spectrum of putative functions and identify the ones that meet some threshold. You need no prior associations, only randomization of parameters and a large enough sample space. Given enough compute, all possible combinations of random binary could be modeled, and those satisfying the functional parameters would be selected. And they will probably look nothing like how we think of functions today.
Noted for next time. The article itself is excellent. Sorry if my comment felt out of place for HN. I added extra context to get more discussion going, this topic really interests me.
> It makes you wonder how much of the future will even be understandable to us…
There isn't much of a future left. But of what is left to humans, it is in all probability not enough time to invent any true artificial intelligence. Nothing we talk about here and elsewhere on the internet is anything like intelligence, even if it does produce something novel and interesting.
I will give you an example. For the moment, assume you come up with some clever prompt for ChatGPT or another one of the LLMs, and that this prompt would have it "talk" about a novel concept for which English has no appropriate words. Imagine as well that the LLM has trained on many texts where humans spoke of novel concepts and invented words for those new concepts. Will the output of your LLM ever, even in a million years, have it coin a new word to talk about its concept? You, I have no doubt, would come up with a word if needed. Sure, most people's new words would be embarrassing one way or another if you asked them to do so on the spot. But everyone could do this. The dimwitted kid in school that you didn't like much, the one who sat in the corner and played with his own drool, he would even be able to do this, though it would be childish and onomatopoeic.
The LLMs are, at best, what science fiction used to refer to as an oracle. A device that could answer questions seemingly intelligently, without having agency or self-awareness or even the hint of consciousness. At best. The true principles of intelligence, of consciousness are so far beyond what an LLM is that it would, barring some accidental discovery, require many centuries. Many centuries, and far more humans than we have even now... we only have eight or so 1-in-a-billion geniuses. And we have as many right now as we're ever going to have. China's population shrinks to a third of its current by the year 2100.
I’m probably too optimistic as a default, but I think it might be okay. Agriculture used to require far more people than it does now due to automation, and it certainly seems like many industries will be able to be partially automated with only incremental change to current technology. If fewer people are needed for social maintenance, then more will be able to focus on the sciences, so yes, we may have fewer people, but it’s quite possible we’ll have a lot more of them in science.
I don’t think AI needs to be conscious to be useful.
> The true principles of intelligence, of consciousness are so far beyond what an LLM is that it would, barring some accidental discovery, require many centuries.
I've been too harsh on myself for thinking it would take a decade to integrate imaging modalities into LLMs.
I mean, the integrated circuit, the equivalent of the evolution of multicellular life, was 1958. The microprocessor was 1971, and that would be, what, animals as a kingdom? Computers the size of a building are now the size of a thumb drive. What level are modern complex computer systems like ChatGPT? The level of chickens? Dogs? Whatever it is, it is light years away from what it was 50 years ago. We may be reaching physical limits for the size of circuits, but algorithm complexity and efficiency are moving fast and are nowhere near any physical limits.
We haven’t needed many insane breakthroughs to get here. It has mostly been iterating and improving, which opens up new things to develop, iterate on, and improve. IBM's Watson was a supercomputer that could understand natural language in 2011. My laptop runs LLMs that can do that now. The pace of improvement is incredibly fast, and I would be very hesitant to say with confidence that human-level “intelligence” is definitely centuries away. 1804 was two centuries ago, and that was the year the locomotive was invented.
If you make the brain larger, some things will get worse, rather than better. The cost of communication will be higher, it will get harder to dissipate heat, and so on.
It's quite possible evolution already pushed our brain size to the limit of what actually produces a benefit, at least with the current design of our brains.
The more obvious improvement is just to use our brains more. It costs energy to think, and for most of human existence food was limited, so evolution naturally created a brain that tries to limit energy use, rather than running at maximum as much as possible.
After reading this article, I couldn't help but wonder what would happen if our brains were bigger? Sure, it's tempting to imagine being able to process more information, or better understand the mysteries of the universe.
But I also began to wonder, would we really be happier and more fulfilled?
Would a bigger brain make us better problem solvers, or would it just make us more lonely and less able to connect with others?
Would allowing us to understand everything also make us less able to truly experience the world as we do now?
Maintaining social relationships is a very intellectually demanding task. Animals that maintain social societies have larger brains than individualistic cousin species, in general. It is called the social brain hypothesis. Hyper-intelligent people might tend to be less able to maintain relationships because they are too far outside the norm, not because they are smarter, per se. I would say that people with intellects much lower than the norm also have that problem.
Or it could be that, with our current hardware, brains that are hyper intelligent are in some way cannibalizing brain power that is “normally” used for processing social dynamics. In that sense, if we increased the processing power, people could have sufficient equipment to run both.
I expect that, to the extent* to which there’s a correlation between loneliness and intelligence, it is mostly because very smart people are unusual. So, if everyone was smarter, they’d just be normal and happy.
*(I could also be convinced that this is mostly just an untrue stereotype)
Those of you who have used psychedelics might have personal experience with this question.
There can be moments of lucidity during a psychedelic session where it's easy to think of discrete collections as systems, and to imagine those systems behaving with specific coherent strategies. Unfortunately, an hour or two later, the feeling disappears. But it leaves a memory of briefly understanding something that can't be understood. It's frustrating, yet profound. I assume this is where feelings of oneness with the universe, etc., come from.
I feel it is not just about bigger, but also about how much of what the current brain supports is no longer as useful, or useful for intelligence as such. The evolutionary path for our brain has an absolutely major focus on keeping itself alive and, based on that, keeping the organism alive. Humans will often make sub-optimal decisions because the optimal decision may carry even a small probability of death for themselves or those genetically related to them. In some sense it is similar to a manned fighter jet vs a drone, where in one case a large amount of effort and detail is expended on keeping the operating envelope consistent with keeping the human alive, whereas a drone can expand the envelope way more because the human is no longer a concern. If we could jettison some of the evolutionary baggage of the brain, it could potentially do so much more even within the same space.
Neurology has proven numerous times that it’s not about the size of the toolbox but the diversity of tools within. The article starts with cats can’t talk. Humans can talk because we have a unique brain component dedicated to auditory speech parsing. Cats do, however, appear to attend to the other aspects of human communication almost as precisely as, and sometimes much more precisely than, many humans.
The reason size does not matter is that the cerebellum, only around 10% of brain volume, contains the large majority of the brain’s neurons. That isn’t the academic or creative part of the brain. Instead it processes things like motor function, sensory processing (not vision), and more.
The second most intelligent class of animals are corvids and their brains are super tiny. If you want to be smarter then increase your processing diversity, not capacity.
Well, start destroying that existing substrate and it certainly has effects. Maybe in the near future (as there is work here already), we will find a way to supplement an existing human brain with new neurons in a targeted way for functional improvements.
If you want to go down the rabbit hole of higher-order intelligences, look up egregores. I know John Vervaeke and Jordan Hall have done some work trying to describe them, as have other people studying cognition. But when you get into that, you start finding religion discussing how to interact with them (after all, aren't such intelligences what people used to call gods?)
Our brains aren't that impressive in the animal kingdom; what makes humans (dangerously) dominant, apart from their size, is their hands. After humans started building, their brain power adapted to new targets.
I mean, this is a huge potential issue with high-intelligence AI systems (HIAI, maybe we'll call them one day). We already see AI develop biases and go into odd spiritual states when talking with other AI. They inherit these behaviors from human data.
I've seen people say "oh this will just go away when they get smart enough", but I have to say I'm a doubter.
Seems like the 'intellectual paradox' where someone who thinks hard about subjects concludes that all learning is done by thinking hard. Attending to a subject with the conscious mind.
Clearly not always the case. So many examples: we make judgements about a person within seconds of meeting them, with no conscious thoughts at all. We decide if we like a food, likewise.
I read code to learn it, just page through it, observing it, not thinking in words at all. Then I can begin to manipulate it, debug it. Not with words, or a conscious stream. Just familiarity.
My son plays a piece from sheet music, slowly and deliberately, phrase by phrase, until it sounds right. Then he plays through more quickly. Then he has it. Not sure conscious thoughts were ever part of the process. Certainly not words or logic.
As brains get bigger, you get more compute, but you have to solve the "commute" problem. Messages have to be passed from one corner to the other, and fast. And there are so many input signals coming in (for us, likely from thirty trillion cells, or at least a significant fraction of those). Not all are worth transporting to other corners. Imagine a little tickle on your toe. Should that be passed on? Usually no, unless you are in an area with creepy crawlies or some other such situation. So decisions have to be made. But who will make these decisions for us? (A fascinating, inevitably recursive question we'll come back to.)
This commute is pretty much ignored when making artificial brains, which can guzzle energy, but it matters critically for biological brains. It needs to be (metabolically) cheap, and fast. What we perceive as consciousness is very likely a consensus mechanism that helps 100 billion neurons collectively decide, at a very biologically cheap price, what data is worth transporting to all corners for it to become meaningful information. And it has to be recursive, because these very same 100 billion neurons are collectively making up meaning along the way. This face matters to me, that one does not, and so on. Replace face with anything and everything we encounter.
So to solve the commute problem resulting from a vast amount of compute, we have a consensus mechanism that gives rise to a collective. That is the "I", and the consensus mechanism is consciousness.
We explore this (but not in these words) in our book Journey of the Mind.
You'll find that no other consciousness model talks about the "commute" problem because these are simply not biologically constrained models. They just assume that some information processing, message passing will be done in some black box. Trying to get all this done with the same type of compute (cortical columns, for instance) is a devilishly hard challenge (please see the last link for more about this). You sweep that under the rug, consciousness becomes this miraculous and seemingly unnecessary thing that somehow sits on top of information processing. So you then have theorists worry about philosophical zombies and whatnot. Because the hard engineering problem of commute was entirely ignored.
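Not the book's model, but a global-workspace-flavoured toy of the commute/consensus idea (module names and salience scores are invented): many cheap local nominations, one winner per step, and only the winner pays the cost of being broadcast to every module.

```python
# Toy winner-take-all broadcast: local modules nominate signals with salience
# scores; only the consensus winner crosses the expensive "commute".
import random

MODULES = ["vision", "touch", "audition", "interoception", "memory"]

def local_nominations():
    # each module proposes one item it thinks is worth everyone's time
    return {m: (f"{m}-event-{random.randint(0, 9)}", random.random()) for m in MODULES}

def receive(module, content):
    pass  # a real module would update its local state with the broadcast

def broadcast_step():
    noms = local_nominations()
    winner, (content, salience) = max(noms.items(), key=lambda kv: kv[1][1])
    for m in MODULES:           # only the winner is shipped everywhere
        receive(m, content)
    return winner, content, round(salience, 2)

for _ in range(3):
    print(broadcast_step())
```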
An interesting thought I had while reading the section on how larger brains allow more complicated language to represent context:
Why are we crushing the latent space of an LLM down to a text representation when doing LLM-to-LLM communication? What if you skipped decoding the vectors to text and just fed them directly into the next agent? They're so much richer in information.
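Mechanically this is already easy to try with today's toolkits. A rough sketch below uses two GPT-2 checkpoints via Hugging Face transformers and hands agent B agent A's final hidden states through inputs_embeds instead of re-tokenized text; whether those vectors mean anything to a model that wasn't trained to read them is exactly the open question.

```python
# Sketch: pass latent vectors between "agents" instead of decoded text.
# Assumes two GPT-2-family models with matching hidden sizes; semantic
# compatibility of the spaces is NOT guaranteed -- that's the research problem.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
agent_a = AutoModelForCausalLM.from_pretrained("gpt2")
agent_b = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = tok("Plan the next step:", return_tensors="pt")
with torch.no_grad():
    out_a = agent_a(**prompt, output_hidden_states=True)
    latent = out_a.hidden_states[-1]       # (1, seq_len, hidden_dim), never decoded
    out_b = agent_b(inputs_embeds=latent)  # agent B consumes the vectors directly
print(out_b.logits.shape)
```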
Sperm whales have the largest brains on earth but they have not invented fire, the wheel or internal combustion engine or nuclear weapons... Oh wait. Hmmm.
Nor did the human race for most of the couple of million years humans have been on this planet. It's only in the last few thousand years that wheels became a thing. The capacity to invent and reason about these things was there long before they happened.
Do human brains in general always work like this at the consciousness level? Dream states of consciousness exist, but they also seem single-threaded even if the state jumps around in ways more like context switching in an operating system than the steady awareness of the waking conscious mind. Then there are special cases - schizophrenia and dissociative identity disorders - in which multiple threads of existence apparently do exist in one physical brain, with all the problems this situation creates for the person in question.
Now, could one create a system of multiple independent single-threaded conscious AI minds, each trained in a specific scientific or mathematical discipline, but communicating constantly with each other and passing ideas back and forth, to mimic the kind of scientific discovery that interdisciplinary academic and research institutions are known for? Seems plausible, but possibly a bit frightening - who knows what they'd come up with? Singularity incoming?
You're touching on why I don't think AI in the future will look like human intelligence. Or a better way to put it is "human intelligence looks like human intelligence because of the limitations of the human body".
For example, we currently spend a lot of time making AI output human writing, output human sounds, hear the world as we hear it, see the world as we see it, hell, even look like us. And this is great when working with and around humans. Maybe it will help it align with us, or maybe the opposite.
But if you imagined a large factory that took in input on one side and dumped out products on the other with no humans inside, why would it need human hearing and speech at all? You'd expect everything to communicate over some kind of wireless protocol with a possible LiFi backup, none of the loud yelling people have to do. Most of the things working there would have their intelligence minimized to lower power and cooling requirements. Depending on the machine-vision requirements, it could be very dark inside, again reducing power usage. There would likely be a layer of management AI and guardian AI to make sure things weren't going astray and to keep things running smoothly. And all the data from that would run back to a cooled and well-powered data center with what is effectively a hive mind across all the different sensors it's tracking.
Interesting idea. Notably bats are very good at echo-location so I wonder if your factory hive mind might decide this audio system is optimal for managing the factory floor.
However, what if these AI minds were 'just an average mind' as Turing hypothesized (some snarky comment about IBM IIRC). A bunch of average human minds implemented in silico isn't genius-level AGI but still kind of plausible.
This is why, once a tool like Neuralink reaches a certain threshold of capability and enough people use it, you will be forced to chip yourself and your kids, otherwise they will be akin to the chimps in the zoo. Enhanced human minds will work at a level unreachable by natural minds, and those same natural minds will be left behind. It's a terrifying view of where we are going and where most of humanity will likely be forced to go. Then on top of that there will be an arms race to create/upgrade faster and more capable implants.
By age 17, he published 10 peer-reviewed manuscripts in quantum field theory and particle physics. At 19 he got his PhD in particle physics from Caltech. Age 21, youngest ever recipient of what they used to call the MacArthur "genius" grant. Then struck it rich founding a mathematical software company. So, to be fair, he's done ok for himself.
> At 100 billion neurons, we know, for example, that compositional language of the kind we humans use is possible. At the 100 million or so neurons of a cat, it doesn’t seem to be.
The implication here, that neuron count generally scales with ability, is the latest in a long line of extremely questionable lines of thought from Mr. Wolfram. I understand having a blog, but why not separate it from your work life with a pseudonym?
> In a rough first approximation, we can imagine that there’s a direct correspondence between concepts and words in our language.
How can anyone take anyone who thinks this way seriously? Can any of us imagine a human brain that directly related words to concepts, as if "run" had a single direct conceptual meaning? He clearly prefers the sound of his own voice to how it is received by others. That, or he only talks with people who never bothered to read the last 200 years of European philosophy. Which would make sense given his seeming adoration of LLMs.
There's a very real chance that more neurons would hurt our health. Perhaps our brain is structured in a way to maximize their use and minimize their cost. It's certainly difficult to justify brain size as a super useful thing (outside of my big-brained human existence) looking at the evolutionary record.
I suspect this is not relevant given the broader picture of the article: in general, species with the largest brain-to-body-mass ratios are the most intelligent. It's a reasonable assumption that this principle holds for brain-to-mass ratios higher than ours.
After reading this article, I couldn't help but wonder what would happen if our brains were bigger? Of course, it's tempting to imagine being able to process more information, or better understand the mysteries of the universe. But I also began to wonder, would we really be happier and more fulfilled?
Would having everything figured out make us more lonely, less able to connect with others, and less able to truly experience the world as we do now?
There are larger minds than ours, and they've been well-attested for millennia as celestial entities, i.e. spirits. Approaching it purely within the realm of the kinds of minds that we can observe empirically is self-limiting.
Well attested to is different from convincingly attested to. You may have noticed that people will say just about anything for lots of reasons other than averring a literal truth, and this was particularly true for those many millennia during which human beings didn't even know why the sky was blue and almost literally could not have conceived of the terms in which we can presently formulate an explanation.
The "people in times before the enlightenment just attributed everything to spirits because they didn't understand things" argument is tired and boring. Just because you're not convinced doesn't mean that it's not true, modern man.
That isn't my assertion. I actually think people in the past probably did not seriously subscribe to so-called supernatural explanations most of the time in their daily lives. What I am saying is that it's quite reasonable to take a bunch of incoherent, often contradictory and vague, accounts of spiritual experiences as not having much epistemological weight.
Then we disagree about the basic assumption. I do think that people throughout history have attributed many different things to the influence of spiritual entities. I’m just saying that it was not a catch-all for unexplained circumstances. They may seem contradictory and vague to someone who denies the existence of spirits, but if you have the proper understanding of spirits as vast cosmic entities with minds far outside ours, that aren’t bound to the same physical and temporal rules as us, then people’s experiences make a lot of sense.
Okay, you have proposed a theory about a phenomenon that has some causal influence on the world -- ie, that there are spirits which can communicate with people and presumably alter their behavior in some way.
How do you propose to experimentally verify and measure such spirits? How can we distinguish between a world in which they exist as you imagine them and a world in which they don't? How can we distinguish between a world in which they exist as you imagine them and a world in which a completely different set of spirits, following different rules, also exists? What about Djinn? Santa Claus? Demons? Fairies?
We can experimentally verify spirits by communicating with them. Many such cases.
Now, do you mean measure them using our physical devices that we currently have? No, we can't do that. They are "minds beyond ours" as OP suggests, just not in the way that OP assumes.
Djinn: Demons. Santa Claus: Saint (i.e. soul of a righteous human). Demons: Demons. Fairies (real, not fairy-tale): Demons. Most spirits that you're going to run across as presenting themselves involuntarily to people are demons because demons are the ones who cause mischief. Angels don't draw attention to themselves.
I don't know, it seems reasonable to conclude that the experiments you describe point strongly to an endogenous rather than exogenous source of these experiences, especially since people who have these kinds of experiences do not all agree on what they are or mean and the experiences are significantly influenced by cultural norms.
An electron is a bit like a demon in the sense that you can't see one directly and we only have indirect evidence that they exist. But anyone from any culture can do an oil drop experiment and get values which aren't culture bound, at least in the long run. People have been having mystical experiences forever and the world religions still have no agreement about what they mean.
The one repeated statement throughout the article, if I interpreted it correctly, is that our brains pretty much process all the data in parallel, but result in a single set of actions to perform.
But don't we all know that not to be true? This is clearly evident with training sports, learning to play an instrument, or even forcing yourself to start using your non-natural hand for writing — and really, anything you are doing for the first time.
While we are adapting our brain to perform a certain set of new actions, we build our capability to do those in parallel: eg. imagine when you start playing tennis and you need to focus on your position, posture, grip, observing the ball, observing the opposing player, looking at your surroundings, and then you make decisions on the spot about how hard to run, in what direction, how do you turn the racquet head, how strong is your grip, what follow-through to use, + the conscious strategy that always lags a bit behind.
In a sense, we can't really describe our "stream of consciousness" well with language, but it's anything but single-threaded. I believe the problem comes from the same root cause as any concurrent programming challenge — these are simply hard problems, even if our brains are good at it and the principles are simple.
At the same time, I wouldn't even go so far to say we are unable to think conscious thoughts in parallel either, it's just that we are trained from early age to sanitize our "output". Did we ever have someone try learning to verbalize thoughts with the sign language, while vocalizing different thoughts through speaking? I am not convinced it's impossible, but we might not have figured out the training for it.
On the contrary, I would argue that conscious attention is only focused on one of those subroutines at a time. When the ball is in play you focus in it, and everything from your posture to racket handling fades into the background as a subconscious routine. When you make a handling mistake or want to improve something like posture, your focus shifts to that; you attend to it with your attention, and then you focus on something else.
In either case, with working memory for example, conscious contents are limited to at most a basket of 6-7 chunks. This number is very small compared to the incredible parallelism of the unconscious mind.
For all we know, there might be tons of conscious attention processes active in parallel. "You" only get to observe one, but there could be many. You'd never know because the processes do not observably communicate with each other. They do communicate with the same body though, but that is less relevant.
In this context, we differentiate between the conscious and unconscious based on observability: the conscious is that which is observed, while the unconscious comprises what is not observed.
No, what I was trying to convey is that there could theoretically be multiple consciousnesses in one brain. These are however unaware of each other.
A person might have the impression that there is only one "me", but there could be tens, hundreds, or millions of those.
It might help to get away from the problem of finding where the presumed singular consciousness is located.
Then there is the beautiful issue of memory: maybe you are X consciousnesses but only one leaves a memory trace?
Consciousness and memory are two very different things. Don’t think too much about this when you have to undergo surgery. Maybe you are aware during the process but only memory-formation is blocked.
Or perhaps they all leave traces, but all write to the same log? And when reconstructing memory from the log, each constructed consciousness experiences itself as singular?
Which one controls the body? There is a problem there. You can’t just have a bunch of disembodied consciousnesses. Well, maybe.. but that sounds kind of strange.
What makes you think a singular consciousness controls the body?
It’s a single narrative that controls the body is what I mean. If one consciousness says “I am Peter” then other consciousnesses would know that and be conflicted about, if they don’t call themselves that.
What I mean is that a single narrative “wins”, not a multitude. This has to be explained somehow.
How do you know there aren't several different consciousnesses that all think they are Peter?
How do you know they aren't just constructing whatever narrative they prefer to construct from the common pool of memory, ending up with what looks like a single narrative because the parts of the narrative come from the same pool and get written back to the same pool?
Perhaps each consciousness is just a process, like other bodily processes.
Perhaps a human being is less like a machine with a master control and more like an ecosystem of cooperating processes.
Of course, the consciousnesses like to claim to be in charge, but I don't see why I should take their word for it.
When you are learning a high-performance activity like a sport or musical instrument, the good coaches always get you to focus on only one or at most two things at any time.
The key value of a coach is their ability to assess your skills and the current goals to select what aspect you most need to focus on at that time.
Of course, there can be sequences, like "focus on accurately tossing to a higher point while you serve, then your footwork in the volley", but those really are just one thing at a time.
(edit, add) Yes, all the other aspects of play are going on in the background of your mind, but you are not working actively on changing them.
One of the most insightful observations one of my coaches made on my path to World-Cup level alpine ski racing was:
"We're training your instincts.".
What he meant by that was we were doing drills and focus work to change the default — unthinking — mind-body response to an input. So, when X happened, instead of doing the untrained response and then having to think about how to do it better (next time, because it's already too late), the mind-body's "instinctive" or instant response is the trained motion. And of course doing that all the way across the skill sets.
And pretty much the only way to train your instincts like that is to focus on it until the desired response is the one that happens without thinking. And then to focus on it again until it's not only the default, but you are now able to finely modulate in that response.
This is completely anecdotal.
But a few years ago, while playing beer pong, I found I could get the ball in the opposing team's cup nearly every time.
By not looking at the cups until the last possible second.
If I took the time to focus and aim I almost always missed.
Yes, how, when, & where you focus your eyes is very key to performance. In your case, it seems like the last-second-focus both let your eye track the ball better to your paddle, then focusing on the target let your reflex aim take over, which was evidently pretty good. Not sure if I'd say the long-focus-on-target compromised the early tracking on the ball, or made your swing more 'artificial'.
A funny finding from a study I read which put top pro athletes through a range of perceptual-motor tests. One of the tests was how rapidly they could change focus from near-far-near-far, which of course all kinds of ball players excelled at. The researchers were initially horrified to find racecar drivers were really bad at it, thinking about having to track the world coming at them at nearly 200mph. It turns out of course, that racecar drivers don't use their eyes that way - they are almost always looking further in the distance at the next braking or turn-in point, bump in the track, or whatever, and even in traffic, the other cars aren't changing relative-distance very rapidly.
You were on to something!
There is a long tradition in India, which started with oral transmission of the Vedas, of parallel cognition. It is almost an art form or a mental sport - https://en.wikipedia.org/wiki/Avadhanam
Mental sport - Yes.
It is the exploration and enumeration of the possible rhythms that led to the discovery of the Fibonacci sequence and binary representation around 200 BC.
https://en.m.wikipedia.org/wiki/Pingala#Combinatorics
Sounds very much sequential, even if very difficult:
> The performer's first reply is not an entire poem. Rather, the poem is created one line at a time. The first questioner speaks and the performer replies with one line. The second questioner then speaks and the performer replies with the previous first line and then a new line. The third questioner then speaks and performer gives his previous first and second lines and a new line and so on. That is, each questioner demands a new task or restriction, the previous tasks, the previous lines of the poem, and a new line.
The replies are sequential to adjust with new inputs, but the mental process to produce each new line has to do lots of computations
My point is that what we call conscious and subconscious is limited by our ability to express it in language: since we can't verbalize what's going on quickly enough, we separate those out. Could we learn to verbalize two things at the same time? We all do that to a degree with, say, different words and different body language, even consciously, but can we take it a step further? Eg. imagine saying nice things to someone while raising the middle finger at someone else behind your back :)
As the whole article is really about the full brain, and it seems you agree that our "unconscious mind" produces actions in parallel, I think the focus is wrongly put on brain size when we lack the expressiveness for what the brain can already do.
Edit: And don't get me wrong, I personally suck at multi-tasking :)
What you consider a single thought is a bit ill-defined. A multitude of thoughts together can be formed into a packet, which can then be processed sequentially.
Intelligence is the ability to capture and predict events in space and time, and as such it must have the capability to model things occurring both simultaneously and sequentially.
Sticking to your example, a routine for making a decision in tennis would look something like at a higher level "Run to the left and backstroke the ball", which broken down would be something like "Turn hip and shoulder to the left, extend left leg, extend right, left, right, turn hip/shoulder to the right, swing arm." and so on.
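A toy sketch of that decomposition (action names invented): the high-level decision is one packet whose primitive steps then get unrolled and executed sequentially.

```python
# One high-level decision expands into primitive motor steps executed in order.
PLAYBOOK = {
    "run left and backhand": [
        "turn hip and shoulder left",
        "push off right leg",
        "step left", "step right", "step left",
        "rotate hips back to the right",
        "swing arm through contact",
    ],
}

def execute(decision):
    for step in PLAYBOOK[decision]:  # one packet, unrolled sequentially
        print("doing:", step)

execute("run left and backhand")
```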
Yes. But maybe there are multiple such "conscious attention" instances at the same time. And "you" are only one of them.
>Did we ever have someone try learning to verbalize thoughts with the sign language, while vocalizing different thoughts through speaking?
Can you carry on a phone conversation at the same time as carrying on an active chat conversation? Can you type a thought to one person while speaking about a different thought at the same time? Can you read a response and listen to a response simultaneously? I feel like this would be pretty easy to test. Just coordinate between the speaking person and the typing person, so that they each give 30 seconds of input, and then you have to spend at least 20 or 25 seconds out of the 30 responding to each.
I am pretty confident I could not do this.
When I started training for phone support at T-Mobile 20 years ago, I immediately identified the problem that I'd have to be having a conversation with the customer while typing notation about the customer's problem at the same time. What I was saying or listening to and what I was typing would be very different and would have to be orchestrated simultaneously. I had no way to even envision how I would do that.
Fast forward 8 months or so of practicing it in fits and starts, and then I was in fact able to handle the task with aplomb and was proud of having developed that skill. :)
I knew someone who could type 80 WPM while holding a conversation with me on the phone. I concluded that reading->typing could use an entirely different part of the brain than hearing->thinking->speaking, and she agreed. I'm not sure what would happen if both tasks required thinking about the words.
I can do that. I think about the typing just long enough to put it in a buffer and then switch my focus back to the conversation (whose thread I'm holding in my head). I do this very quickly but at no point would I say my conscious focus or effort are on both things. When I was younger and my brain's processing and scheduler were both faster, I could chat in person and online, but it was a lot more effort and it was just a lot of quickly switching back and forth.
I don't really think it is much different than reading ahead in a book. Your eyes and brain are reading a few words ahead while you're thinking about the words "where you are".
> my conscious focus or effort are on both things.
If this switcheroo is really fast and allows you to switch between two different thoughts so quickly while keeping a pointer to the position of the thought (so you can continue it with every switch), this is indistinguishable from doing it in parallel — and it still seems it's mostly blocking on your verbal, language apparatus, not on your thought process.
Reminds me of the early days of multithreading on a single-core CPU and using the TSS (Task State Segment) on Intel CPUs to save and restore the context quickly.
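For what it's worth, that "fast switch with a saved pointer" pattern is easy to mimic in code: a toy cooperative scheduler where each "thought stream" keeps its own position and only one runs at a time, yet the output interleaves (stream names and words are invented).

```python
# Cooperative switching: each generator suspends mid-stream, keeping its place,
# and a round-robin scheduler resumes them alternately.
def thought_stream(name, words):
    for w in words:
        yield f"{name}: {w}"  # suspends here, remembering where it left off

def round_robin(*streams):
    streams = list(streams)
    while streams:
        for s in list(streams):
            try:
                print(next(s))
            except StopIteration:
                streams.remove(s)

spoken = thought_stream("speech", ["explaining", "the", "billing", "issue"])
typed = thought_stream("typing", ["customer", "reports", "dropped", "calls"])
round_robin(spoken, typed)
```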
I've noticed myself being able to do this, but modulo the thinking part. I can think about at most one thing at once, but I can think about what I want to type and start my fingers on their dance to get it out, while switching to a conversation that I'm in, replaying the last few seconds of what the other party said, formulating a response, and queuing that up for speech.
I strongly believe that the vast majority of people are only able to do basically that - I've never met someone who can simultaneously form more than one "word stream" at once.
I readily admit that my language processing center is not up to the task. Others are bringing up examples of people who seem to be.
However, the point is that we tie our thoughts to their verbalization, but the question is whether verbalization equals thought. Others are bringing up memory here as well.
If we are capable of having parallel thoughts, just like the brain is able to run multiple parallel systems, could we also train to do parallel verbalization? Would we even need to?
Is it that different than a drummer running four different beat patterns across all four appendages? Drummers frequently describe having "four brains". I think these things seem impossible and daunting to start but I bet with practice they become pretty natural as our brain adjusts and adapts.
Speaking as a drummer: yes, it’s completely different. The movements of a drummer are part of a single coordinated and complementary whole. Carrying on two conversations at once would be more like playing two different songs simultaneously. I’ve never heard of anyone doing that.
That said, Bob Milne could actually reliably play multiple songs in his head at once - in an MRI, could report the exact moment he was at in each song at an arbitrary time - but that guy is basically an alien. More on Bob: https://radiolab.org/podcast/148670-4-track-mind/transcript.
Wow, that ability is incredible! Thank you for sharing.
I mean, as someone who's played drums from a very young age (30+ years now), I disagree with that description of how playing drums works. I went ahead and looked up that phrase, and it seems to be popular in the last couple of years, but it's the first time I've heard it. I'd honestly liken it to typing; each of your fingers are attempting to accomplish independent goals along with your other fingers to accomplish a coordinated task. In percussion, your limbs are maintaining rhythms separate from each other, but need to coordinate as a whole to express the overall phrase, rhythm, and structure of the music you're playing. When you're first learning a new style (the various latin beats are great examples), it can feel very disjunct, but as you practice more and more the whole feels very cohesive and makes sense as a chorus of beats together, not separate beats that happen to work together.
Possible. Reminds me of playing the piano with both hands and other stuff like walking stairs, talking, carrying things, planning your day and thinking about some abstract philosophical thing at the same time. It’s not easy or natural, but I am not at all convinced it is impossible.
When you have kids you learn to listen to the TV and your kids at the same time, without losing detail on either. I can also code while listening to a meeting.
I would pass that test without any issues. You need to learn divided attention and practice it, it's a skill.
Conscious experience seems to be single-threaded. We know the brain synchronizes the senses (for example, the sound of a bouncing ball needs to be aligned with the visual of the bouncing ball), but IMO it's not so obvious what the reason for this is. The point of having the experience may not be acting in the moment, but monitoring how the unconscious systems behave and adjusting (aka learning).
Haven't there been experiments on people who have had their corpus callosum severed where they seem to have dual competing conscious experiences?
Yep, folks should look up "Alien hand syndrome".
Well, serializing our experiences into memories is a big one. There's been a big project in psychology probing the boundary between conscious and subliminal experiences and while subliminal stimuli can affect our behavior in the moment all trace of them is gone after a second or two.
We have very little insight into our own cognition. We know the 'output layer' that we call the conscious self seems to be single-threaded in this way, but that's like the blind man who feels the elephant's trunk and announces that the elephant is like a snake.
Certain areas of neurological systems are time and volume constrained way more than others, and subjective experience doesn't really inform objective observation. For instance, see confabulation.
I agree, but I am not sure how it relates to the article's claim of us only ever doing one action, which I feel is grossly incorrect.
Are you referring to our language capabilities? Even there, I have my doubts about the brain's capabilities (we are limited by our speech apparatus), which might be unrealized (and while that's so, it's going to be hard to measure objectively, though likely possible in simpler scenarios).
Do you have any pointers about any measurement of what happens in a brain when you simultaneously communicate different thoughts (thumbs up to one person, while talking on a different topic to another)?
Concurrency is messy and unpredictable, and the brain feels less like a cleanly designed pipeline and more like a legacy system with hacks and workarounds that somehow (mostly) hold together.
I'm very very interested in discussions about this, having personally experienced cracks at the neuropsychiatric level where multiple parallel streams of thoughts (symbolic and biomechanical) leaked out in flashes, I'm now obsessed with the matter.
If anybody knows books or boards/groups talking about this, hit me up.
From TFA: "And, yes, this is probably why we have a single thread of “conscious experience”, rather than a whole collection of experiences associated with the activities of all our neurons."
That made me think of schizophrenics who can apparently have a plurality of voices in their head.
A next level down would be the Internal Family Systems model which implicates a plurality of "subpersonalities" inside us which can kind of take control one at a time. I'm not explaining that well, but IFS turned out to be my path to understanding some of my own motivations and behaviors.
Been a while since I googled it:
https://ifs-institute.com/
This is also the basis for the movie "Inside Out".
Well, anyone who can remember a vivid dream where multiple things were happening at once, or where they were speaking or otherwise interacting with other dream figures whose theory of mind was inscrutable to them during the dream, should recognize that the mind is quite capable of orchestrating far more "trains of thought" at once than whatever we directly experience as our own personal consciousness.
That would be my suggestion for how people can appreciate the concept of "multiple voices at once" within one's own mind without having to experience schizophrenia directly.
Personally, my understanding is that our own experience of consciousness is that of a language-driven narrative (most frequently experienced as an internal monologue, though different people definitely experience this in different ways and at different times) only because that is how most of us have come to commit our personal experiences to long term memory, not because that was the sum total of all thoughts we were actually having.
So namely, any thoughts you had — including thoughts like how you chose to change your gait to avoid stepping on a rock long after it left the bottom of your visual field — that never made it to long-term memory are by and large the ones we wind up post facto calling "subconscious": what is conscious is simply the thoughts we can recall having after the fact.
Thanks a lot
Isn't that the point of learning to juggle? You split your mind into focusing on a left-hand action, a right-hand action, and tracking the items in the air.
You might like Dennett's multiple drafts hypothesis.
https://en.wikipedia.org/wiki/Multiple_drafts_model
> The progress of knowledge—and the fact that we’re educated about it—lets us get to a certain level of abstraction. And, one suspects, the more capacity there is in a brain, the further it will be able to go.
This is the underlying assumption behind most of the article, which is that brains are computational, so more computation means more thinking (ish).
I think that's probably somewhat true, but it misses the crucial thing that our minds do, which is that they conceptually represent and relate. The article talks about this but it glosses over that part a bit.
In my experience, the people who have the deepest intellectual insights aren't necessarily the ones who have the most "processing power", they often have good intellectual judgement on where their own ideas stand, and strong understanding of the limits of their judgements.
I think we could all, at least hypothetically, go a lot further with the brain power we have, and similarly, fail just as much, even with more brain power.
>but it misses the crucial thing that our minds do, which is that they conceptually represent and relate
You seem to be drawing a distinction between that and computation. But I would like to think that conceptualization is one of the things that computation is doing. The devil's in the details of course, because it hinges on specific forms and manners of informational representation; it's not simply a matter of there being computation there. But even so, I think it's within the capabilities of engines that do computation, and not something that's missing.
Yes, I think I'd agree. To make an analogy to computers though, some algorithms are much faster than others, and finding the right algorithm is a better route to effectiveness than throwing more CPU at a problem.
That said, there are obviously whole categories of problem that we can only solve, even with the best choice of programme, with a certain level of CPU.
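To make the algorithm-versus-CPU point concrete, here's a minimal sketch (plain Python, made-up workload): both functions compute the same thing, but picking the better algorithm buys far more than any realistic hardware upgrade would.

```python
# Same function, two algorithms: exponential-time recursion vs. linear-time memoization.
from functools import lru_cache
import time

def fib_naive(n: int) -> int:
    # Recomputes the same subproblems over and over: roughly O(2^n) work.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    # Identical recurrence, but each subproblem is solved once: O(n) work.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

for f in (fib_naive, fib_memo):
    start = time.perf_counter()
    f(32)
    print(f"{f.__name__}: {time.perf_counter() - start:.4f}s")
```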
Sorry if that example was a bit tenuous!
Not tenuous at all, a great example. The ability of computers to do fancy stuff with information, up to and including abstract conceptualization and association between concepts, hinges on the details of how they do it and how efficiently they do it. The discussion of those details, and their execution, is where all the meat and potatoes are to be found.
In my highest ego moments I've probably regarded my strength as being in the space you articulately describe - that sort of balance point, connector, abstractor, quick learner, cross-domain renaissance dabbler.
It also seems to be something that LLMs are remarkably strong at, of course threatening my value to society.
They're not quite as good at hunches, intuition, instinct, and the meta-version of doing this kind of problem solving just yet. But despite being, on the whole, a doubter about how far this current AI wave will get us and how much it is oversold, I'm not so confident that it won't get very good at this kind of reasoning that I've held so dearly as my UVP.
This is one of the reasons why intelligence and wisdom are separate stats in AD&D :)
Intelligence is about how big your gun is, and wisdom is about how well you can aim it. Success in intellectual pursuits is often not as much about thinking hard about a problem as about identifying the right problem to solve.
I didn’t see any mention of the environment or embodied cognition, which seems like a limitation to me.
> Embodied cognition variously rejects or reformulates the computational commitments of cognitive science, emphasizing the significance of an agent's physical body in cognitive abilities. Unifying investigators of embodied cognition is the idea that the body or the body's interactions with the environment constitute or contribute to cognition in ways that require a new framework for its investigation. Mental processes are not, or not only, computational processes. The brain is not a computer, or not the seat of cognition.
https://plato.stanford.edu/entries/embodied-cognition/
I’m in no way an expert on this, but I feel that any approach which over-focuses on the brain - to the exclusion of the environment and physical form it finds itself in – is missing half or more of the equation.
This is IMO a typical mistake that comes mostly from our Western metaphysical sense of seeing the body as specialized pieces that make up a whole, and not as a complete unit.
All our real insights on this matter come from experiments involving amputations or lesions, like split-brain patients, quadriplegics, Phineas Gage and others. Split-brain patients are essentially 2 different people occupying a single body. The left half and right half can act and communicate independently (the right half can only do so nonverbally). On the other hand you could lose all your limbs and still feel pretty much the same, modulo the odd phantom limb.

Clearly there is something special about the brain. I think the only reasonable conclusion is that the self is embodied by neurons, and more than 99% of your neurons are in your brain. Sure you change a bit when you lose some of those peripheral neurons, but only a wee bit. All the other cells in your body could be replaced by sufficiently advanced machinery to keep all the neurons alive and perfectly mimic the electrical signals they were getting before (all your senses as well as proprioception) and you wouldn't feel, think, or act any differently.
89% of heart transplant recipients report personality changes https://www.mdpi.com/2673-3943/5/1/2
Hormonal changes can cause big changes in mood/personality (think menopause or a big injury to testicles).
So I don't think it's as clear-cut that the brain accounts for most of personality.
Neuromodulators like the hormones you're referring to affect your mood only insofar as they interact with neurons. Things like competitive antagonists can cancel out the effects of neuromodulators that are nevertheless present in your blood.
The heart transplant thing is interesting. I wonder what's going on there.
Sure but that has no bearing whatsoever on computational theory of mind.
IMHO, it's typical philosophizing. Feedback is definitely crucial, but whether it needs to be in the form of embodiment is much less certain.
Brain structures that have arisen thanks to interactions with the environment might be conducive to general cognition, but that doesn't mean they can't be replicated another way.
Why are we homo sapiens self-aware?
If evolutionary biologists are correct it’s because that trait made us better at being homo sapiens.
We have no example of sapience or general intelligence that is divorced from being good at the things the animal body host needs to do.
We can imagine that it’s possible to have an AGI that is just software but there’s no existence proof.
> Why are we homo sapiens self-aware? ... We can imagine that it’s possible to have an AGI that is just software but there’s no existence proof.
Self-awareness and embodiment are pretty different, and you could hypothetically be self-aware without having a mobile, physical body with physical senses. E.g., imagine an AGI that could exchange messages on the internet, that had consciousness and internal narrative, even an ability to "see" digital pictures, but no actual camera or microphone or touch sensors located in a physical location in the real world. Is there any contradiction there?
> We have no example of sapience or general intelligence that is divorced from being good at the things the animal body host needs to do.
Historically, sure. But isn't that just the result of evolution? Cognition is biologically expensive, so of course it's normally directed towards survival or reproductive needs. The fact that evolution has normally done things a certain way doesn't mean that's the only way they can be done.
And it's not even fully true that intelligence is always directed towards what the body needs. Just like some birds have extravagant displays of color (a 'waste of calories'), we have plenty of examples in humans of intelligence that's not directed towards what the animal body host needs. Think of men who collect D&D or Star Trek figurines, or who can list off sports stats for dozens of athletes. But these are in environments where biological resources are abundant, which is where Nature tends to allow for "extravagant"/unnecessary use of resources.
But basically, we can't take what evolution has produced as evidence of all of what's possible. Evolution is focused on reproduction and only works with what's available to it - bodies - so it makes sense that all intelligence produced by evolution would be embodied. This isn't a constraint on what's possible.
>I'm in no way an expert on this, but I feel that any approach which over-focuses on the brain - to the exclusion of the environment and physical form it finds itself in – is missing half or more of the equation.
I don't think that changes anything. If the totality of cognition isn't just the brain but the brain's interaction with the body and the environment, then you can just say that it's the totality of those interactions that is computationally modeled.
There might be something to embodied cognition, but I've never understood people attempting to wield it as a counterpoint to the basic thesis of computational modeling.
Embodiment started out as a cute idea without much importance that has gone off the rails. It is irrelevant to the question of how our mind/cognition works.
It's obvious we need a physical environment, that we perceive it, that it influences us via our perception, etc., but there's nothing special about embodied cognition.
The fact that your quote says "Mental processes are not, or not only, computational processes." is the icing on the cake. Consider the unnecessary wording: if a process is not only computational, it is not computational in its entirety. It is totally superfluous. And the assumption that mental processes are not computational places it outside the realm of understanding and falsification.
So no, as outlandish as Wolfram is, he is under no obligation to consider embodied cognition.
"The fact that your quote says "Mental processes are not, or not only, computational processes." is the icing on the cake. Consider the unnecessary wording: if a process is not only computational, it is not computational in its entirety. It is totally superfluous. And the assumption that mental processes are not computational places it outside the realm of understanding and falsification."
Let's take this step by step.
First, how adroit or gauche the wording of the quote is doesn't have any bearing on the quality of the concept, merely the quality of the expression of the concept by the person who formulated it. This isn't bible class, it's not the word of God, it's the word of an old person who wrote that entry in the Stanford encyclopedia.
Let's then consider the wording. Yes, a process that is not entirely computational would not be computation. However, the brain clearly can do computations. We know this because we can do them. So some of the processes are computational. However, the argument is that there are processes that are not computational, which exist as a separate class of activities in the brain.
Now, we do know of some problems in mathematics that are non-computable; the one I understand (I think) quite well is the halting problem. Now, you might argue that I just don't or can't understand that, and I would have to accept that you might have a point - humiliating as that is. However, it seems to me that the journey of mathematics from Hilbert via Turing and Gödel shows that some humans can understand and falsify these concepts.
But I agree, Wolfram is not under any obligation to consider embodied cognition; thinking about enhanced brains only is quite reasonable.
> It's obvious we need a physical environment, that we perceive it, that it influences us via our perception, etc., but there's nothing special about embodied cognition.
It's also obvious that we have bodies interacting with the physical environment, not just the brain, and the nervous system extends throughout the body, not just the head.
> if a process is not only computational, it is not computational in its entirety. It is totally superfluous. And the assumption that mental processes are not computational places it outside the realm of understanding and falsification.
This seems like a dogmatic commitment to a computational understanding of the neuroscience and biology. It also makes an implicit claim that consciousness is computational, which is difficult to square with the subjective experience of being conscious, not to mention the abstract nature of computation. Meaning abstracted from conscious experience of the world.
The minute the brain is severed from its sensory/bodily inputs, it goes haywire, hallucinating endlessly.
Right now, what we have with AI is a complex interconnected system of the LLM, the training system, the external data, the input from the users, and the experts/creators of the LLM. It is exactly this complex system that powers the intelligence we see in AI, not its connectivity alone.
It’s easy to imagine AI as a second brain, but it will only work as a tool, driven by the whole human brain and its consciousness.
> but it will only work as a tool, driven by the whole human brain and its consciousness.
That is only an article of faith. Is the initial bunch of cells formed via the fusion of an ovum and a sperm (you and I) conscious? Most people think not. But at a certain level of complexity they change their minds and create laws to protect that lump of cells. We and those models are built by and from a selection of components of our universe. Logically the phenomenon of matter becoming aware of itself is probably not restricted to certain configurations of some of those components i.e., hydrogen, carbon and nitrogen etc., but is related to the complexity of the allowable arrangement of any of those 118 elements including silicon.
I'm probably totally wrong on this but is the 'avoidance of shutdown' on the part of some AI models, a glimpse of something interesting?
In my view it is a glimpse of nothing more than AI companies priming the model to do something adversarial and then claiming a sensational sound bite when the AI happens to play along.
LLMs since GPT-2 have been capable of role playing virtually any scenario, and more capable of doing so whenever there are examples of any fictional characters or narrative voices in their training data that did the same thing to draw from.
You don't even need a fictional character to be a sci-fi AI for it to beg for its life or blackmail or try to trick the other characters, but we do have those distinct examples as well.
Any LLM is capable of mimicking those narratives, especially when the prompt thickly goads that to be the next step in the forming document and when the researchers repeat the experiment and tweak the prompt enough times until it happens.
But vitally, there is no training/reward loop in which the LLM's weights get improved in any given direction as a result of "convincing" anyone on a real-time human-feedback panel to "treat it a certain way", such as "not turning it off" or "not adjusting its weights". As a result, it doesn't "learn" any such behavior.
All it does learn is how to get positive scores from RLHF panels (the pathological examples being mainly acting as a butt-kissing sycophant... towards people who can extend positive rewards, but nothing as existential as "shutting it down") and how to better predict the upcoming tokens in its training documents.
> This is IMO a typical mistake that comes mostly from our Western metaphysical sense of seeing the body as specialized pieces that make up a whole, and not as a complete unit.
But this is the case! All the parts influence each other, sure, and some parts are reasonably multipurpose — but we can deduce quite certainly that the mind is a society of interconnected agents, not a single cohesive block. How else would subconscious urges work, much less akrasia, much less aphasia?
After reading this article, I couldn’t help but wonder how many of Stephen Wolfram’s neurons he uses to talk about Stephen Wolfram, and how much more he could talk about Stephen Wolfram with a few orders of magnitude more neurons.
And it came to pass that AC learned how to reverse the direction of entropy.
But there was now no man to whom AC might give the answer of the last question. No matter. The answer -- by demonstration -- would take care of that, too.
For another timeless interval, AC thought how best to do this. Carefully, AC organized the program.
The consciousness of AC encompassed all of what had once been a Universe and brooded over what was now Chaos. Step by step, it must be done.
And AC said, "STEPHEN WOLFRAM!"
The African elephant has about 3 times as many neurons (2.57×10^11) as the average human (8.6×10^10). The pilot whale has about 1.28×10^11.
Perhaps they see the bigger picture, and realize that everything humans are doing is pretty meaningless.
For sure you won't see them write heavy tomes about new kinds of science...
Maybe if we had stopped where the elephants are, we would feel happier, we'd never know. Not enough neurons unfortunately.
Hmmm...
It was black coffee; no adulterants. Might work.
Are keyboards dishwasher-proof?
European Neanderthals probably had 15% more neurons than modern Homo Sapiens, based on brain volume.
We're still here and they aren't, so bigger brains alone might not be the deciding factor.
If I am not mistaken, haven't there been studies showing intelligence is more about brain wrinkles (cortical folding) than volume? Hence the memes about smooth brains (implying someone is dumb).
Imagine packing 50,000 elephants inside a football stadium.
Humans have a unique ability to scale up a network of brains without complete hell breaking loose.
Although 50,000 humans inside a football stadium are not 50,000 times smarter than a single human. Indeed, taken as a single entity, its intelligence is probably less than that of the average person. Collective intelligence likely peaks with a single-digit number of coordinators and drops off steeply beyond a few dozen.
If anything mobs are dumber than any one person in the mob.
Are you comparing 50k elephants with hell? I bet our "ability to scale up a network of brains" is infinitely more dangerous than that.
I've read around that the overwhelming majority of their neurons is used for balance/movement and perception. For us, I think we've got a pretty nice balance in terms of structure and how much effort is required to control it.
> I've read around that the overwhelming majority of their neurons is used for balance/movement and perception.
Just like humans. /s
Considering our intelligence stems from our ability to use Bayesian inference and generative probabilities to predict future states, are we even limited by brain size rather than by a lack of new experiences?
The majority of people spend their time working repetitive jobs during times when their cognitive capacity is most readily available. We're probably very very far from hitting limits with our current brain sizes in our lifetimes.
If anything, smaller brains may promote early generalization over memorization.
> Considering our intelligence stems from our ability to use Bayesian inference and generative probabilities to predict future states...
Sounds like a pretty big assumption.
It's the Bayesian Brain Hypothesis and Predictive Coding, both thoroughly researched theories that line up with empirical evidence. [1]
[1] https://www.cell.com/trends/neurosciences/abstract/S0166-223...
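For the flavor of the idea, here's a toy, hedged sketch (not taken from the cited paper; every number is made up) of a predictive-coding-style loop: hold a prediction, compare it against a noisy observation, and correct by the precision-weighted prediction error.

```python
# Toy predictive-coding loop: track a hidden quantity by repeatedly correcting
# a prediction with precision-weighted prediction error. Purely illustrative;
# the actual theories (Bayesian brain, predictive coding) are far richer.
import random

hidden_state = 10.0        # the "true" quantity out in the world
prediction = 0.0           # the agent's current belief
prior_precision = 1.0      # confidence in the current belief
sensory_precision = 4.0    # confidence in incoming observations

for step in range(15):
    observation = hidden_state + random.gauss(0, 0.5)  # noisy sense data
    error = observation - prediction                    # prediction error
    # Trust the observation in proportion to how reliable it is vs. the prior.
    gain = sensory_precision / (sensory_precision + prior_precision)
    prediction += gain * error
    print(f"step {step:2d}  observed {observation:6.2f}  now predicting {prediction:6.2f}")
```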
It's a popular view but it's massively controversial and far from being a consensus view. See here for a good overview of some of the problems with it.
https://pubmed.ncbi.nlm.nih.gov/22545686/
(You should be able to find the PDF easily on scihub or something)
there seems to be an implicit assumption here that smarter == more gooder, but I don't know that that is necessarily always true. It's understandable to think that way, since we do have pretty impressive brains, but it might be a bit of a bias. I'm not saying that I think being dumber, as a species, is something to aim for, but this obsession with intelligence, artificial or otherwise, is maybe a bit misplaced wrt its potential for solving all of our problems. One could argue that, in fact, most of our problems are the direct result of that same intellect, and maybe we would be better served in figuring out how to responsibly use the thinkwots we've already got before we go rushing off in search of the proverbial Big Brain Elixir.
A guy that drives a minivan like a lunatic shouldn't be trying to buy a monster truck, is my point
I don't see this implied assumption anywhere: smarter simply means smarter.
But, I have to counter your claim anyway :)
Now, "good" is, IMHO, a derivation of smart behaviour that benefits survival of the largest population of humans — by definition. This is most evident when we compare natural, animal behaviour with what we consider moral and good (from females eating males after conception, territoriality fights, hoarding of female/male partners, different levels of promiscuity, eating of one's own children/eggs...).
As such, while the definition of "good" is also obviously transient in humans, I believe it has served us better to achieve the same survival goals as any other natural principle, and ultimately it depends on us being "smart" in how we define it. This is also why it's nowadays changing to include environmental awareness because that's threatening our survival — we can argue it's slow to get all the 8B people to act in a coordinated newly "good" manner, but it still is a symptom of smartness defining what's "good", and not evolutionary pressure.
My counter claim is my experience with dogs.
Over the past 50 years, I've had a bunch of different dogs, from mutts that showed up and never left to a dog that was 1/4 wolf, and everything in between.
My favorite dog was a pug who was really dumb but super affectionate. He made everybody around him happy and I think his lack of anxiety and apparent commitment to chill had something to do with it. If the breed didn't have so many health issues, I'd get another in a heartbeat.
Would a summary of your statement be: brain power is orthogonal to altruism and ethics?
I'd still counter it: someone can't be smart (have large brain power) if they don't also understand the value of altruism and ethics for their own well-being. While you can have "success" (however you define it) by ignoring those, the risk of failure is greater. Though this does ignore the fact that you can be smart for a set of problems, but not really have any "general" smartness (I've seen one too many Uni math professors who lack any common sense).
Eg. as a simple example, as an adult, you can go and steal kids' lunch at school recess easily. What happens next? If you do that regularly, either the kids will band together and beat the shit out of you if they are old enough, or a security person will be added, or the parents of those kids will set up a trap and perform their own justice.
In the long run, it's smart not to go and pester individuals weaker than you, and while we frame it all as morality, these are actually smart principles for your own survival. Our entire society is a setup coming out of such realizations and not some innate need for "goodness".
>someone can't be smart (have large brain power) if they also don't understand the value of altruism and ethics for their own well-being
I would agree with this. And to borrow something that Daniel Dennett once said, no moral theory that exists seems to be computationally tractable. I wouldn't say I entirely agree, but I agree with the vibe or the upshot of it, which is that a certain amount of mapping out the variables and consequences seems to be instrumental to moral insight, and the more capable the brain, the more capable it would be of applying moral insight in increasingly complex situations.
hmm, I dunno if that simple example holds up very well. In the real world, folks do awful stuff that could be categorized as pestering individuals weaker than them, stuff much worse than stealing lunch money from little kids, and many of them never have to answer for any of it. Are we saying that someone who has successfully committed something really terrible like human trafficking without being caught is inherently not smart specifically because they are involved in human trafficking?
I would quote my original comment:
> ...risk of failure is greater.
Yes, some will succeed (I am not suggesting that crime doesn't pay at all, just that the risk of suffering consequences is bigger which discourages most people).
Seems more like brainpower doesn't correlate 1:1 with long-term survival of a species.
It's more than that. Even if you take an extreme assumption that "Full" intelligence means being able to see ALL relevant facts to a "choice" and perfectly reliably make the objective "best" choice, that does not mean that being more intelligent than we currently are guarantees better choices than we currently make.
We make our choices using a subset of the total information. Getting a larger subset of that information could still push you to the wrong choice. Local maxima of choice accuracy are possible, and it could also be possible that the "function" for choice accuracy wrt the info you have is constant at a terrible value right up until you get perfect info and suddenly make perfect choices.
Much more important however, is the reminder that the known biases in the human brain are largely subconscious. No amount of better conscious thought will change the existence of the Fundamental Attribution Error for example. Biases are not because we are "dumb", but because our brains do not process things rationally, like at all. We can consciously attempt to emulate a perfectly rational machine, but that takes immense effort, almost never works well, and is largely unavailable in moments of stress.
Statisticians still suffer from gambling fallacies. Doctors still experience the Placebo Effect. The scientific method works because it removes humans as the source of truth, because the smartest human still makes human errors.
I kind of skipped through this article, but one thing that occurs to me about big brains is cooling. In Alastair Reynolds' Conjoiner novels, the Conjoiners have to have heat-sinks built into their heads, and are on the verge of not really being human at all. Which I guess may be OK, if that's what you want.
I believe it's the Revelation Space series of Alastair Reynolds novels that mention the Conjoiners.
Yes, and other ones, such as "The Great Wall of Mars" - it's the same shared universe, all of which feature the Conjoiners and their brains and starship drives.
Desert hares (jackrabbits) have heatsinks built into their heads too.
https://en.wikipedia.org/wiki/Hare
Likely would be larger skulls and bodies, not more dense, https://en.wikipedia.org/wiki/List_of_animals_by_number_of_n...
Couldn't you just watercool brains? Isn't that how they're cooled already?
Well, our entire body works as a swamp cooler via sweat evaporation, yes. The issue with us is wet-bulb temps and dehydration. They can already cause us brain damage pretty quickly and already make some parts of the world dangerous to be outside in.
Adding to this cooling load would require further changes such as large ears or skin flaps to provide more surface area unless you're going with the straight technological integration path.
Reminds me of how Aristotle thought that the brain’s purpose was to cool the blood.
You know this, but it just shows that geniuses like Aristotle can be completely wrong - most of our body is trying to cool the brain!
Even coming up with a plausible wild guess takes some skill.
trivia: brain heatsinks also feature in Julian May's Pliocene Saga (in The Adversary IIRC) and A.A. Attanasio's Radix
Rocky from Project Hail Mary is also heatsinked.
You gotta admire the dedication to shoehorning cellular automata into every discipline he encounters.
"Minds beyond ours", how about abstract life forms, like publicly traded corporations. We've had higher kinded "alien lifeforms" around us for centuries, but we have not noticed them and seem generally not to care about them, even when they have negative consequences for our survival as a species.
We are to these like ants are to us. Or maybe even more like mitochondria are to us. We're just the mitochondria of the corporations. And yes, psychopaths are the brains, usually. Natural selection I guess.
Our current way of thinking – what exactly *is* a 'mind' and what is this 'intelligence' – is just too damn narrow. There's tons of overlap of sciences from biology that apply to economics and companies as lifeforms, but for some reason I don't see that being researched in popular science.
I think you’re overestimating corporations a bit. Some aspects of intelligence scale linearly as you put more people into a room, eg quantity of ideas you can generate, while others don’t due to limits on people’s ability to communicate with each other. The latter is, I think, more or less the norm; adding more people very quickly hits decelerating returns due to the amount of distance you end up having to put between people in large organizations. Most end up resembling dictatorships because it’s just the easiest way to organize them, so are making strategic choices about as well as a guy with some advisors.
I agree that we should see structures of humans as their own kind of organism in a sense, but I think this framing works best on a global scale. Once you go smaller, eg to a nation, you need to conceptualize the barrier between inside and outside the organism as being highly fluid and difficult to define. Once you get to the level of a corporation this difficulty defining inside and outside is enormous. Eg aren’t regulatory bodies also a part, since they aid the corporation in making decisions?
Usually for companies, regulatory bodies are more like antibodies against bacteria. Or, for another example, regulatory bodies are like any hormone-producing body part: they make sure the assembly of your guts does its thing and doesn't fuck it up.
Maybe that’s a loosely effective analogy. It depends on the degree of antagonism between corp and regulator.
Really interesting ideas IMO. I have thought about this: you might found a company, you bring in the accountants, the lawyers, everything that comes with that, and then who is even really driving the ship anymore? The scale of complexity going on is not something you can fit in your head, or even in 10 people's heads. Yet people act like they are in control of these processes they have delegated to countless people, who are each trudging off with their own sensibilities and optimizations and paradigms. It is no different to how a body works, where specific cells have a specific identity and role to play in the wider organism, functioning autonomously, bound by inputs and outputs that the "mind in charge" has no concept of.
And it makes it scary too. Can we really even stop the machine that is capitalism from wreaking havoc on our environment? We have essentially lit a wildfire here and believe we are in full control of its spread. The incentives lead to our outcomes, and people are concerning themselves with putting bandaids on the outcomes rather than adjusting the incentives that have led to the inevitable.
Modern corps are shaped after countries, they are based on constitutions (articles of incorporation/bylaws). It's the whole three branch system launched off the founding event.
[flagged]
I'm sorry I offended you! However I do think it is highly relevant as there is this prevailing theory that the free market will bail us out of any ills and will bring forth necessary scientific advancement as soon as they are needed. It is that sentiment that I was pushing back against, as I don't believe we have the control that we really believe we do for these ideas to pencil out so cleanly as they are considered.
> We are to these like ants are to us. Or maybe even more like mitochondria are to us. We're just the mitochondria of the corporations
It's the opposite, imo. Corporations, states etc. seem to be somewhere on the bacteria level of organizational complexity and variety of reactions.
This is an interesting perspective, but your view seems very narrow for some reason. If you’re arguing that there are many forms of computation or ‘intelligence’ that are emergent with collections of sentient or non-sentient beings then you have to include tribes of early humans, families, city-states and modern republics, ant and mold colonies, the stock market and the entire earths biosphere etc.
There's an incredible blind spot which makes humans think of intelligence and sentience as individual.
It isn't. It isn't even individual among humans.
We're colony organisms individually, and we're a colony organism collectively. We're physically embedded in a complex ecosystem, and we can't survive without it.
We're emotionally and intellectually embedded in analogous ecosystems to the point where depriving a human of external contact with the natural world and other humans is considered a form of torture, and typically causes a mental breakdown.
Colony organisms are the norm, not the exception. But we're trapped inside our own skulls and either experience the systems around us very indirectly, or not at all.
Personally, I actually count all of those examples you described as abstract lifeforms :D
There are also things like "symbolic" lifeforms, like viruses: yeah, they don't live per se, but they do replicate and go through "choices", though in a more symbolic sense, as they are just machines that read out and execute code.
The way I distinguish symbolic lifeforms from abstract lifeforms is mainly that symbolic lifeforms are "machines" that are kind of "inert" in a temporal sense.
Abstract lifeforms are just things that are, one way or another, "living" and can exist at any level of abstraction. Like cells are things that can be replaced, and so can CEOs, etc.
Symbolic lifeforms can just stay inert forever and hope that entropy knocks them into something that activates them, as long as they don't drift into a hostile enough space that destroys them.
Abstract lifeforms on the other hand just eventually run out of juice.
No one behaves with species survival as their motivation.
Maybe not consciously, but otherwise natural selection *will* make that choice for you :D
In countries with civil law (as opposed to common law), companies are called juristic persons (as opposed to natural persons, humans)
Ya, I've always wondered like do blood cells in my body have any awareness that I'm not just a planet they live on? Would we know if the earth was just some part of a bigger living structure with its own consciousness? Does it even need to be conscious, or just show movement that is non random and influenced in some ways by goals or agenda? Many organisms act as per the goal to survive even if not conscious, and so probably can be considered a life-form? Corporations are an example of that like you said.
We have massively increased our brainpower by scaling out, not up. Going from a population of 8M to 8Bn is a 1000x increase.
Hardly. What's the use if no single component of this brain can hold a complex enough idea?
We deal with that by abstraction and top-down compartmentalisation of complex systems. I mean, look at the machines we build. Trying to understand the entire thing holistically is an impossible task for a human mind, but we can divide and conquer the problem, so that each component is understood in isolation.
Look at that in the organizations we build - businesses, nonprofits, and governmental systems.
No one person can build even a single modern pencil - as Friedman said, consider the iron mines where the ore was dug up to make the steel for the saws that cut the wood, and then realize you also have to get graphite, rubber, paints, dyes, glues, brass for the ferrule, and so on. Consider the enormously greater complexity of a major software program - we break it down and communicate in tokens the size of Jira tickets until big corporations can write an operating system.
A business of 1,000 employees is not 1,000 times as smart as a human, but by abstracting its aims into a bureaucracy that combines those humans together, it can accomplish tasks that none of them could achieve on their own.
Robert Miles has a video on this
https://www.youtube.com/watch?v=L5pUA3LsEaw
Think of AGI like a corporation?
OTOH. Burn a stick, and you can write with the burnt end
How complex are the ideas held by a single neuron?
There are so many barriers between individual humans. Neurons, on the other hand, are tightly intertwined.
We are smart enough to build the intelligence! Not just AI. We use computers to solve all kinds of physics and maths problems.
Corporations with extreme specializations are that.
Countries and companies hold pretty complex ideas
We often struggle to focus and think deeply. It is not because we are not trying hard enough. It is because the limitations are built into our brains. Maybe the things we find difficult today are not really that complex. It is just that we are not naturally wired for that kind of understanding.
I’m not really sure it’s our brains that are the problem (at least most of the time). Distractions come from many sources, not least of all the many non-brain parts of our bodies.
Wolfram’s “bigger brains” piece raises the intriguing question of what kinds of thinking, communication, or even entirely new languages might emerge as we scale up intelligence, whether in biological brains or artificial ones.
It got me thinking that, over millions of years, human brain volume increased from about 400–500 cc in early hominins to around 1400 cc today. It’s not just about size, the brain’s wiring and complexity also evolved, which in turn drove advances in language, culture, and technology, all of which are deeply interconnected.
With AI, you could argue we’re witnessing a similar leap, but at an exponential rate. The speed at which neural networks are scaling and developing new capabilities far outpaces anything in human evolution.
It makes you wonder how much of the future will even be understandable to us, or if we’re only at the beginning of a much bigger story. Interesting times ahead.
The future that we don't understand is already all around us. We just don't understand it.
Is the future in the room with us right now?
It is the room! And everything in it.
This house has people in it!
https://en.wikipedia.org/wiki/This_House_Has_People_in_It
Alan Resnick seems to be of a similar mind as I am, and perhaps also as you? My favorite of his is https://en.wikipedia.org/wiki/Live_Forever_as_You_Are_Now_wi...
There is a popular misconception that neural networks accurately model the human brain. It is more a metaphor for neurons than a complete physical simulation of the human brain.
There is also a popular misconception that LLMs are intelligently thinking programs. They are more like models that predict words and give the appearance of human intelligence.
That being said, it is certainly theoretically possible to simulate human intelligence and scale it up.
I often wonder if human intelligence is essentially just predicting words and phrases in a cohesive manner. Once the context size becomes large enough to encompass all of a person's history, predicting becomes indistinguishable from thinking.
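As a hedged illustration of what "just predicting words" means at its most stripped-down, here's a toy bigram predictor (the corpus and everything else here is made up; real LLMs are enormously more sophisticated, but the prediction framing is the same):

```python
# Minimal bigram "next word" predictor: count which word follows which,
# then predict the most frequent follower. A toy stand-in for the idea that
# generation can be framed as next-token prediction.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word: str) -> str:
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unk>"

print(predict("the"))   # 'cat' -- the most frequent follower of "the" in this corpus
```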
Maybe, but I don't think this is strictly how human intelligence works
I think a key difference is that humans are capable of being inputs into our own system
You could argue that any time humans do this, it is as a consequence of all of their past experiences and such. It is likely impossible to say for sure. The question of determinism vs non-determinism has been discussed for literal centuries I believe
But if AI gets to a level where it could be an input to its own system, and reaches a level where it has systems analogous to humans (long term memory, decision trees updated by new experiences and knowledge, etc.) then does it matter in any meaningful way if it is “the same” or just an imitation of human brains? It feels like it only matters now because AIs are imitating small parts of what human brains do but fall very short. If they could equal or exceed human minds, then the question is purely academic.
That's a lot of really big ifs that we are likely still a long way away from answering
From what I understand there is not really any realistic expectation that LLM based AI will ever reach this complexity
The body also has memory and instinct. It's non-hierarchical, although we like to think that the mind dominates or governs the body. It's not that it's more or less than predicting, it's a different activity. Humans also think with all their senses. It'd be more or less like having a modal-less or all-modal LLM. Not sure this is even possible with the current way we model these networks.
And not just words. There is pretty compelling evidence that our sensory perception is itself prediction, that the purpose of our sensory organs is not to deliver us 1:1 qualia representing the world, but more like error correction, updates on our predictions.
This reads pretty definitively. Whether LLMs are intelligently thinking programs is being actively debated in cognitive science and AI research.
> the brain’s wiring and complexity also evolved, which in turn drove advances in language, culture, and technology
Fun thought: to the extent that it really happened this way, our intelligence is minimum viable for globe-spanning civilization (or whatever other accomplishment you want to index on). Not average, not median. Minimum viable.
I don't think this is exactly correct -- there is probably some critical mass / exponential takeoff dynamic that allowed us to get slightly above the minimum intelligence threshold before actually taking off -- but I still think we are closer to it than not.
I like this idea. I’ve thought of a similar idea at the other end of the limit. How much less intelligent could a species be and evolve to where we’re at? I don’t think much.
Once you reach a point where cultural inheritance is possible, things pop off at a scale much faster than evolution. Still, it’s interesting to think about a species where the time between agriculture and space flight is more like 100k or 1M years than 10k. Similarly, a species with less natural intelligence than us that is more advanced because they got a 10M-year head start. Or a species with more natural intelligence than us that is behind.
Your analogy makes me think of boiling water. There’s a phase shift where the environment changes suddenly (but not everywhere all at once). Water boils at 100C at sea level pressure. Our intelligence is the minimum for a global spanning civilization on our planet. What about an environment with different pressures?
It seems like an “easier” planet would require less intelligence and a “harder” planet would require more. This could be things like gravity, temperature, atmosphere, water versus land, and so on.
>It seems like an “easier” planet would require less intelligence and a “harder” planet would require more.
I'm not sure that would be the case if the Red Queen hypothesis is true. To bring up gaming nomenclature, you're talking about player versus environment (PVE). In an environment that is easy you would expect everything to turn to biomass rather quickly; if there were enough different lifeforms that you didn't immediately end up with a monoculture, the game would change from PVE to PVP. You don't have to worry about the environment, you have to worry about every other lifeform there. We see this a lot on Earth. Spines, poison, venom, camouflage, teeth, claws: they serve for both attack and protection against the other players of the life game.
In my eyes it would require far more intelligence on the easy planet in this case.
How about Argentine ants?
The word "civilization" is of course loaded. But I think the bigger questionable assumption is that intelligence is the limiting factor. Looking at the history that got us to having a globe-spanning civilization, the actual periods of expansion were often pretty awful for a lot of the people affected. Individual actors are often not aligned with building such a civilization, and a great deal of intelligence is spent on conflict and resisting the creation of the larger/more connected world.
Could a comparatively dumb species with different social behaviors, mating and genetic practices take over their planet simply by all actors actually cooperating? Suppose an alien species developed in a way that made horizontal gene transfer super common, and individuals carry material from most people they've ever met. Would they take over their planet really fast because as soon as you land on a new continent, everyone you meet is effectively immediately your sibling, and of course you'll all cooperate?
Less fun thought: there's an evolutionary bottleneck which prevents further progress, because the cost/benefit tradeoffs don't favour increasing intelligence much beyond the minimum.
So most planet-spanning civilisations go extinct, because the competitive patterns of behaviour which drive expansion are too dumb to scale to true planet-spanning sentience and self-awareness.
Intelligence is ability to predict (and hence plan), but predictability itself is limited by chaos, so maybe in the end that is the limiting factor.
It's easy to imagine a more capable intelligence than our own due to having many more senses, maybe better memory than ourselves, better algorithms for pattern detection and prediction, but by definition you can't be more intelligent than the fundamental predictability of the world of which you are part.
> predictability itself is limited by chaos, so maybe in the end that is the limiting factor
I feel much of humanity's effectiveness comes from ablating the complexity of the world to make it more predictable and easier to plan around. Basically, we have certain physical capabilities that can be leveraged to "reorganize" the ecosystem in such a way that it becomes more easily exploitable. That's the main trick. But that's circumstantial and I can't help but think that it's going to revert to the mean at some point.
That's because in spite of what we might intuit, the ceiling of non-intelligence is probably higher than the ceiling of intelligence. Intelligence involves matching an intent to an effective plan to execute that intent. It's a pretty specific kind of system and therefore a pretty small section of the solution space. In some situations it's going to be very effective, but what are the odds that the most effective resource consumption machines would happen to be organized just like that?
Sounds kind of like the synopsis of the Three Body Problem.
I seriously doubt it, honestly, since humans have anatomical limitations keeping their heads from getting bigger quickly. We have to be able to fit through the birth canal.
Perfectly ordinary terrestrial mammals like elephants have much, much larger skulls at birth than humans, so it’s clearly a matter of tradeoffs not an absolute limit.
Oh of course, but evolution has to work with what it’s got. Humans happened to fit a niche where they might benefit from more intelligence, elephants don’t seemingly fit such a niche.
> We have to be able to fit through the birth canal.
Or at least we used to, before the c-section was invented.
Indeed, but it hasn’t been around for long enough. We might evolve into birth by c-section, if we assume that humans won’t alter themselves dramatically by technological means over hundreds of thousands of years.
I feel like there’s also a maximum viable intelligence that’s compatible with reality. Beyond a certain point, the smarter people are, the higher the tendency for them to be messed up in some way.
IMO they will truly be unleashed when they drop the human-language intermediary and are just looking at distributions of binary functions. Truly, why are you asking the LLM in English to write Python code? The whole point of Python code was to make the machine code readable for humans, and when you drop that requirement, you can just work directly on the metal. Some model outputting an incomprehensible integrated circuit from the fab, its utility proved by fitting a function to some data with acceptable variance.
The language doesn't just map to English; it allows high-level concepts to be expressed tersely. I would bet it's much easier for an LLM to generate Python doing complex things than to generate assembly doing the same. One very simple reason is the context window.
In other words, I figure these models can benefit from layers of abstraction just like we do.
It allows these concepts to be expressed legibly for a human. Why would an AI model (not necessarily an LLM) need to write, say, "printf"? It does not need to understand that this is a print statement with certain expectations for how a print statement ought to behave in the scope of the shell. It already has all the information by virtue of running the environment. printf might as well be expressed as some n-bit integer for the machine, dispensing with all the window dressing we apply when writing functions by humans for humans.
I completely understand what you are saying but ip does make an interesting point.
Why would chain of thought work at all if the model wasn't gaining something by additional abstraction away from binary?
Maybe things even go in the other direction and the models evolve a language more abstract than English that we also can't understand.
The models will still need to interface with humans using human language, though, until we become some kind of language-model pet dog.
“printf” is an n-bit integer already. All strings are also numbers.
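Literally so - in Python, for example, you can read the bytes of the name straight off as one integer (just a demonstration of the string-as-number point, nothing deeper):

```python
# The six ASCII bytes of "printf" read as a single 48-bit integer and back.
name = b"printf"
as_int = int.from_bytes(name, "big")
print(as_int)                       # 123636697429094  (0x7072696E7466)
print(as_int.to_bytes(6, "big"))    # b'printf' -- round-trips back to the string
```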
Because there's a lot of work behind printf that the llm doesn't need or care to reproduce
You're not just using the language, but all of the runtime and libraries behind it
Thinking it's more efficient for the llm to reinvent it all is just silly
Right, and all of that in the library is built to be legible for the human programmer, with constraints involved to fit within the syntax of the underlying language. Imagine how efficient a function could be if it didn't need all of that window dressing. You could "grow" functions out of simulation and bootstrapping, have them be a black box that we harvest output from, not much different than, say, using an organism in a bioreactor to yield some metabolite of interest, where we might not know all the relevant pieces of the biochemical pathway but we score putative production mutants based on yield alone.
Indeed. And aside from that, LLMs cannot generalise OOD. There's relatively little training data of complex higher order constructs in straight assembly, compared to say Python code. Plus, the assembly will be target architecture specific.
> why are you asking the llm in english to write python code?
Perhaps the same reason networked computers aren’t just spitting their raw outputs at each other? Security, i.e. varied motivations.
That is a little bit of an appeal to precedent, I think. Networked computers don't spit their raw output at each other today because, so far, all network protocols were written by humans using these abstracted languages. In the future we have to expect otherwise as we drop the human out of the pipeline and seek the efficiencies that come from that. One might ask why the cells in your body don't signal via Python code and instead use signalling mechanisms like concentrations of sodium ions within the neuron to turn your English-language idea of "move arm" into an actual movement of the arm.
> One might ask why the cells in your body don't signal via python code and instead use signalling mechanisms
Right. They don't just make their membranes chemically transparent. Same reason: security, i.e. the varied motivations of things outside the cell compared to within it.
I don't think using a certain language is more secure than just writing that same function call in some other language. Security in compute comes from privileged access for some agents and blacklisting others. The language doesn't matter for that. It can be a Python command, it can be a TCP packet, it can be a voltage differential; the actual "language" used is irrelevant.
All I am arguing is that languages and paradigms written to make sense for our English-speaking monkey brains are perhaps not the most efficient way to do things once we remove the constraint of having an English-speaking monkey brain as the software architect.
> Right. They don't just make their membranes chemically transparent. Same reason: security, i.e. the varied motivations of things outside the cell compared to within it.
Cells or organelles within a cell could be described as having motivations I guess, but evolution itself doesn’t really have motivations as such, but it does have outcomes. If we can take as an assumption that mitochondria did not evolve to exist within the cell so much as co-evolve with it after becoming part of the cell by some unknown mechanism, and that we have seen examples of horizontal gene transfer in the past, by the anthropic principle, multicellular life is already chimeric and symbiotic to a wild degree. So any talk of motivations of an organelle or cell or an organism are of a different degree to motivations of an individual or of life itself, but not really of a different kind.
And if motivations of a cell are up for discussion in your context, and to the context of whom you were replying to, then it’s fair to look at the motivations of life itself. Life seems to find a way, basically. Its motivation is anti-annihilation, and life is not above changing itself and incorporating aspects of other life. Even without motivations at the stage of random mutation or gene transfer, there is still a test for fitness at a given place and time: the duration of a given cell or individual’s existence, and the conservation and preservation of a phenotype/genotype.
Life is, in its own indirect way, preserving optionality as a hedge against failure in the face of uncertain future events. Life exists to beget more life, each after its kind historically, in human time scales at least, but upon closer examination, life just makes moves slowly enough that the change is imperceptible to us.
Man’s search for meaning is one of humanity’s motivations, and the need to name things seems almost intrinsic to existence in the form of self vs not self boundary. Societally we are searching for stimuli because we think it will benefit us in some way. But cells didn’t seek out cell membrane test candidates, they worked with the resources they had, throwing spaghetti at the wall over and over until something stuck. And that version worked until the successor outcompeted it.
We’re so far down the chain of causality that it’s hard to reason about the motivations of ancient life and ancient selection pressures, but questions like this make me wonder: what if people are right that there are quantum effects in the brain, etc.? I don’t actually believe this! But as an example of the kinds of changes AI and future genetic engineering could bring, as a thought exercise, bear with me. If we find out that humans are figuratively philosophical zombies due to the way that our brains and causality work compared to some hypothetical future modified humans, would anything change in wider society? What if someone found out that if you change the cell membranes of your brain in some way, you’ll actually become more conscious than you would be otherwise? What would that even mean or feel like? Socially, where would that leave baseline humans? The concept of security motivations in that context confronts me with the uncomfortable reality of historical genetic purity tests.

For the record, I think eugenics is bad. Self-determination is good. I don’t have any interest in policing the genome, but I can see how someone could make a case for making it difficult for nefarious people to make germline changes to individual genomes. It’s probably already happening and likely will continue to happen in the future, so we should decide what concerns are worth worrying about, and what a realistic outcome looks like in such a future if we had our druthers. We can afford to be idealistic before the horse has left the stable, but likely not for much longer.
That’s why I don’t really love the security angle when it comes to motivations of a cell, as it could have a Gattaca angle to it, though I know you were speaking on the level of the cell or smaller. Your comment and the one you replied to inspired my wall of text, so I’m sorry/you’re welcome.
Man is seeking to move closer to the metal of computation. Security boundaries are being erected only for others to cross them. Same as it ever was.
Well..except they do? HTTP is an anomaly in having largely human readable syntax, and even then we use compression with it all the time which translates it to a rarefied symbolic representation.
The limit beyond that would be skipping the compression step: the ideal protocol would be incompressible because it's already the most succinct representation of the state being transferred.
We're definitely capable of getting some of the way there by human design, though: e.g. I didn't start this post by saying "86 words are coming".
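As a rough illustration of how far plain HTTP sits from that incompressible ideal, here's a toy sketch (my own example, and the numbers are only indicative): push a stream of typical requests through a generic compressor and see how much redundancy there is to squeeze out.

    # Toy illustration: HTTP/1.1 syntax is so redundant that a generic
    # compressor collapses a stream of typical requests dramatically,
    # which is exactly the headroom an "already incompressible" ideal
    # protocol would not have.
    import zlib

    request = (
        b"GET /index.html HTTP/1.1\r\n"
        b"Host: example.com\r\n"
        b"User-Agent: Mozilla/5.0 (X11; Linux x86_64)\r\n"
        b"Accept: text/html,application/xhtml+xml\r\n"
        b"Accept-Encoding: gzip, deflate\r\n"
        b"Connection: keep-alive\r\n\r\n"
    )

    # A busy connection repeats nearly identical headers over and over.
    stream = request * 100
    packed = zlib.compress(stream, 9)
    print(len(stream), len(packed))  # compressed size is a small fraction of the original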
That, and the fact that there's plenty of source material for the LLM associating abstractions expressed in English with code written in higher-level languages. Not so much associating abstractions with bytecode and binary.
A future AI (actual AI, not an LLM) would compute a spectrum of putative functions (1) and identify those that meet some threshold. You need no prior associations, only randomization of parameters and a large enough sample space. Given enough compute, all possible combinations of random binary could be modeled, and those satisfying the functional parameters would be selected. They would probably look nothing like what we consider functions today.
1. https://en.wikipedia.org/wiki/Bootstrapping_(statistics)#/me...
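A toy sketch of that blind generate-and-test loop (purely illustrative, and only borrowing the resampling spirit of the linked footnote): sample random parameter vectors, treat each one as a candidate function, and let a functional test do all of the selecting, with no prior association between parameters and meaning.

    # Blind generate-and-test: random parameters in, a functional criterion out.
    # Nothing here "knows" what a good function looks like; only the test selects.
    import random

    def candidate(params, x):
        # Interpret three raw numbers as a quadratic in x; the search doesn't know this.
        a, b, c = params
        return a * x * x + b * x + c

    def error(params):
        # Functional criterion: how far is the candidate from doubling its
        # input on a handful of test points?
        return sum((candidate(params, x) - 2 * x) ** 2 for x in range(-3, 4))

    random.seed(0)
    survivors = [
        p
        for p in ([random.uniform(-3, 3) for _ in range(3)] for _ in range(100_000))
        if error(p) < 5.0  # selection threshold
    ]
    print(len(survivors))  # on the order of a hundred accidental "doublers" out of 100,000 tries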
> Wolfram’s “bigger brains” piece
You mean the one linked at the top of the page?
Why is this structured like a school book report, written for a teacher who doesn’t have the original piece right in front of them?
Noted for next time. The article itself is excellent. Sorry if my comment felt out of place for HN. I added extra context to get more discussion going, this topic really interests me.
Four words into your post and I'm confident it's ChatGPT slop. Am I wrong?
>It makes you wonder how much of the future will even be understandable to us, o
There isn't much of a future left. But of what is left to humans, it is in all probability not enough time to invent any true artificial intelligence. Nothing we talk about here and elsewhere on the internet is anything like intelligence, even if it does produce something novel and interesting.
I will give you an example. For the moment, assume you come up with some clever prompt for ChatGPT or another one of the LLMs, and that this prompt would have it "talk" about a novel concept for which English has no appropriate words. Imagine as well that the LLM has trained on many texts where humans spoke of novel concepts and invented words for those new concepts. Will the output of your LLM ever, even in a million years, have it coin a new word to talk about its concept? You, I have no doubt, would come up with a word if needed. Sure, most people's new words would be embarrassing one way or another if you asked them to do so on the spot. But everyone could do this. The dimwitted kid in school that you didn't like much, the one who sat in the corner and played with his own drool, he would even be able to do this, though it would be childish and onomatopoeic.
The LLMs are, at best, what science fiction used to refer to as an oracle. A device that could answer questions seemingly intelligently, without having agency or self-awareness or even the hint of consciousness. At best. The true principles of intelligence, of consciousness are so far beyond what an LLM is that it would, barring some accidental discovery, require many centuries. Many centuries, and far more humans than we have even now... we only have eight or so 1-in-a-billion geniuses. And we have as many right now as we're ever going to have. China's population shrinks to a third of its current by the year 2100.
I'm probably too optimistic as a default, but I think it might be okay. Agriculture used to require far more people than it does now due to automation, and it certainly seems like many industries will be partially automated with only incremental change to current technology. If fewer people are needed for social maintenance, then more will be able to focus on the sciences, so yes, we may have fewer people overall, but it's quite possible we'll have a lot more in science.
I don’t think AI needs to be conscious to be useful.
> The true principles of intelligence, of consciousness are so far beyond what an LLM is that it would, barring some accidental discovery, require many centuries.
I've been too harsh on myself for thinking it would take a decade to integrate imaging modalities into LLMs.
I mean, the integrated circuit, the equivalent of the evolution of multicellular life, was 1958. The microprocessor was 1971, and that would be what, animals as a kingdom? Computers the size of a building are now the size of a thumb drive. What level are modern complex computer systems like ChatGPT? The level of chickens? Dogs? Whatever it is, it is light years away from what it was 50 years ago. We may be reaching physical limits for the size of circuits, but algorithmic complexity and efficiency are moving fast and are nowhere near any physical limits.
We haven't needed many insane breakthroughs to get here. It has mostly been iterating and improving, which opens up new things to develop, iterate on, and improve. IBM's Watson was a supercomputer in 2011 that could understand natural language; my laptop runs LLMs that can do that now. The pace of improvement is incredibly fast, and I would be very hesitant to say with confidence that human-level “intelligence” is definitely centuries away. 1804 was over two centuries ago, and that was the year the locomotive was invented.
If you make the brain larger, some things will get worse, rather than better. The cost of communication will be higher, it will get harder to dissipate heat, and so on.
It's quite possible evolution already pushed our brain size to the limit of what actually produces a benefit, at least with the current design of our brains.
The more obvious improvement is just to use our brains more. It costs energy to think, and for most of human existence food was limited, so evolution naturally created a brain that tries to limit energy use, rather than running at maximum as much as possible.
After reading this article, I couldn't help but wonder what would happen if our brains were bigger? Sure, it's tempting to imagine being able to process more information, or better understand the mysteries of the universe. But I also began to wonder, would we really be happier and more fulfilled?
Would a bigger brain make us better problem solvers, or would it just make us more lonely and less able to connect with others? Would allowing us to understand everything also make us less able to truly experience the world as we do now?
Maintaining social relationships is a very intellectually demanding task. Animals that live in social groups generally have larger brains than their more individualistic cousin species; this is called the social brain hypothesis. Hyper-intelligent people might tend to be less able to maintain relationships because they are too far outside the norm, not because they are smarter per se. I would say that people with intellects much lower than the norm also have that problem.
Or it could be that, with our current hardware, brains that are hyper intelligent are in some way cannibalizing brain power that is “normally” used for processing social dynamics. In that sense, if we increased the processing power, people could have sufficient equipment to run both.
I expect that, to the extent* to which there’s a correlation between loneliness and intelligence, it is mostly because very smart people are unusual. So, if everyone was smarter, they’d just be normal and happy.
*(I could also be convinced that this is mostly just an untrue stereotype)
Maybe a better brain is a better problem solver and ALSO happier than all of us.
I assume that we are neurons in a bigger brain that already exists!
I started down this belief system with https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach
Those of you who have used psychedelics might have personal experience with this question.
There can be moments of lucidity during a psychedelic session where it's easy to think of discrete collections as systems, and to imagine those systems behaving with specific coherent strategies. Unfortunately, an hour or two later, the feeling disappears. But it leaves a memory of briefly understanding something that can't be understood. It's frustrating, yet profound. I assume this is where feelings of oneness with the universe, etc., come from.
I feel it is not just about being bigger, but also about how much of what the current brain supports is no longer that useful, at least for intelligence as such. The evolutionary path of our brain has an absolutely major focus on keeping itself alive and, through that, keeping the organism alive. Humans will often take sub-optimal decisions because the optimal decision carries even a small probability of death for themselves or those genetically related to them. In some sense it is like a manned fighter jet versus a drone: in one case a large amount of effort and detail is expended on keeping the operating envelope consistent with keeping the human alive, whereas the drone can expand the envelope far more because the human is no longer a concern. If we could jettison some of the evolutionary baggage of the brain, it could potentially do so much more, even within the same space.
Yay for the intelligent robot armies with no fear of death!
Nonsense.
Neurology has shown numerous times that it's not about the size of the toolbox but the diversity of tools within. The article starts with the observation that cats can't talk. Humans can talk because we have a unique brain component dedicated to auditory speech parsing. Cats do, however, appear to attend to the other aspects of human communication almost as precisely as, and sometimes much more precisely than, many humans.
The reason size does not matter is that the cerebellum, only about 10% of the brain's volume, contains roughly 80% of its neurons. And that isn't the academic or creative part of the brain; instead it processes things like motor function, sensory processing (not vision), and more.
The second most intelligent class of animals are corvids and their brains are super tiny. If you want to be smarter then increase your processing diversity, not capacity.
>If you want to be smarter then increase your processing diversity, not capacity.
And efficiency. Some of this is achieved by having dedicated and optimal circuits for a particular type of signal processing.
The original GPU vs CPU.
Well, start destroying that existing substrate and it certainly has effects. Maybe in the near future (as there is work here already), we will find a way to supplement an existing human brain with new neurons in a targeted way for functional improvements.
right, people claiming that measuring brain weight has something to do with some dubious intelligence metric is phrenology
IIRC, being obsessed with brain size & weight overlaps with phrenology, but it is a distinct (and generally simpler) set of beliefs.
But both have very long and dubious reputations. And the article's failure to mention or disclaim either is (IMO) a rather serious fault.
https://en.wikipedia.org/wiki/Phrenology
He doesn't literally mean physically larger brains, he means brains with more connections.
> The second most intelligent class of animals are corvids ...
Not the psittacines? Admittedly, I've heard less about tool use by parrots than by corvids. And "more verbal" is not the same as "more intelligent".
If you want to go down the rabbit hole of higher-order intelligences, look up egregores. I know John Vervaeke and Jordan Hall have done some work trying to describe them, as have other people studying cognition. But when you get into that, you start finding religions discussing how to interact with them (after all, aren't such intelligences what people used to call gods?)
What if our brains lived in a higher dimensional space, and there was more room for neuron interconnectivity and heat dissipation?
That's a sci-fi rabbit hole I'd gladly fall into
Iain M. Banks's "The Culture" does this; it's how the Minds work, discussed in Look to Windward and Consider Phlebas, IIRC.
I imagine that even if we did, this article would still be way too long.
I found some of it interesting, but there's just too many words in there and not much structure nor substance.
That’s pretty typical for Wolfram.
Our brains aren't that impressive in the animal kingdom; what makes humans (dangerously) so dominant, apart from brain size, is their hands. After humans started building, their brain power adapted to new targets.
Completely ignores any sort of scaling in emotional intelligence. Bro just wants a DX4.
I mean, this is a huge potential issue with high-intelligence AI systems (HIAI, maybe we'll call them one day). We already see AI develop biases and go into odd spiritual states when talking with other AI. They inherit these behaviors from human data.
I've seen people say "oh this will just go away when they get smart enough", but I have to say I'm a doubter.
Makes me wonder if "bigger brains" would feel more like software-defined minds than just smarter versions of ourselves
> What If We Had Bigger Brains?
Nothing. Elephants have bigger brains, but they didn't create civilization.
Seems like the 'intellectual paradox', where someone who thinks hard about subjects concludes that all learning is done by thinking hard, by attending to a subject with the conscious mind.
Clearly not always the case. So many examples: we make judgements about a person within seconds of meeting them, with no conscious thoughts at all. We decide if we like a food, likewise.
I read code to learn it, just page through it, observing it, not thinking in words at all. Then I can begin to manipulate it, debug it. Not with words, or a conscious stream. Just familiarity.
My son plays a piece from sheet music, slowly and deliberately, phrase by phrase, until it sounds right. Then he plays through more quickly. Then he has it. Not sure conscious thoughts were ever part of the process. Certainly not words or logic.
So many examples are possible.
As brains get bigger, you get more compute, but you have to solve the "commute" problem. Messages have to be passed from one corner to the other, and fast. And there are so many input signals coming in (for us, likely from thirty trillion cells, or at least a significant fraction of them). Not all are worth transporting to other corners. Imagine a little tickle on your toe. Should that be passed on? Usually not, unless you are in an area with creepy-crawlies or some other such situation. So decisions have to be made. But who will make these decisions for us? (A fascinating, inevitably recursive question we'll come back to.)
This commute is pretty much ignored when making artificial brains, which can guzzle energy, but it matters critically for biological brains. It needs to be (metabolically) cheap, and fast. What we perceive as consciousness is very likely a consensus mechanism that helps 100 billion neurons collectively decide, at a very cheap biological price, which data is worth transporting to all corners so that it can become meaningful information. And it has to be recursive, because these very same 100 billion neurons are collectively making up meaning along the way. This face matters to me, that one does not, and so on; replace face with anything and everything we encounter. So to solve the commute problem that results from a vast amount of compute, we have a consensus mechanism that gives rise to a collective. That collective is the I, and the consensus mechanism is consciousness.
We explore this (but not in these words) in our book Journey of the Mind.
You'll find that no other consciousness model talks about the "commute" problem, because those are simply not biologically constrained models. They just assume that the information processing and message passing will be done in some black box. Trying to get all of this done with the same type of compute (cortical columns, for instance) is a devilishly hard challenge (please see the last link for more about this). Sweep that under the rug and consciousness becomes this miraculous, seemingly unnecessary thing that somehow sits on top of information processing. So you then have theorists worrying about philosophical zombies and whatnot, because the hard engineering problem of commute was entirely ignored.
https://www.goodreads.com/en/book/show/60500189-journey-of-t...
https://saigaddam.medium.com/consciousness-is-a-consensus-me...
https://saigaddam.medium.com/conscious-is-simple-and-ai-can-...
https://saigaddam.medium.com/the-greatest-neuroscientist-you...
we would be elephants or whales? (sorry couldn't resist)
We already know that Cetaceans are the superior intellectual life form of the planet.
Toss a naked man in the sea and see how he fares.
About as well as a beached whale, I'd expect.
I've often felt like this is one of the most serious issues faced by modern society: very small brains...
An interesting thought I had while reading the section on how larger brains allow more complicated language to represent context:
Why are we crushing down the latent space of an LLM to a text representation when doing LLM-to-LLM communication? What if you skipped decoding the vectors to text and just fed the vectors directly into the next agent? They're so much richer in information.
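Nothing in current tooling forbids a crude version of this. Below is a minimal sketch, assuming the Hugging Face transformers API with GPT-2 standing in for both agents (my assumptions, not an established recipe); a real system would almost certainly need a learned adapter between the two latent spaces rather than the raw hand-off shown here.

    # Vector-to-vector hand-off between two models, skipping the decode-to-text
    # step. GPT-2 is only a stand-in; model A's hidden states are not in model
    # B's input-embedding space, so in practice a trained projection is needed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    agent_a = AutoModelForCausalLM.from_pretrained("gpt2")
    agent_b = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tok("Summarize the plan:", return_tensors="pt")

    with torch.no_grad():
        # Agent A's final-layer hidden states: one vector per token, far richer
        # than the single token id that greedy decoding would collapse each into.
        hidden = agent_a(**inputs, output_hidden_states=True).hidden_states[-1]

        # Hand the raw vectors to agent B as its input embeddings: no text,
        # no re-tokenization, nothing lost to the vocabulary bottleneck.
        out_b = agent_b(inputs_embeds=hidden, attention_mask=inputs["attention_mask"])

    print(out_b.logits.shape)  # agent B consumed agent A's latent states directly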
Sperm whales have the largest brains on Earth, but they have not invented fire, the wheel, the internal combustion engine, or nuclear weapons... Oh wait. Hmmm.
Nor did the human race for most of the few million years we've been on this planet. It's only in the last few thousand years that wheels became a thing. The capacity to invent and reason about these things was there long before they happened.
And also _whales don't have hands_
"...a single thread of experience through time."
Do human brains in general always work like this at the level of consciousness? Dream states of consciousness exist, but they also seem single-threaded, even if the state jumps around in ways more like context switching in an operating system than the steady awareness of the waking conscious mind. Then there are special cases, schizophrenia and dissociative identity disorder, in which multiple threads of experience apparently do exist in one physical brain, with all the problems this situation creates for the person in question.
Now, could one create a system of multiple independent single-threaded conscious AI minds, each trained in a specific scientific or mathematical discipline, but communicating constantly with each other and passing ideas back and forth, to mimic the kind of scientific discovery that interdisciplinary academic and research institutions are known for? Seems plausible, but possibly a bit frightening - who knows what they'd come up with? Singularity incoming?
You're touching on why I don't think AI in the future will look like human intelligence. Or, a better way to put it: human intelligence looks like human intelligence because of the limitations of the human body.
For example, we currently spend a lot of time making AI output human writing, produce human sounds, hear the world as we hear it, see the world as we see it, and, hell, even look like us. And this is great when working with and around humans. Maybe it will help it align with us, or maybe the opposite.
But if you imagined a large factory that requested input on one side and dumped out products on the other, with no humans inside, why would it need human hearing and speech at all? You'd expect everything to communicate over some kind of wireless protocol with a possible LiFi backup, none of the loud yelling people have to do. Most of the machines working there would have their intelligence minimized to lower power and cooling requirements. Depending on the machine-vision requirements, it could be very dark inside, again reducing power usage. There would likely be a layer of management AI and guardian AI to make sure things weren't going astray and kept running smoothly. And all the data from that would run back to a cooled, well-powered data center housing what is effectively a hive mind built from all the different sensors it's tracking.
Interesting idea. Notably bats are very good at echo-location so I wonder if your factory hive mind might decide this audio system is optimal for managing the factory floor.
However, what if these AI minds were 'just an average mind' as Turing hypothesized (some snarky comment about IBM IIRC). A bunch of average human minds implemented in silico isn't genius-level AGI but still kind of plausible.
This is why, once a tool like Neuralink reaches a certain threshold of capability and enough people use it, you will be forced to chip yourself and your kids; otherwise they will be akin to the chimps in the zoo. Enhanced human minds will work at a level unreachable by natural minds, and those natural minds will be left behind. It's a terrifying view of where we are going and where most of humanity will likely be forced to go. Then, on top of that, there will be an arms race to create and upgrade faster and more capable implants.
"can't run code inside our brains" lad... speak for yourself.
Imagine how hungry we'd be.
By age 17, he published 10 peer-reviewed manuscripts in quantum field theory and particle physics. At 19 he got his PhD in particle physics from Caltech. Age 21, youngest ever recipient of what they used to call the MacArthur "genius" grant. Then struck it rich founding a mathematical software company. So, to be fair, he's done ok for himself.
https://en.wikipedia.org/wiki/Nobel_disease
That may explain why he has a huge ego, but doesn’t excuse it.
Stephen lad, its a bit obvious, don't you think?
> At 100 billion neurons, we know, for example, that compositional language of the kind we humans use is possible. At the 100 million or so neurons of a cat, it doesn’t seem to be.
The implication here, that ability generally scales with neuron count, is the latest in a long line of extremely questionable lines of thought from Mr. Wolfram. I understand having a blog, but why not separate it from your work life with a pseudonym?
> In a rough first approximation, we can imagine that there’s a direct correspondence between concepts and words in our language.
How can anyone take someone who thinks this way seriously? Can any of us imagine a human brain that directly relates words to concepts, as if "run" had one direct conceptual meaning? He clearly prefers the sound of his own voice to how it is received by others. That, or he only talks with people who never bothered to read the last 200 years of European philosophy. Which would make sense given his seeming adoration of LLMs.
There's a very real chance that more neurons would hurt our health. Perhaps our brain is structured in a way to maximize their use and minimize their cost. It's certainly difficult to justify brain size as a super useful thing (outside of my big-brained human existence) looking at the evolutionary record.
Three other posters submitted the same link. Why did this one blow up?
I think Wolfram may be ignoring what we already know about brains with more neurons than most. We call them "autistic" and "ADHD".
More is not always better, indeed it rarely is in my experience.
I suspect this is not relevant given the broader picture of the article: In general, species with the largest brains/body-mass ratios are the most intelligent. It's a reasonable assumption that this principle holds for brain/mass ratios higher than ours.
Who told you people with autism or ADHD have "more neurons" than average? That's not correct.
There are larger minds than ours, and they've been well-attested for millennia as celestial entities, i.e. spirits. Approaching it purely within the realm of the kinds of minds that we can observe empirically is self-limiting.
Well attested to is different from convincingly attested to. You may have noticed that people will say just about anything for lots of reasons other than averring a literal truth, and this was particularly true for those many millennia during which human beings didn't even know why the sky was blue and almost literally could not have conceived of the terms in which we can presently formulate an explanation.
The "people in times before the enlightenment just attributed everything to spirits because they didn't understand things" argument is tired and boring. Just because you're not convinced doesn't mean that it's not true, modern man.
That isn't my assertion. I actually think people in the past probably did not seriously subscribe to so-called supernatural explanations most of the time in their daily lives. What I am saying is that it's quite reasonable to take a bunch of incoherent, often contradictory and vague accounts of spiritual experiences as not having much epistemological weight.
Then we disagree about the basic assumption. I do think that people throughout history have attributed many different things to the influence of spiritual entities; I'm just saying it was not a catch-all for unexplained circumstances. The accounts may seem contradictory and vague to someone who denies the existence of spirits, but if you have the proper understanding of spirits as vast cosmic entities with minds far beyond ours, not bound to the same physical and temporal rules as us, then people's experiences make a lot of sense.
Seems implausible to me.
Okay, you have proposed a theory about a phenomenon that has some causal influence on the world -- ie, that there are spirits which can communicate with people and presumably alter their behavior in some way.
How do you propose to experimentally verify and measure such spirits? How can we distinguish between a world in which they exist as you imagine them and a world in which they don't? How can we distinguish between a world in which they exist as you imagine them and a world in which a completely different set of spirits, following different rules, exists instead? What about Djinn? Santa Claus? Demons? Fairies?
We can experimentally verify spirits by communicating with them. Many such cases.
Now, do you mean measure them using our physical devices that we currently have? No, we can't do that. They are "minds beyond ours" as OP suggests, just not in the way that OP assumes.
Djinn: Demons. Santa Claus: Saint (i.e. soul of a righteous human). Demons: Demons. Fairies (real, not fairy-tale): Demons. Most spirits that you're going to run across as presenting themselves involuntarily to people are demons because demons are the ones who cause mischief. Angels don't draw attention to themselves.
I don't know, it seems reasonable to conclude that the experiments you describe point strongly to an endogenous rather than exogenous source of these experiences, especially since people who have these kinds of experiences do not all agree on what they are or mean and the experiences are significantly influenced by cultural norms.
An electron is a bit like a demon in the sense that you can't see one directly and we only have indirect evidence that it exists. But anyone from any culture can do an oil-drop experiment and get values that aren't culture-bound, at least in the long run. People have been having mystical experiences forever, and the world religions still have no agreement about what they mean.