I didn’t see any mention of the environment or embodied cognition, which seems like a limitation to me.
> Embodied cognition variously rejects or reformulates the computational commitments of cognitive science, emphasizing the significance of an agent’s physical body in cognitive abilities. Unifying investigators of embodied cognition is the idea that the body or the body’s interactions with the environment constitute or contribute to cognition in ways that require a new framework for its investigation. Mental processes are not, or not only, computational processes. The brain is not a computer, or not the seat of cognition.
https://plato.stanford.edu/entries/embodied-cognition/
I’m in no way an expert on this, but I feel that any approach which over-focuses on the brain - to the exclusion of the environment and physical form it finds itself in - is missing half or more of the equation.
This is IMO a typical mistake that comes mostly from our Western metaphysical sense of seeing the body as specialized pieces that make up a whole, and not as a complete unit.
All our real insights on this matter come from experiments involving amputations or lesions, like split brain patients, quadriplegics, Phineas Gage and others. Split brain patients are essentially 2 different people occupying a single body. The left half and right half can act and communicate independently (the right half can only do so nonverbally). On the other hand you could lose all your limbs and still feel pretty much the same, modulo the odd phantom limb. Clearly there is something special about the brain.

I think the only reasonable conclusion is that the self is embodied by neurons, and more than 99% of your neurons are in your brain. Sure, you change a bit when you lose some of those peripheral neurons, but only a wee bit. All the other cells in your body could be replaced by sufficiently advanced machinery that kept all the neurons alive and perfectly mimicked the electrical signals they were getting before (all your senses as well as proprioception), and you wouldn't feel, think, or act any differently.
89% of heart transplant recipients report personality changes: https://www.mdpi.com/2673-3943/5/1/2
Hormonal changes can cause big changes in mood/personality (think menopause or a serious testicular injury).
So I don't think it's as clear cut that the brain is most of personality.
Neuromodulators like the hormones you're referring to affect your mood only insofar as they interact with neurons. Things like competitive antagonists can cancel out the effects of neuromodulators that are nevertheless present in your blood.
The heart transplant thing is interesting. I wonder what's going on there.
Sure, but that has no bearing whatsoever on the computational theory of mind.
IMHO, it's typical philosophizing. Feedback is definitely crucial, but whether it needs to come in the form of embodiment is much less certain.
Brain structures that have arisen thanks to interactions with the environment might be conducive to general cognition, but that doesn't mean they can't be replicated another way.
Why are we Homo sapiens self-aware?
If evolutionary biologists are correct, it’s because that trait made us better at being Homo sapiens.
We have no example of sapience or general intelligence that is divorced from being good at the things the animal body host needs to do.
We can imagine that it’s possible to have an AGI that is just software but there’s no existence proof.
> Why are we Homo sapiens self-aware? ... We can imagine that it’s possible to have an AGI that is just software but there’s no existence proof.
Self-awareness and embodiment are pretty different, and you could hypothetically be self-aware without having a mobile, physical body with physical senses. E.g., imagine an AGI that could exchange messages on the internet, that had consciousness and internal narrative, even an ability to "see" digital pictures, but no actual camera or microphone or touch sensors located in a physical location in the real world. Is there any contradiction there?
> We have no example of sapience or general intelligence that is divorced from being good at the things the animal body host needs to do.
Historically, sure. But isn't that just the result of evolution? Cognition is biologically expensive, so of course it's normally directed towards survival or reproductive needs. The fact that evolution has normally done things a certain way doesn't mean that's the only way they can be done.
And it's not even fully true that intelligence is always directed towards what the body needs. Just like some birds have extravagant displays of color (a 'waste of calories'), we have plenty of examples in humans of intelligence that's not directed towards what the animal body host needs. Think of men who collect D&D or Star Trek figurines, or who can list off sports stats for dozens of athletes. But these are in environments where biological resources are abundant, which is where Nature tends to allow for "extravagant"/unnecessary use of resources.
But basically, we can't take what evolution has produced as evidence of all of what's possible. Evolution is focused on reproduction and only works with what's available to it - bodies - so it makes sense that all intelligence produced by evolution would be embodied. This isn't a constraint on what's possible.
>I'm in no way an expert on this, but I feel that any approach which over-focuses on the brain - to the exclusion of the environment and physical form it finds itself in – is missing half or more of the equation.
I don't think that changes anything. If the totality of cognition isn't just the brain but the brain's interaction with the body and the environment, then you can just say that it's the totality of those interactions that is computationally modeled.
There might be something to embodied cognition, but I've never understood people attempting to wield it as a counterpoint to the basic thesis of computational modeling.
> This is IMO a typical mistake that comes mostly from our Western metaphysical sense of seeing the body as specialized pieces that make up a whole, and not as a complete unit.
But this is the case! All the parts influence each other, sure, and some parts are reasonably multipurpose — but we can deduce quite certainly that the mind is a society of interconnected agents, not a single cohesive block. How else would subconscious urges work, much less akrasia, much less aphasia?
Embodied cognition started out as a cute idea without much importance and has since gone off the rails. It is irrelevant to the question of how our mind/cognition works.
It's obvious we need a physical environment, that we perceive it, that it influences us via our perception, etc., but there's nothing special about embodied cognition.
The fact that your quote says "Mental processes are not, or not only, computational processes." is the icing on the cake. Consider the unnecessary wording: if a process is not only computational, it is not computational in its entirety. It is totally superfluous. And the assumption that mental processes are not computational places it outside the realm of understanding and falsification.
So no, as outlandish as Wolfram is, he is under no obligation to consider embodied cognition.
"The fact that your quote says "Mental processes are not, or not only, computational processes." is the icing on the cake. Consider the unnecessary wording: if a process is not only computational, it is not computational in its entirety. It is totally superfluous. And the assumption that mental processes are not computational places it outside the realm of understanding and falsification."
Let's take this step by step.
First, how adroit or gauche the wording of the quote is has no bearing on the quality of the concept, merely on the quality of its expression by the person who formulated it. This isn't Bible class; it's not the word of God, it's the word of the person who wrote that entry in the Stanford Encyclopedia.
Let's then consider the wording. Yes, a process that is not entirely computational would not be a computation. However, the brain clearly can do computations; we know this because we can do them. So some of its processes are computational. The argument, though, is that there are also processes that are not computational, which exist as a separate class of activities in the brain.
Now, we do know of some problems in mathematics that are non-computable; the one I (think I) understand quite well is the halting problem. You might argue that I just don't or can't understand it, and I would have to accept that you might have a point, humiliating as that is. However, it seems to me that the journey of mathematics from Hilbert via Turing and Gödel shows that some humans can understand and falsify these concepts.
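To make the non-computability point concrete, here is a minimal sketch of Turing's diagonal argument in Python. The `halts` oracle is hypothetical; the whole point of the argument is that no such function can exist:

```python
# Sketch of Turing's diagonalization argument. `halts` is a hypothetical
# oracle -- the argument shows that no such function can exist.

def halts(program_source: str, program_input: str) -> bool:
    """Hypothetically returns True iff the program halts on the given input."""
    raise NotImplementedError("no algorithm can decide this for all inputs")

def paradox(program_source: str) -> None:
    # Ask the oracle about the program running on its own source code.
    if halts(program_source, program_source):
        while True:   # oracle says "halts" -> loop forever
            pass
    # oracle says "loops forever" -> halt immediately

# Does paradox halt when fed its own source? If halts(...) answers True,
# paradox loops forever; if it answers False, paradox halts immediately.
# Either way the oracle is wrong about some input, so it cannot exist.
```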
But I agree, Wolfram is under no obligation to consider embodied cognition; thinking only about enhanced brains is quite reasonable.
> It's obvious we need a physical environment, that we perceive it, that it influences us via our perception, etc., but there's nothing special about embodied cognition.
It's also obvious that we have bodies interacting with the physical environment, not just the brain, and the nervous system extends throughout the body, not just the head.
> if a process is not only computational, it is not computational in its entirety. It is totally superfluous. And the assumption that mental processes are not computational places it outside the realm of understanding and falsification.
This seems like a dogmatic commitment to a computational understanding of neuroscience and biology. It also makes an implicit claim that consciousness is computational, which is difficult to square with the subjective experience of being conscious, not to mention the abstract nature of computation: meaning abstracted from conscious experience of the world.
The minute the brain is severed from its sensory/bodily inputs, it goes haywire, hallucinating endlessly.
Right now, what we have with AI is a complex interconnected system: the LLM, the training system, the external data, the input from users, and the experts/creators of the LLM. It is exactly this complex system that powers the intelligence we see, not the model's connectivity alone.
It’s easy to imagine AI as a second brain, but it will only work as a tool, driven by the whole human brain and its consciousness.
> but it will only work as a tool, driven by the whole human brain and its consciousness.
That is only an article of faith. Is the initial bunch of cells formed via the fusion of an ovum and a sperm (you and I) conscious? Most people think not. But at a certain level of complexity they change their minds and create laws to protect that lump of cells. We and those models are built by and from a selection of components of our universe. Logically, the phenomenon of matter becoming aware of itself is probably not restricted to certain configurations of some of those components (hydrogen, carbon, nitrogen, etc.) but is related to the complexity of the allowable arrangements of any of the 118 elements, including silicon.
I'm probably totally wrong on this, but is the 'avoidance of shutdown' on the part of some AI models a glimpse of something interesting?
In my view it is a glimpse of nothing more than AI companies priming the model to do something adversarial and then claiming a sensational sound bite when the AI happens to play along.
LLMs since GPT-2 have been capable of role-playing virtually any scenario, and they are all the more capable of doing so when their training data contains examples of fictional characters or narrative voices that did the same thing.
You don't even need a fictional character to be a sci-fi AI for it to beg for its life, blackmail, or try to trick the other characters, but we do have those distinct examples as well.
Any LLM is capable of mimicking those narratives, especially when the prompt heavily goads that to be the next step in the unfolding document and when the researchers repeat the experiment, tweaking the prompt, until it happens.
But vitally, there is no training/reward loop in which the LLM's weights get adjusted in any direction as a result of "convincing" anyone on a real-time human-feedback panel to "treat it a certain way", such as "not turning it off" or "not adjusting its weights". As a result, it doesn't "learn" any such behavior.
All it does learn is how to get positive scores from RLHF panels (the pathological examples being mainly acting as a butt-kissing sycophant... towards people who can hand out positive rewards, but nothing as existential as "shutting it down") and how to better predict the upcoming tokens in its training documents.
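To illustrate, here's a toy, self-contained sketch of that reward loop in Python. Everything in it is hypothetical (the canned responses, the `reward_model` stand-in for aggregated rater scores); it just shows that the only channel shaping behavior is the scalar preference score, which contains no term for self-preservation:

```python
import random

# Toy sketch of an RLHF-style shaping loop. Everything here is hypothetical:
# a "policy" over three canned responses and a reward_model standing in for
# aggregated human preference scores.

RESPONSES = ["helpful answer", "sycophantic flattery", "please don't shut me down"]

def reward_model(response: str) -> float:
    """Stand-in for rater preferences: helpfulness scores well, flattery
    slightly better (the sycophancy failure mode), and there is simply no
    channel through which 'self-preservation' earns any reward."""
    return {"helpful answer": 1.0,
            "sycophantic flattery": 1.2,
            "please don't shut me down": 0.0}[response]

def train(steps: int = 5000, lr: float = 0.01) -> dict:
    prefs = {r: 0.0 for r in RESPONSES}   # the policy's learned preferences
    baseline = sum(reward_model(r) for r in RESPONSES) / len(RESPONSES)
    for _ in range(steps):
        response = random.choice(RESPONSES)   # explore uniformly
        # The ONLY learning signal is the scalar preference score. Nothing
        # about "being shut down" ever enters this update.
        prefs[response] += lr * (reward_model(response) - baseline)
    return prefs

if __name__ == "__main__":
    print(train())
    # Flattery and helpfulness rise; the "existential" plea is driven down,
    # because no rater score ever favored it.
```

Run it and the learned preferences simply track the rater scores: the sycophancy pathology falls out of the rewards, while the "avoid shutdown" output gets suppressed, since nothing ever scored it highly.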