Sentience as an emergent property of sufficiently complex brains is the exact opposite of "supernatural".

Complex learning behavior exists far below the level of the neuron. Chemical chains inside cells 'learn' in response to stimuli. Replicating systems that run on that chemistry is going to be hard, and we haven't come close to doing so. Even the achievement of recording the neural mapping of a dead rat captures the map, but not the traffic. More likely we'll develop machine-brain interfaces before machine self-awareness/sentience.

But that is just my opinion.

I think this comes down to whether the chemistry is providing some kind of deep value or is just being used by evolution to produce a version of generic stochastic behavior that could be trivially reproduced on silicon. My intuition is the latter: it would be a surprising coincidence if some complicated electro-chemical reaction behavior provided an essential building block for human intelligence that would otherwise be impossible.

But, from a best-of-all-possible-worlds perspective, surprising coincidences that are prerequisites for anyone being around to observe them and label them as surprising aren't crazy. At least no crazier than the fact that slightly adjusted physical constants would prevent the universe from existing.

> My intuition is the latter: it would be a surprising coincidence if some complicated electro-chemical reaction behavior provided an essential building block for human intelligence that would otherwise be impossible.

Well, I wouldn't say impossible: just that BMIs (brain-machine interfaces) will probably come first, then wetware/bio-hardware sentience, before silicon sentience happens.

My point is that the mechanisms for sentience/consciousness/experience are not well understood. I would suspect the electro-chemical reactions inside every cell are critical to replicating those cells' functions.

You would never try to replicate a car without ever looking under the hood! You might make something that looks like a car and seems to act like a car, but has a drastically simpler engine (hamsters on wheels), plus design choices that prop up that bad architecture (like making the car lighter) with unforeseen consequences (the car flips in a light breeze). The metaphor transfers nicely to machine intelligence, I think.

> emergent

> sufficiently complex

These can be problem words, in the same way that "quantum" and "energy" can be problem words, because they get used as magic words that don't articulate any mechanism. Lots of complex things aren't sentient (e.g. our immune system, the internet), and "emergent" things still demand meaningful explanations of their mechanisms, and of what those mechanisms are equivalent to at different levels (as with superconductivity).

Whether or not AIs being networked together achieves sentience is going to hinge on all kinds of specific functional details that are being entirely skipped over. That's not a generalized rejection of the notion of sentience, but of this particular characterization as undercooked.

You are really underestimating the complexity of the human brain. It is vastly more complex than the human immune system and the internet. 1 cubic millimeter was recently completely mapped and contains 57,000 cells and 150 million synapses. That is about 1 millionth of the total volume of the brain.

The immune system has 1.8 trillion cells, which puts it between total brain cells (57 billion) and total synapses (150 trillion), and it contains its own complex processes and interactions.
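For what it's worth, here's the back-of-the-envelope arithmetic behind those figures (a quick Python sketch, taking the parent's ~1 mm³ sample as representative of the whole brain and ~1,000,000 mm³ as the brain's volume, both of which are assumptions from the comments above):

    # Scale the 1 mm^3 sample linearly to the whole brain
    cells_per_mm3 = 57_000
    synapses_per_mm3 = 150_000_000
    brain_volume_mm3 = 1_000_000

    total_brain_cells = cells_per_mm3 * brain_volume_mm3  # 5.7e10, ~57 billion
    total_synapses = synapses_per_mm3 * brain_volume_mm3  # 1.5e14, ~150 trillion
    immune_cells = 1.8e12                                 # ~1.8 trillion

    print(total_brain_cells < immune_cells < total_synapses)  # True

So by raw counts alone, the immune system really does sit between the two brain numbers.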

I’m not immediately convinced the brain is more complicated, based on raw numbers.

I don't believe anything in my statement amounted to a denial of the stuff you mentioned in your comment.

Yeah, but there's absolutely no proof that's how it happens.

“Supernatural” likely isn't the right word, but the belief that it will happen is not based on anything rational, so it's the same mechanism that makes people believe in supernatural phenomena.

There's no reason to expect self-awareness to emerge from stacking enough Lego blocks together, and it's no different if you have GPT-based neural nets instead of Lego blocks.

In nature, self-awareness confers a strong evolutionary advantage (as it increases self-preservation), and it has evolved independently multiple times in different species (we have seen it manifest in some species of fish, for instance, in addition to mammals and birds). Backpropagation-based training of a next-token predictor doesn't create the same kind of selective pressure toward self-awareness, so unless researchers explicitly try to make it happen, there's no reason to believe it will emerge spontaneously.
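To make concrete what "backpropagation-based training of a next-token predictor" means here, a minimal sketch (PyTorch, with toy sizes and a single linear layer standing in for a real transformer stack; everything here is illustrative, not any actual model): the only training pressure is toward better next-token guesses, and nothing in the loss rewards self-modeling.

    import torch
    import torch.nn as nn

    vocab_size, embed_dim, seq_len = 1000, 64, 32  # toy values
    model = nn.Sequential(
        nn.Embedding(vocab_size, embed_dim),
        nn.Linear(embed_dim, vocab_size),  # stand-in for a transformer stack
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    tokens = torch.randint(0, vocab_size, (8, seq_len + 1))  # fake corpus batch
    inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict token t+1 from t

    logits = model(inputs)  # (batch, seq, vocab)
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    loss.backward()   # backpropagation: gradient of next-token error only
    optimizer.step()  # the sole selection pressure is prediction accuracy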

What do you even mean by self-awareness? Presumably you don’t mean fish contemplate their existence in the manner of Descartes. But almost all motile animals, and some non-animals, will move away from a noxious stimulus.

The definition is indeed a tricky question, but there's a clear difference between the reflex of protecting oneself from danger or pain and higher-level behaviors that show the subject realizes its own existence (the mirror test is the most famous instance of such an effect, but it's far from the only one, and it doesn't only apply to the sense of sight).