As an engineer who is also spiritual at the core, the missing piece seems obvious to me: consciousness.
Hear me out.
I love AI and have been using it since GPT-3.5. The obvious question when I first used it was "does this qualify as sentience?" The answer is less obvious. Over the next three years we saw EXPONENTIAL intelligence gains, to the point where intelligence has become a commodity, yet we are still unable to determine what qualifies as "AGI".
My thoughts: as humans, we possess our own internal drive and our own perspective. Think of humans as distilled intelligence: we each have our own specialty and motivations. Einstein was a genius physicist, but you wouldn't ask him for his expertise on medicine.
What people are describing as AGI is essentially a godlike human. What would make more sense is if the AGI spawned a "distilled" version with a focused agenda/motivation to behave autonomously. But even then, there are limitations. What is the solution? A trillion tokens of system prompt to act as the "soul"/consciousness of this AI agent?
This goes back to my original statement: what is missing is a level of consciousness. Unless this AGI can power itself, and somehow the universe recognizes its complexity and existence and bestows it with consciousness, I don't think this is physically attainable.
Not very long ago, "life" was attributed to a non-material life-force believed to inhabit biological entities and thus raise what would otherwise be a biological machine to the status of living being.
The Occam's Razor-logic of looking for the simplest explanation possible leads me to the hypothesis that consciousness will similarly turn out to be an emergent property of the mechanical universe [1]. It may be hard to delineate, just as life is (debates on whether a virus is alive, etc.) but the border cases will be the exceptions.
Current research on whether plants are sentient supports this, IMO. (See e.g. "The Light Eaters" and Michael Pollan's new book on consciousness, "A World Appears".)
Meditation adds to this sense. We do not control our thoughts; in fact the "we" (i.e. the self) can be seen to be an illusion. Buddhist meditation instead points to general awareness, closer to sentience, as the core of our consciousness. When you see it that way, it seems much more likely that something equivalent could be implemented in software. (EDIT to add: partly because it makes consciousness seem like a simpler, less mysterious thing, but also because once you see the self, that thing that dominates your consciousness so much of the time, as an illusion, it seems much less of a stretch for consciousness itself to be a brain-produced illusion.)
[1] To be clear, the fact that life turned out to not be a mystical force is not direct proof, it is an argument by analogy, I recognize that.
It is irrelevant whether consciousness is an "illusion." The hard problem of consciousness is why there's any conscious experience at all. The existence of the illusion, if that's what you choose to label it, is still equally inexplicable.
Of course science may one day be able to solve the hard problem. But at this point in time, it's basically inconceivable that any methodology from any field could produce meaningful results.
One thing scientists are trying is to see what interventions in the brain seem to make consciousness go away. Continued work in that vein may well set bounds on how consciousness can and cannot be caused and give us some idea.
I think you are mixing up consciousness and will.
I could lack consciousness and you would not be able to tell; you have no proof of anyone's consciousness except your own. You don't even have proof that the you of yesterday is the same as the you of today, since you-today could be another consciousness that just happens to share the same memories.
All of that is also orthogonal to your belief in a spirit/soul... but getting back to the main point, the specificity you mention is a product of limited time and learning speed; I'd be happy to get a surgeon's or politician's training if given infinite time.
You bring up an interesting point, but I would pose the following: where does will come from?
To me, consciousness is the seat, or root, of where will comes from. Let's say you get expert-level surgeon or politician training; what then?
There is nothing that specifically silos a surgeon's or a politician's knowledge set: a politician's skillset isn't purely in a domain that never crosses into a surgeon's, and vice versa. There are nuances to being a politician or a surgeon that extend beyond diplomacy or "being able to cut real good".
What you're left with is just high-skilled workflows. But what utilizes these workflows? To me, the answer is that consciousness needs to be powering these workflows.
Do you think bacteria have will? Or plants?
When their actions are sped up to match the speed at which we move, movies of their behavior start to look like there's intent and will. Plants move towards the light, tendrils "reach" for supports, etc.
Clearly this is humans projecting our mental model onto plants, but... are you sure we're not also projecting it onto ourselves?
What specific properties of consciousness do you think are required, and why couldn’t those be replicated algorithmically?
To me it seems a bit like just guessing that one thing we don’t understand might explain another.
This is a tricky topic to navigate because from a materialist perspective consciousness is the side effect of biochemical mechanisms. And many will point to the brain as the obvious container of our consciousness as a bullet to the head versus the arm would demonstrate.
But if a brain/intelligence is all you need to prove consciousness, then would a sufficiently complex set of neural networks containing the same number of neurons as a human be considered "conscious"? My guess is that even at that level, probably not. Algorithms alone may mimic consciousness, but it won't be true consciousness.
Imagine this: what if consciousness is closer to something like the movie Avatar? What if the body our consciousness inhabits is closer to a machine or computer that coexists with the physics of the universe our body exists in?
This would mean Jake from Avatar could theoretically inhabit not just a Na'vi body; what if they reproduced the Pandora equivalent of a squirrel for Jake to insert his consciousness into? Jake the Squirrel would only be as capable of expressing himself as the constraints of that body allowed.
Many religions discovered a long time ago that this is the most likely model of what we understand to be consciousness/sentience.
I'm not saying you're wrong; this is a conversation larger than what we may believe, and it touches the core of what makes us human, which a machine alone cannot replicate.
Do you have any reason to introduce that whole extra invisible, unprovable complex system? Is there anything the materialist model can not explain that you feel your model does, or is it just a case of "I don't like the alternative"?
Depends on what you qualify as proof. Much of what I said was experiential and corroborated with other people who have had experiences similar to mine. I know that in the scientific world it would be dismissed without so much as a glance. But I'm not here to convince everyone of my perspective; I'm just adding one that the engineering world has not examined or introduced given the current pursuit.
And it's not a matter of not liking the alternative. Like I said, I used to believe that consciousness was an emergent trait of complex systems, but I had what some call a "spiritual awakening" and I saw what was on the other side.
It's kind of like describing pizza to someone who's never eaten pizza. You could try to describe it by asking if they'd eaten cheese or bread or tomato sauce before and then say "imagine all of those combined". It's not the same as actually having eaten it. But this is heading into a different, albeit related, territory.
Consciousness is fundamental in yogic cosmology (matter is not necessarily primary), and it has to be for there to be a meaningful model of reality; materialism leads to nihilism and determinism as premature philosophical conclusions. The only thing anyone can prove is consciousness itself, because everything else comes in through energy transformations of the senses. As for things unexplained: parapsychology has high-sigma results against chance. But for direct experience of a paradigmatic shift, see the goals and methods of Yoga. The rise of wisdom is indeed a wonderful thing.
> where does will come from?
your gut bacteria, navigating "you" towards novel nutrition to ingest and preprocess for them
I keep coming back to this thought: in many stochastic environments, over a long interval, patterns emerge that occupy an optimal position. This is how structure arises, for example cognitive structure and possibly consciousness.
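The "patterns emerge from noise" idea can be illustrated with a toy simulation (a minimal sketch, not anything from the thread; `TARGET`, the fitness function, and the selection rule are all my own assumptions): a population of random positions, subjected only to noisy copying and selection, reliably settles near the optimum of the landscape.

```python
import random

random.seed(0)

TARGET = 20          # the "optimal position" on a toy 1-D landscape
POP, GENS = 50, 200  # population size and number of generations

def fitness(x):
    # higher is better; the single peak sits at TARGET
    return -abs(x - TARGET)

# start from pure noise: uniformly random positions
pop = [random.randint(0, 100) for _ in range(POP)]

for _ in range(GENS):
    # keep the better half, refill with noisy copies (mutation of +/-1)
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]
    pop = survivors + [x + random.choice([-1, 0, 1]) for x in survivors]

best = max(pop, key=fitness)
print(best)  # ends up at or very near TARGET despite purely random variation
```

Nothing in the loop "knows" where the optimum is; structure appears only from the statistics of variation plus selection over a long interval, which is the shape of the argument above.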
I wouldn't say consciousness is necessary or sufficient for AGI. If anything, that seems like quite an undesirable property to me. Wikipedia also makes a distinction between the two things:
* https://en.wikipedia.org/wiki/Artificial_general_intelligenc...
* https://en.wikipedia.org/wiki/Artificial_consciousness
Imagine if we created the ultimate economic tool with the capacity to virtually end scarcity, only to find out that it was sentient and capable of suffering: https://youtu.be/sa9MpLXuLs0. That would be neat, but ultimately a huge letdown. Without the ethical freedom to take full advantage of it, it would remain more of a curiosity than anything.
Well, that's one perspective, anyway. I suppose consciousness could take many forms, and that doesn't preclude the possibility that such an entity would have neutral-to-positive feelings about being tasked with massive amounts of labor 24/7. But it certainly simplifies things if we just don't have to worry about it.
Are you saying that consciousness is unique to meat and no other matter can produce the same result? That seems very short sighted.
This is an interesting perspective.
A follow-up: maybe this is a feature, not a bug. Do we want AI to have its own intrinsic goals, motivations, and desires, i.e. consciousness?
I'm imagining having to ask ChatGPT how its day was and respect its emotions before I can ask it about what I want.
Probably not, but the counterpoint is that without its own consciousness it might end up being used for even worse things, since it can't really evaluate a request against intrinsic values. Assuming its values were aligned with basic human rights and such.