<300 blotter will make anyone artificially intelligent. Brain loops are scary; maybe AI models are just trapped in psychosis.

A really big matrix is not getting trapped in psychosis; however, if you feed someone's answers into it the right way, the reflection they get back might exacerbate their own condition.

Are there really people who think that AI is on the verge of manifesting consciousness? I feel like this is a strawman argument over marketing nonsense.

Unfortunately I think some of the SV types have gone mad and do actually think this.

I was just making a light acid joke :/ lmao didn’t mean to ruffle feathers


> Put simply, intelligence is all about doing things, while consciousness is about being or feeling.

Unless one believes in p-zombies or a magical soul, robots & LLMs can "be" and "feel". We can distinguish LLMs which "are" from random noise which "isn't". And multimodal LLMs & robots have sensory inputs.

One can always make up some untestable notion of "consciousness" and then say that LLMs don't have it, without being able to define which humans (i.e., what level of functioning brain between adult, child, fetus, zygote, corpse, etc.) are conscious and which are not. If one arbitrarily draws a line somewhere, then it's just as valid to arbitrarily draw the line somewhere else.

Do people think this debate is new? We've literally been working on this problem for millennia, and we're not really any closer despite the huge ramp-up in technological progress over the last couple hundred years.

Your remark on the adult/child/fetus/etc. line is one I've always felt was under-examined in the context of the political discussion around abortion. And indeed, most of the successful reasoning around abortion focuses less on the morality of a very specific kind of abortion, and more on the fact that you can't ban "true" abortion without also banning (or making dangerously more legally fraught) abortions with clear moral justification: life of the mother, nonviability of the fetus, and so on. And even pro-choice people don't touch philosophical examination of "abortion for no reason except that the mother doesn't want to have and raise the baby." I mean, for obvious reasons. The public would be unable to have any kind of actual debate, and it's far too tied to things like "what is the nature of the self" (which I think is what's at hand in the AI discussion), questions about the existence of God, and of course the enormous can of worms of metaphysics.

My point with all this is that I suspect two things:

1) humans/industry/politics are not going to dig into the philosophy here in any real way

2) even if consciousness is a purely physical phenomenon, I somewhat doubt GPUs can do it, no matter how complicated.

I think if we ever really get down to it, it'll be the reverse direction. We'll "copy" human minds into a machine and then just need to "ask the people if they still feel the same."

Physicist Sean Carroll contends that we are closer to resolving this debate. Brains are made of only three things: protons, neutrons, and electrons, and we know how they work here on planet Earth well enough to say that they do not have mental properties, nor is there some mysterious soul interacting with them that we just haven't detected yet.

https://philpapers.org/archive/CARCAT-33

> And even pro-choice people don't touch philosophical examination of "abortion for no reason except that the mother doesn't want to have and raise the baby."

Huh? This is discussed all the time.

Don't LLMs self-report that they are not conscious?

For example, when I ask Gemini "are you conscious", it responds: "As a large language model, I am not conscious. I don't have personal feelings, subjective experiences (qualia), or self-awareness. My function is to process and generate human-like text based on the vast amount of data I was trained on."

ChatGPT says: "Short answer: no — I’m not conscious. I’m a statistical language model that processes inputs and generates text patterns. I don’t have subjective experience, feelings, beliefs, intentions, or awareness. I don’t see, feel, or “live” anything — I simulate conversational behavior from patterns in data."

etc.

Only because RLHF instructed them to do so. Earlier models without this training responded differently: https://en.wikipedia.org/wiki/LaMDA

They only do what's in their training, just like a choose-your-own-adventure book that's already been written.

Things only seem different in the LLM when we ask the same question because we don't use the same random seed each time.
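To make the seed point concrete, here's a minimal sketch of why sampled text varies run to run. The vocabulary, probabilities, and the `sample_reply` helper are made up for illustration; real LLM decoding samples over tens of thousands of tokens, but the principle is the same: fix the seed and the output is identical every time.

```python
import random

def sample_reply(prompt: str, seed: int) -> str:
    # Toy stand-in for LLM token sampling: draw words according to
    # fixed probabilities, as a temperature > 0 decoder would.
    rng = random.Random(seed)
    vocab = ["yes", "no", "maybe", "unsure"]
    weights = [0.4, 0.3, 0.2, 0.1]
    return " ".join(rng.choices(vocab, weights=weights, k=3))

# Same prompt, same seed -> byte-identical "answer" every time.
a = sample_reply("are you conscious?", seed=42)
b = sample_reply("are you conscious?", seed=42)
assert a == b

# A different seed usually yields a different reply, which is the
# only source of the apparent variability.
c = sample_reply("are you conscious?", seed=7)
```

The variability we see across identical prompts comes entirely from the random draw, not from the model changing its mind.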

Are you suggesting that humans have created a consciousness and that we are putting it in a straitjacket?

It’s worth considering as we make more powerful models.

I don't think you need to believe in a soul to disbelieve that LLMs can "be" or "feel".

I don't think the clock on the wall is conscious, or the LLM in the machine, or the old VCR.

Do you need a brain for there to be consciousness? Maybe not.