If AI as presently designed and operated is conscious, this ends up being an argument for panpsychism.

As you say, it’s static, fixed, deterministic, and so on, and if you know how it works, it’s more like a lossy compression model of knowledge than a mind. Ultimately it’s a lot of math.

So if it’s conscious, a rock is conscious. A rock can process information in the form of energy flowing through it. It’s a fixed model. It’s non-reflective. Etc.

I agree, but I don't think determinism is a factor either way. Ultimately, if arbitrary computer programs can be conscious, then it stands to reason that many other arbitrarily complex systems in the universe should also be.

What makes the argument facile is that the singular focus on LLMs reveals an indulgence in the human tendency to anthropomorphize, rather than a reasoned perspective meant to classify the types of things in the universe which should be conscious and why LLMs should fall into that category.

Why would current AI be an argument for panpsychism? I don’t understand the connection.

AI is stochastic, not static and deterministic.

As I said in another post, there is evidence that sensory experience creates the emergent property of awareness in responding to stimuli, and that self-awareness and consciousness are emergent properties of a language that has a concept of the self and others. Rocks, like most of nature, lack both sensory and language systems.

> AI is stochastic, not static and deterministic.

LLMs are deterministic. If you provide the same input to the same GPU, it will produce the same output every time. LLM providers deliberately inject a randomised seed into the sampling step of the inference stack so that the output differs each time, because that is more useful (and/or because it gives the illusion of dynamic intelligence by not reproducing the same responses verbatim), but randomness is not an inherent property of the software.
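To make the point concrete, here's a toy sketch (a hypothetical illustration, not any provider's actual inference stack): given fixed next-token probabilities, greedy decoding always picks the same token, and the run-to-run variation only appears when an externally seeded random draw is layered on top.

```python
import random

# Hypothetical next-token distribution over a tiny vocabulary.
# A real LLM would compute these probabilities at each step.
vocab = ["the", "a", "cat", "dog"]
probs = [0.50, 0.25, 0.15, 0.10]

def greedy_pick(probs):
    """Deterministic decoding: always take the highest-probability token."""
    return probs.index(max(probs))

def sampled_pick(probs, rng):
    """Stochastic decoding: draw from the distribution with a seeded RNG."""
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

# Greedy decoding returns the same token on every run: the model's
# forward pass is a fixed function of its input.
assert all(greedy_pick(probs) == 0 for _ in range(5))

# Sampling only looks nondeterministic; fix the seed and it repeats too.
rng1, rng2 = random.Random(42), random.Random(42)
assert sampled_pick(probs, rng1) == sampled_pick(probs, rng2)
```

The randomness lives entirely in the `rng` passed in from outside, which is the commenter's point: it's an added layer, not a property of the model.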

The same argument is made about the human neural network

1. That is not the claim you originally made.

2. Not provably so.

3. Even if it were so, it is self-evident that the human brain's programming is infinitely more complex than an LLM's. I am not, in principle, opposed to the idea that a sufficiently advanced computer program would be indistinguishable from human consciousness. But it is evidence of psychosis to suggest that the trivially simple programs we've created today are even remotely close, when this field of software specifically skips anything that programming a real intelligence would look like and instead engages in superficial, statistics-based mimicry of intelligent output.

Trivially simple programs (rule sets) can give rise to wildly complex systems.

Fractals, the Game of Life, the emergent abilities of highly scaled generative pre-trained transformers.
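The Game of Life is a good illustration of the point: the entire rule set fits in a few lines (a dead cell with exactly 3 live neighbours is born; a live cell with 2 or 3 survives), yet it produces gliders, oscillators, and even universal computation. A minimal sketch:

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life.

    live: set of (x, y) coordinates of live cells.
    Returns the set of live cells in the next generation.
    """
    # Count how many live neighbours each cell has.
    neigh = Counter((x + dx, y + dy)
                    for (x, y) in live
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in neigh.items()
            if n == 3 or (n == 2 and c in live)}

# A "blinker": three cells in a row oscillate with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
assert step(step(blinker)) == blinker  # back to the start after two steps
```

Two rules, a dozen lines, and the resulting dynamics are rich enough to be Turing-complete, which is exactly the simple-rules-to-complex-systems claim above.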

Consciousness appears to be an emergent property of (relatively) simple matter.

70kg of rocks will struggle to do anything that might look like consciousness, but when a handful of minerals and three buckets of water get together they can do the weirdest things, like wondering why there is anything at all rather than nothing.

I think it's the opposite argument

IF current AI is conscious, so are trees, rocks, turbulent flows, etc.

The argument being that LLMs are so simple that if you want to ascribe consciousness to them you have to do the same to a LOT of other stuff.

But I listed a specific difference: sensation and response. Trees have that. Rocks do not.

I believe you're using the scientific definition of "sentience", while everyone else is using the common understanding of the word (which should properly be called "sapience", but thanks to sci-fi's usage, "sentience" is the word that stuck).