In the philosophy of mind there is the concept of a “zombie”: a person who acts just like a real person would in all circumstances, yet has no internal experience of their senses. No “qualia”.
My little engineering brain has always recoiled at any use of these zombies in an argument. By my reckoning, the only way a machine could act human in all circumstances would be if it had a rich internal representation of the world, including sensory data, goals, opinions, fears, weaknesses…
LLMs are getting better at the Turing test, and as they improve I wonder how sound my intuition about zombies really is.
If you set the bar at the intelligence of an infant, they can already pass the test. For some reason, people always use adult human intelligence as the point of reference. Infants are intelligent too.
My take is that we are still making too many assumptions about "intelligence", conflating human intelligence with adult human intelligence, with non-human animal intelligence, and so on.