Something that is still not clear to me is what consciousness even is. It references the Chinese Room thought experiment:
> Suppose that artificial intelligence research has succeeded in programming a computer to behave as if it understands Chinese. The machine accepts Chinese characters as input, carries out each instruction of the program step by step, and then produces Chinese characters as output. The machine does this so perfectly that no one can tell that they are communicating with a machine and not a hidden Chinese speaker.
But what makes a human mind more "understanding"? Who says we're not simulating? Who says our mind even exists, in this space?
We're also a neural network; are we any more clever than a simulated one?
I think these questions were addressed by Searle[1]. His argument is not that AI is impossible; it's that the existence of surprisingly human-like behavior doesn't turn a non-cognitive system into a cognitive one. The strong AI hypothesis is that you can build a computational system such that, if its output matches what a human would produce by cognition, the system must be modeling cognition. The Chinese Room is an argument against that hypothesis.
The paper also provides some suggestions as to where cog sci needs to go to make AI possible.
From my viewpoint, if you really think that LLMs can model cognition, then you are also going to have to bring along a model of human cognition to compare them to, and you have to make the comparison "under the hood", as it were. The external behavior is not enough. In my formulation, if a space alien showed up with vastly different biology but appeared to be cognitively conscious, we may or may not want to believe in its cognitive ability, but it's just whataboutism to use this hypothetical alien to argue for the consciousness of an LLM.
1. https://www.cambridge.org/core/services/aop-cambridge-core/c...
> But what makes a human mind more "understanding"?
If you view understanding as knowledge plus the ability to apply it, everything falls into place. The Chinese room can't apply the knowledge that it has, even in theory.
>But what makes a human mind more "understanding"? Who says we're not simulating? Who says our mind even exists, in this space?
The people running the experiment.
And "yes" is the answer to what should be a rhetorical question.
I see you have been downvoted. I also see how you can reason your way into your points of view. Let me see if I can add some points to consider.
1. Current AI models that people use may be called neural networks, but they bear almost no real resemblance to biological ones.
2. A complete human brain is not a compilation of all textbooks and internet chats. It is a nuanced technology shaped by lived human experience and biased by the chemistry of the human body. Human thought is not always linguistic: in the same way that you do not tell your lungs to breathe every breath, you can find a work of art astoundingly beautiful or a landscape inspiring. Cleverness is one axis out of a billion by which to measure consciousness.
3. The human mind, or consciousness, stirs a lot of philosophical debate; probably best not touched with a 10 ft pole on the internet with strangers. I would encourage you to think or read about human experiences: the synchronicities and coincidences that happen between people, especially in times of uncertainty or blissful innovation. AI is seated backwards in the hierarchy of consciousness. That doesn't mean it's not useful, but it's like comparing a water purification plant to the planet's treatment of water, from rain to subsurface flows and atmospheric chemistry.
Peace
I say.