No, the Chinese Room is essentially the death knell for the Turing Test as a practical tool for evaluating whether an AI is actually intelligent.
The Chinese Room didn't show anything. It's a misleading intuition pump that for some reason is being brought up again and again.
Just think about it. Everything the person in the room does is mechanical symbol manipulation. Whether the person understands Chinese is causally disconnected from everything, including the functioning of the room. There is zero reason to consult their understanding when drawing conclusions about the room.
The second point: why is the argument about syntactic manipulation specifically? What would change if the person in the room were instead solving the QM equations of your brain's quantum state? Would it follow that a perfect model of your brain doesn't understand English?
The Chinese Room argument is silent on the question of the necessary and sufficient conditions for intelligence, thinking, and understanding. It’s an argument against philosophical functionalism in the theory of mind which states that it is sufficient to compare inputs and outputs of a system to infer intelligence.
The Chinese Room is also an argument that mere symbolic manipulation is insufficient to model a human mind.
As for the QM-equations, the many-body problem in QM is your enemy. You would need a computer far larger than the entire universe to simulate the quantum states of a single neuron, never mind a human brain.
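To make the "larger than the universe" point concrete, here's a back-of-the-envelope sketch. Describing the exact quantum state of n two-level systems takes 2**n complex amplitudes; the 10**80 figure below is the common rough estimate for atoms in the observable universe, used purely for illustration.

```python
# Back-of-the-envelope: an exact quantum state of n two-level systems
# needs 2**n complex amplitudes. Find where the amplitude count first
# exceeds a rough estimate of the number of atoms in the observable
# universe (~10**80; an illustrative figure, not a precise one).
ATOMS_IN_UNIVERSE = 10**80

n = 1
while 2**n <= ATOMS_IN_UNIVERSE:
    n += 1
print(n)  # 266 -- a few hundred particles already outgrow the universe
```

A neuron has on the order of 10**14 atoms, so an exact quantum-level simulation is hopeless long before you reach a whole brain.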
Again. It's not an argument. It's a misleading intuition pump. Or a failure of philosophy to filter away bullshit, if you will.
Please, read again what I wrote.
Regarding "larger than the Universe": the argument places no restrictions on the runtime or space complexity of the algorithm. It's just another intuitive notion: syntactic processing is manageable by a single person, while other kinds of processing aren't.
I'm sorry for the confrontational tone, but I'm really dismayed that this thing keeps floating around and keeps being regarded as a foundational result.
Only if you buy into the whole premise, which is dubious to say the least, and is a good example of begging the question.
What exactly is dubious about faking an AI with a giant lookup table and fooling would-be Turing Test judges with it? Or did you mean the Turing Test is dubious? Because that’s what the Chinese Room showed (back in 1980).
The dubious part is claiming that a large enough lookup table is not intelligent. It's basically asserted on the grounds "well of course it isn't", but no meaningful arguments are presented to this effect.
Is it just me, or would a giant lookup table fail much weaker tests than the Turing Test? (For instance: just keep asking it to do sums until it runs out.)
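The "do sums until it runs out" attack works because a single-shot table is finite while the questions aren't. A rough sketch of the growth (the digit counts are arbitrary examples):

```python
# Sketch: a table answering every sum "a+b" where each operand has at
# most d digits needs roughly 10**d * 10**d entries -- exponential in
# the length of the question, so any finite table runs out eventually.
for d in (5, 10, 20):
    entries = 10 ** (2 * d)
    print(f"{d}-digit operands: {entries:.0e} entries")
```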
Well presumably the lookup table can have steps you go through (produce this symbol, then go to row 3568), with state as well, so it’s more like a Turing machine than a single-shot table lookup.
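A minimal sketch of what such a stateful table looks like, assuming a toy two-state transition table (the states and symbols here are invented for illustration). Each entry maps (current state, input symbol) to (output, next state), which is a finite-state transducer rather than a one-shot lookup:

```python
# A lookup table *with state*: each entry maps
# (current_state, input_symbol) -> (output_symbol, next_state).
# The operator follows entries mechanically, like rules in the room.
TABLE = {
    ("start", "ni"):  ("", "greet"),
    ("greet", "hao"): ("ni hao!", "start"),
}

def step(state, symbol):
    # Look up the rule for this state/symbol pair and apply it.
    output, next_state = TABLE[(state, symbol)]
    return output, next_state

state = "start"
out = []
for sym in ["ni", "hao"]:
    o, state = step(state, sym)
    if o:
        out.append(o)
print(out)  # ['ni hao!']
```

Add an unbounded tape of such state and you have a Turing machine in all but name, which is why the "it's just a table" framing is slippery.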
That's a motte-and-bailey, really. If one starts out with a LUT and then retreats to a Turing machine when challenged, that is. If our friend in the Chinese Room is in fact permitted to operate some sort of Turing machine, I'd draw a very different set of inferences!
The Chinese Room is a sophisticated way for humans to say they don't understand systematic processes.
No, I think the Chinese Room is widely misunderstood by non-philosophers. The goal of the argument is not to show that machines are incapable of intelligent behaviour.
Even a thermostat can show intelligent behaviour. The issue for the thermostat is that all the intelligence has happened ahead of time.
I mean, that is just talking about probabilistic systems where the probability is either zero or one. When you get into probabilistic systems with a wider range of options than that, and you can feed new data back into the system, you start getting systems that look adaptively intelligent.
There's nothing inherent to the Chinese Room thought experiment that prohibits the operator inside from using a random number source combined with an arbitrarily sophisticated sequence of lookup tables to produce "stochastic parrot" behaviour. Anyone who has played as a Dungeon Master in D&D has used dice and tables for this.
Similarly for feedback. All the operator needs to do is log each input in a file marked for that user and then when new input arrives the old input is used as context in the lookup table. Ultimately, arbitrarily sophisticated intelligent behaviour can be produced without the operator ever having any understanding of it.
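A toy sketch of both mechanisms together, with an invented table and user names purely for illustration: the operator logs each user's inputs, keys the lookup on (history, latest input), and rolls dice over the listed options.

```python
import random

# Sketch of a "stochastic parrot" operator: responses are drawn at
# random from a table keyed by (per-user history, latest input).
# The operator just logs, looks up, and rolls dice -- no understanding
# of the conversation is needed at any point.
TABLE = {
    ((), "hello"):       ["hi there", "greetings"],
    (("hello",), "bye"): ["farewell", "see you"],
}

logs = {}  # per-user input history: the "file marked for that user"

def respond(user, message, rng=random):
    history = tuple(logs.get(user, []))          # old input as context
    options = TABLE.get((history, message), ["..."])
    logs.setdefault(user, []).append(message)    # log the new input
    return rng.choice(options)                   # the dice roll

r1 = respond("alice", "hello")  # one of: 'hi there', 'greetings'
r2 = respond("alice", "bye")    # one of: 'farewell', 'see you'
print(r1, r2)
```

The second reply depends on the first input only through the log, which is exactly the feedback loop described above, and the dice supply the stochastic behaviour.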