The Chinese Room didn't show anything. It's a misleading intuition pump that, for some reason, keeps getting brought up again and again.

Just think about it. All the person in the room does is mechanical manipulation. The person's understanding or lack of understanding of the Chinese language is causally disconnected from everything, including the functioning of the room. There is zero reason to look at their understanding when drawing conclusions about the room.

The second point is that the argument is somehow about syntactic manipulation specifically. But why? What would change if the person in the room were instead solving the quantum-mechanical equations for your brain's quantum state? Would that mean a perfect model of your brain doesn't understand English?

The Chinese Room argument is silent on the question of the necessary and sufficient conditions for intelligence, thinking, and understanding. It's an argument against philosophical functionalism in the theory of mind, which holds that it is sufficient to compare the inputs and outputs of a system to infer intelligence.

The Chinese Room is also an argument that mere symbolic manipulation is insufficient to model a human mind.

As for the QM equations: the many-body problem in quantum mechanics is your enemy. You would need a computer far larger than the entire universe to simulate the quantum states of a single neuron, never mind a human brain.
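To put a rough number on that (a back-of-the-envelope sketch of my own, not part of Searle's argument; treating each particle as a single two-level system and using ~10^80 atoms in the observable universe are both order-of-magnitude assumptions): a brute-force simulation has to store one complex amplitude per basis state, and the number of basis states grows as 2^n.

```python
# Back-of-the-envelope sketch (my illustration, not part of Searle's argument).
# Assumption: each particle contributes one two-level degree of freedom, so a
# brute-force simulation must store 2**n complex amplitudes for n particles.

ATOMS_IN_OBSERVABLE_UNIVERSE = 10**80  # common order-of-magnitude estimate

def amplitudes_needed(n: int) -> int:
    """Size of the full state vector for n coupled two-level systems."""
    return 2 ** n

for n in (50, 300, 1000):
    amps = amplitudes_needed(n)
    comparison = "more" if amps > ATOMS_IN_OBSERVABLE_UNIVERSE else "fewer"
    print(f"n = {n:4d}: about {amps:.1e} amplitudes "
          f"({comparison} than the ~1e80 atoms in the observable universe)")
```

Even a few hundred coupled two-level systems already need more amplitudes than there are atoms in the observable universe, and a single neuron contains incomparably more interacting particles than that.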

Again. It's not an argument. It's a misleading intuition pump. Or a failure of philosophy to filter away bullshit, if you will.

Please read what I wrote again.

Regarding "larger than Universe": "the argument" places no restrictions on runtime or space complexity of the algorithm. It's just another intuitive notion: syntactic processing is manageable by a single person, other kinds of processing aren't.

I'm sorry for the confrontational tone, but I'm really dismayed that this thing keeps floating around and keeps being regarded as a foundational result.