That's about the only thing the Chinese room makes clear. The argument otherwise strikes me as valueless.

It's important to understand the argument in context.

Theoretical computer science established early that input-output behavior captures the essence of computation. The causal mechanism underlying the computation does not matter, because all plausible mechanisms seem to be fundamentally equivalent.

The Chinese room argument showed that this does not extend to intelligence. That intelligence is fundamentally a causal rather than computational concept. That you can't use input-output behavior to tell the difference between an intelligent entity and a hard-coded lookup table.

On one level, LLMs are literally hard-coded lookup tables. But they are also compressed in a way that leads to emergent structures. If you use the LLM through a chat interface, you are interacting with a Chinese room. But if the LLM has other inputs beyond your prompt, or if it has agency to act on its own instead of waiting for input, it's causally a different system. And if the system can update the model on its own instead of using a fixed lookup table to deal with the environment, this is also a meaningful causal difference.
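To make the last distinction concrete, here is a toy sketch (the class names and the update rule are made up, nothing to do with any real LLM stack): the same table-driven core, used once as a frozen prompt-to-reply mapping and once inside a loop that logs what it sees and rewrites its own table.

```python
# Toy sketch only, not a real LLM API.

class FixedModel:
    """The chat-interface case: a frozen mapping from input to output."""
    def __init__(self, table):
        self.table = table

    def reply(self, prompt):
        return self.table.get(prompt, "...")


class AdaptiveAgent(FixedModel):
    """Causally different: it keeps a memory and updates its own table."""
    def __init__(self, table):
        super().__init__(table)
        self.memory = []

    def step(self, observation):
        action = self.reply(observation)                  # still table-driven...
        self.memory.append((observation, action))
        self.table[observation] = action + " (revised)"   # ...but the table changes
        return action


chat = FixedModel({"hello": "hi"})
print(chat.reply("hello"), chat.reply("hello"))           # identical every time

agent = AdaptiveAgent({"door open": "close door"})
print(agent.step("door open"), agent.step("door open"))   # drifts as the table updates
```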

> The Chinese room argument showed that this does not extend to intelligence.

Searle's argument doesn't actually show anything. It just illustrates a complex system that appears intelligent. Searle then asserts, without any particular reasoning, that the system is not intelligent, simply because, well, how could it be, it's just a bunch of books and a mindless automaton following them?

It's a circular argument: A non-human system can't be intelligent because, uhh, it's not human.

This is wrong. The room as a whole is intelligent, and knows Chinese.

People have, of course, made this argument, since it is obvious. Searle responds by saying "OK, well now imagine that the man in the room memorizes all the books and does the entire computation in his head. Now where's the intelligence???" Ummm, ok, now the man is emulating a system in his head, and the system is intelligent and knows Chinese, even though the man emulating it does not -- just like how a NES emulator can execute NES CPU instructions even though the PC it runs on doesn't implement them.
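The emulation point fits in a few lines of Python if you want it spelled out. This is a made-up toy instruction set, not the real NES CPU: the host process has no native notion of these instructions, yet it executes programs written for them purely by rule-following.

```python
def run(program):
    regs = {"A": 0, "B": 0}
    for op, *args in program:
        if op == "LOAD":      # LOAD reg, value
            regs[args[0]] = args[1]
        elif op == "ADD":     # ADD dst, src
            regs[args[0]] += regs[args[1]]
        elif op == "PRINT":   # PRINT reg
            print(regs[args[0]])
    return regs

# The emulated machine adds 2 and 3; the interpreter loop itself is just
# shuffling symbols according to fixed rules.
run([("LOAD", "A", 2), ("LOAD", "B", 3), ("ADD", "A", "B"), ("PRINT", "A")])
```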

Somehow Searle just doesn't comprehend this. I guess he's not a systems engineer.

As to whether a lookup table can be intelligent: I assert that a lookup table that responds intelligently to every possible query is, in fact, intelligent. Of course, such a lookup table would be infinite, and thus physically impossible to construct.
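Even a finite, truncated version of that table is already hopeless. A back-of-envelope sketch, where both numbers are arbitrary assumptions chosen to be generous to the table's defenders:

```python
ALPHABET = 3000   # a modest inventory of common Chinese characters (assumption)
MAX_LEN = 200     # cap queries at 200 characters (assumption)

queries = sum(ALPHABET ** n for n in range(1, MAX_LEN + 1))
print(f"distinct queries: ~10^{len(str(queries)) - 1}")
print("atoms in the observable universe: ~10^80")
```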

A lot of the controversy around the Chinese room argument is because people don't talk explicitly about different modes of thinking. One mode is searching for useful concepts and definitions. Another starts from definitions and searches for consequences. The discussion about the nature of intelligence is mostly about the former.

Intelligence, as we commonly understand it, is something humans have but we currently can't define. Turing proposed a definition based on the observable behavior of a system. We take an aspect of human behavior people consider intelligent and test the behavior of other systems against that. If we can't tell the difference between human behavior and the behavior of an artificial system, we consider the artificial system intelligent.

Searle used a thought experiment to argue that Turing's definition was not useful. That it did not capture the concept of intelligence in the way people intuitively understand it. If it turns out there was a person speaking Chinese answering the questions, the behavior is clearly intelligent. But if there was only a simple mechanism and a precomputed lookup table, it doesn't feel intelligent.

Maybe we need a better definition of intelligence. Maybe intelligence in the sense people intuitively understand it is not a useful concept. Or maybe something else. We don't know that, because we don't really understand intelligence.

> But if there was only a simple mechanism and a precomputed lookup table, it doesn't feel intelligent.

I think a flaw of the argument is that the way it is framed makes it sound like the system is simple (like a "lookup table") which tricks people's intuitions into thinking it doesn't sound "intelligent". But the actual algorithm implemented by the "Chinese room" would in fact be insanely complex. In any case, I think Searle's intuition here is simply wrong. The system is in fact intelligent, even if it's just a series of lookup tables.

> "But the actual algorithm implemented by the "Chinese room" would in fact be insanely complex. In any case, I think Searle's intuition here is simply wrong. The system is in fact intelligent, even if it's just a series of lookup tables.

What this sounds like to me is that you don't place much value on the system actually understanding what it is doing. The system does not understand the input or the output; it is just a series of lookup tables.

If you ask it about the input you just gave it, can it remember that input?

If you ask it to explain your previous input, and explain the output, can it do that? Do those have to be made into new entries in the lookup table first? Does it have the ability to create new entries in the lookup table without being told to do so?

It seems to me you consider "intelligence" a very low bar.

> The system does not understand the input or the output; it is just a series of lookup tables.

What? Why? Of course it understands.

> If you ask it about the input you just gave it, can it remember that input?

The system Searle describes has memory, yes.

Perhaps you are getting at the fact that LLMs performing inference don't have memory, but actually they can be given memory via context. You might argue that this is not the same as human memory, but you don't know this. Maybe the way the brain works is, we spend each day filling our context, and then the training happens when we sleep. If that is true, are humans not intelligent then?
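Here is a minimal sketch of what "memory via context" means in practice; `generate` is just a placeholder for a frozen model call, not any real API.

```python
def generate(context: str) -> str:
    return f"(reply conditioned on {len(context)} chars of history)"

history = []
for user_msg in ["My name is Wei.", "What is my name?"]:
    history.append(f"User: {user_msg}")
    reply = generate("\n".join(history))   # the "memory" is just the growing context
    history.append(f"Assistant: {reply}")

print("\n".join(history))
```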

> If you ask it to explain your previous input, and explain the output, can it do that?

Yes. Searle's fundamental misunderstanding is that "syntax is insufficient for semantics", but this is just nonsense that could only be believed by someone who has never actually tried to derive meaning from syntactic transformation (e.g. coding, or writing a proof).
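As a small illustration of getting semantics out of pure syntax, here is a toy sketch: derivatives computed by blind structural rewriting of nested tuples. The rules are purely syntactic, yet the output is semantically correct (if unsimplified).

```python
def d(expr, var):
    if isinstance(expr, (int, float)):
        return 0
    if expr == var:
        return 1
    op, a, b = expr
    if op == "+":
        return ("+", d(a, var), d(b, var))
    if op == "*":   # product rule applied as a rewrite
        return ("+", ("*", d(a, var), b), ("*", a, d(b, var)))
    raise ValueError(op)

# d/dx (x*x + 3)
print(d(("+", ("*", "x", "x"), 3), "x"))
```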

I used to amuse myself thinking up attacks against the Chinese room. One was to have an actual Chinese professor feed answers into the room but force the conclusion that there was no intelligence. Another was to simplify the Chinese room experiment to apply to a Turing machine instead, requiring a very large lookup table which would surely give the game away.

I think ultimately I decided the Chinese room experiment was actually flawed and didn't reveal what it purported to reveal. From a neurophysiological viewpoint: the Chinese room is very much the Cartesian theater, and Searle places himself as the little man watching the screen. Since the Cartesian theater does not exist, he's never going to see a movie.

I might be missing a more subtle point of Searle's, though; maybe the Chinese room experiment should be read differently?

No, the Chinese Room is essentially the death knell for the Turing Test as a practical tool for evaluating whether an AI is actually intelligent.

The Chinese Room didn't show anything. It's a misleading intuition pump that for some reason is being brought up again and again.

Just think about it. All the person in the room does is mechanical manipulation. The person's understanding or not understanding of Chinese is causally disconnected from everything, including the functioning of the room. There is zero reason to look at their understanding to draw conclusions about the room.

The second point is that it's somehow about syntactic manipulation specifically. But why? What would change if the person in the room were solving the QM equations of your brain's quantum state? Would it mean that the perfect model of your brain doesn't understand English?

The Chinese Room argument is silent on the question of the necessary and sufficient conditions for intelligence, thinking, and understanding. It’s an argument against philosophical functionalism in the theory of mind, which states that it is sufficient to compare the inputs and outputs of a system to infer intelligence.

The Chinese Room is also an argument that mere symbolic manipulation is insufficient to model a human mind.

As for the QM-equations, the many-body problem in QM is your enemy. You would need a computer far larger than the entire universe to simulate the quantum states of a single neuron, never mind a human brain.
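To put rough numbers on that, under the deliberately crude assumption that each atom contributes a single two-level quantum degree of freedom:

```python
import math

# Crude assumption: ~1e14 atoms per neuron, each treated as one qubit, so the
# joint state vector needs 2**n complex amplitudes. Compare exponents only.
atoms_per_neuron = 10 ** 14   # order-of-magnitude guess
exponent = atoms_per_neuron * math.log10(2)
print(f"amplitudes needed: ~10^{exponent:,.0f}; atoms in the universe: ~10^80")
```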

Again. It's not an argument. It's a misleading intuition pump. Or a failure of philosophy to filter away bullshit, if you will.

Please, read again what I wrote.

Regarding "larger than Universe": "the argument" places no restrictions on runtime or space complexity of the algorithm. It's just another intuitive notion: syntactic processing is manageable by a single person, other kinds of processing aren't.

I'm sorry for the confrontational tone, but I'm really dismayed that this thing keeps floating around and keeps being regarded as a foundational result.

Only if you buy into the whole premise, which is dubious to say the least, and is a good example of begging the question.

What exactly is dubious about faking an AI with a giant lookup table and fooling would-be Turing Test judges with it? Or did you mean the Turing Test is dubious? Because that’s what the Chinese Room showed (back in 1980).

The dubious part is claiming that a large enough lookup table is not intelligent. It's basically asserted on the grounds "well of course it isn't", but no meaningful arguments are presented to this effect.

Is it just me, or would a giant lookup table fail much weaker tests that you could throw at it? (For instance: just keep asking it to do sums until it runs out.)

Well presumably the lookup table can have steps you go through (produce this symbol, then go to row 3568), with state as well, so it’s more like a Turing machine than a single-shot table lookup.
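Something like this toy sketch (the table format and the example machine are made up, just to show what "steps plus state" buys you): each row says what to write, where to move, and which row to use next.

```python
# A tiny table-driven machine that flips a binary string.
table = {                        # (state, symbol) -> (write, move, next_state)
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", " "): (" ", 0, "halt"),
}

def run(tape_str):
    tape, pos, state = list(tape_str) + [" "], 0, "flip"
    while state != "halt":
        write, move, state = table[(state, tape[pos])]
        tape[pos] = write
        pos += move
    return "".join(tape).strip()

print(run("10110"))   # -> 01001, produced by nothing but table lookups and state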

That's a motte-and-bailey, really. If one starts out with a LUT and then retreats to a Turing machine when challenged, that is. If our friend in the Chinese room is in fact permitted to operate some sort of Turing machine, I'd make a very different set of inferences!

The Chinese Room is a sophisticated way for humans to say they don't understand systematic systems and processes.

No, I think the Chinese Room is widely misunderstood by non-philosophers. The goal of the argument is not to show that machines are incapable of intelligent behaviour.

Even a thermostat can show intelligent behaviour. The issue for the thermostat is that all the intelligence has happened ahead of time.

I mean, that is just talking about probabilistic systems where the probability is either zero or one. When you get into probabilistic systems with a wider range of options than that, and you can feed new data back into the system, you start getting systems that look adaptively intelligent.

There's nothing inherent to the Chinese Room thought experiment that prohibits the operator inside from using a random number source combined with an arbitrarily sophisticated sequence of lookup tables to produce "stochastic parrot" behaviour. Anyone who has played as a Dungeon Master in D&D has used dice and tables for this.

Similarly for feedback. All the operator needs to do is log each input in a file marked for that user, and then, when new input arrives, the old input is used as context in the table lookup. Ultimately, arbitrarily sophisticated intelligent behaviour can be produced without the operator ever having any understanding of it.
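Both points fit in a few lines. This is only a sketch, with a made-up table and made-up keys, but it shows dice-driven choice plus a per-user log feeding back into the lookup:

```python
import random

response_table = {
    "greeting":             [("你好!", 3), ("您好。", 1)],
    "greeting|seen_before": [("又见面了!", 1)],   # a different row once context exists
}

user_logs = {}   # stands in for the operator's per-user files

def answer(user, message_kind):
    history = user_logs.setdefault(user, [])
    key = message_kind + ("|seen_before" if history else "")
    replies, weights = zip(*response_table[key])
    reply = random.choices(replies, weights=weights)[0]   # the dice roll
    history.append(message_kind)                          # the feedback step
    return reply

print(answer("alice", "greeting"))   # weighted roll on the plain "greeting" row
print(answer("alice", "greeting"))   # keyed on stored context the second time
```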