It's important to understand the argument in context.

Theoretical computer science established early that input-output behavior captures the essence of computation. The causal mechanism underlying the computation does not matter, because all plausible mechanisms seem to be fundamentally equivalent.

The Chinese room argument showed that this does not extend to intelligence. Intelligence is fundamentally a causal rather than computational concept: you can't use input-output behavior to tell the difference between an intelligent entity and a hard-coded lookup table.

On one level, LLMs are literally hard-coded lookup tables. But they are also compressed in a way that leads to emergent structures. If you use the LLM through a chat interface, you are interacting with a Chinese room. But if the LLM has other inputs beyond your prompt, or if it has agency to act on its own instead of waiting for input, it's causally a different system. And if the system can update the model on its own instead of using a fixed lookup table to deal with the environment, this is also a meaningful causal difference.
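To make the causal distinction concrete, here's a rough Python sketch (the `model` is just a stub standing in for an LLM, and none of this is any particular framework's API): the same frozen function sits at the center of all three setups, and only the control flow around it changes.

```python
# Illustrative sketch, not a real framework: three causal setups around the
# same underlying model. `model` is a trivial stub standing in for an LLM
# forward pass; only the loop around it differs.

def model(prompt, weights):
    """Stub standing in for a frozen LLM forward pass."""
    return f"response({len(weights)} params) to: {prompt!r}"

# (a) Chat interface: a pure input-output mapping, causally a "Chinese room".
def chat_turn(prompt, weights):
    return model(prompt, weights)

# (b) Agent loop: the system also reads its environment and acts on its own,
# instead of waiting for a user prompt.
def agent_loop(weights, sense, act, steps=3):
    for _ in range(steps):
        observation = sense()              # inputs beyond the user's prompt
        action = model(observation, weights)
        act(action)                        # effects beyond a printed reply

# (c) Self-updating loop: the "lookup table" itself changes in response to
# the environment, which is the further causal difference described above.
def learning_loop(weights, sense, act, update, steps=3):
    for _ in range(steps):
        observation = sense()
        action = model(observation, weights)
        act(action)
        weights = update(weights, observation, action)
    return weights

if __name__ == "__main__":
    w = [0.0] * 4
    print(chat_turn("hello", w))
    agent_loop(w, sense=lambda: "sensor reading", act=print)
    w = learning_loop(w, sense=lambda: "sensor reading", act=print,
                      update=lambda w, o, a: w + [0.0])  # toy "update"
    print(len(w), "params after updates")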

> The Chinese room argument showed that this does not extend to intelligence.

Searle's argument doesn't actually show anything. It just illustrates a complex system that appears intelligent. Searle then asserts, without any particular reasoning, that the system is not intelligent, simply because, well, how could it be, it's just a bunch of books and a mindless automaton following them?

It's a circular argument: a non-human system can't be intelligent because, uhh, it's not human.

This is wrong. The room as a whole is intelligent, and knows Chinese.

People have, of course, made this argument, since it is obvious. Searle responds by saying "OK, well now imagine that the man in the room memorizes all the books and does the entire computation in his head. Now where's the intelligence???" Ummm, ok, now the man is emulating a system in his head, and the system is intelligent and knows Chinese, even though the man emulating it does not -- just like how an NES emulator can execute NES CPU instructions even though the PC it runs on doesn't implement them.
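If the layering point isn't obvious, here's a toy interpreter in Python (the instruction set is made up for illustration, not real NES/6502 semantics): Python has no LDA or ADC instructions, yet the system composed of interpreter plus program executes them just fine.

```python
# Minimal interpreter sketch for the emulation point. The opcodes below are
# a made-up toy, not actual 6502 behavior.

def run(program):
    acc = 0                      # emulated accumulator register
    for op, arg in program:
        if op == "LDA":          # load a value into the accumulator
            acc = arg
        elif op == "ADC":        # add a value to the accumulator
            acc += arg
        elif op == "PRT":        # print the accumulator
            print(acc)
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return acc

# The Python process "does not know" these opcodes natively; the system
# composed of interpreter + program nevertheless computes with them.
run([("LDA", 2), ("ADC", 40), ("PRT", None)])   # prints 42
```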

Somehow Searle just doesn't comprehend this. I guess he's not a systems engineer.

As to whether a lookup table can be intelligent: I assert that a lookup table that responds intelligently to every possible query is, in fact, intelligent. Of course, such a lookup table would be infinite, and thus physically impossible to construct.
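Just how impossible is easy to see with a back-of-the-envelope sketch. Even if you cap the length of a query, the table dwarfs anything physical; the vocabulary size and length cap below are illustrative assumptions, not measurements of anything.

```python
# Back-of-the-envelope sketch: count the entries a bounded lookup table
# would need. Both constants are illustrative assumptions.

VOCAB = 50_000        # assumed token vocabulary
MAX_TOKENS = 1_000    # assumed cap on query length

possible_queries = sum(VOCAB ** n for n in range(1, MAX_TOKENS + 1))

print(f"possible queries : about 10^{len(str(possible_queries)) - 1}")
print("atoms in universe: about 10^80 (commonly cited rough figure)")
```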

A lot of the controversy around the Chinese room argument is because people don't talk explicitly about different modes of thinking. One mode is searching for useful concepts and definitions. Another starts from definitions and searches for consequences. The discussion about the nature of intelligence is mostly about the former.

Intelligence, as we commonly understand it, is something humans have but we currently can't define. Turing proposed a definition based on the observable behavior of a system. We take an aspect of human behavior people consider intelligent and test the behavior of other systems against that. If we can't tell the difference between human behavior and the behavior of an artificial system, we consider the artificial system intelligent.

Searle used a thought experiment to argue that Turing's definition was not useful: that it did not capture the concept of intelligence in the way people intuitively understand it. If it turned out there was a person speaking Chinese answering the questions, the behavior is clearly intelligent. But if there was only a simple mechanism and a precomputed lookup table, it doesn't feel intelligent.

Maybe we need a better definition of intelligence. Maybe intelligence in the sense people intuitively understand it is not a useful concept. Or maybe something else. We don't know that, because we don't really understand intelligence.

> But if there was only a simple mechanism and a precomputed lookup table, it doesn't feel intelligent.

I think a flaw of the argument is that the way it is framed makes the system sound simple (like a "lookup table"), which tricks people's intuitions into thinking it doesn't sound "intelligent". But the actual algorithm implemented by the "Chinese room" would in fact be insanely complex. In any case, I think Searle's intuition here is simply wrong. The system is in fact intelligent, even if it's just a series of lookup tables.

> "But the actual algorithm implemented by the "Chinese room" would in fact be insanely complex. In any case, I think Searle's intuition here is simply wrong. The system is in fact intelligent, even if it's just a series of lookup tables.

What this sounds like to me is that you don't place much value on the system actually understanding what it is doing. The system does not understand the input or the output; it is just a series of lookup tables.

If you ask it about the input you just gave it, can it remember that input?

If you ask it to explain your previous input, and explain the output, can it do that? Do those have to be made into new entries in the lookup table first? Does it have the ability to create new entries in the lookup table without being told to do so?

It seems to me you consider "intelligence" a very low bar.

> The system does not understand the input or the output; it is just a series of lookup tables.

What? Why? Of course it understands.

> If you ask it about the input you just gave it, can it remember that input?

The system Searle describes has memory, yes.

Perhaps you are getting at the fact that LLMs performing inference don't have memory, but actually they can be given memory via context. You might argue that this is not the same as human memory, but you don't know this. Maybe the way the brain works is, we spend each day filling our context, and then the training happens when we sleep. If that is true, are humans not intelligent then?
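Here's a minimal sketch of what "memory via context" means (`generate` is a stub, not a real API): nothing inside the model changes between turns; the earlier exchange is simply prepended to every new prompt.

```python
# Minimal sketch of "memory via context". `generate` is a stub standing in
# for an LLM call; a real model would condition its reply on the whole context.

def generate(context: str) -> str:
    """Stub: a real LLM would read the full context and reply accordingly."""
    return f"(reply conditioned on {len(context)} chars of context)"

history: list[str] = []

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    context = "\n".join(history)       # the "memory" is just accumulated text
    reply = generate(context)
    history.append(f"Assistant: {reply}")
    return reply

print(chat("My name is Alice."))
print(chat("What is my name?"))        # the name is still present in the context
```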

> If you ask it to explain your previous input, and explain the output, can it do that?

Yes. Searle's fundamental misunderstanding is that "syntax is insufficient for semantics", but this is just nonsense that could only be believed by someone who has never actually tried to derive meaning from syntactic transformation (e.g. coding, or writing a proof).
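For a concrete example of meaning falling out of purely syntactic manipulation, here's a toy symbolic differentiator in Python: the rules only pattern-match on the shape of expressions, yet the output is the semantically correct derivative.

```python
# Deriving "meaning" by purely syntactic rewriting: symbolic differentiation
# as pattern-matching on expression trees. Expressions are numbers, variable
# names (str), or ('+'|'*', left, right) tuples.

def d(expr, var):
    if isinstance(expr, (int, float)):          # d/dx c = 0
        return 0
    if isinstance(expr, str):                   # d/dx x = 1, d/dx y = 0
        return 1 if expr == var else 0
    op, a, b = expr
    if op == '+':                               # sum rule, applied purely by shape
        return ('+', d(a, var), d(b, var))
    if op == '*':                               # product rule, applied purely by shape
        return ('+', ('*', d(a, var), b), ('*', a, d(b, var)))
    raise ValueError(f"unknown operator {op!r}")

# d/dx (x*x + 3) = (1*x + x*1) + 0 (unsimplified, but correct)
print(d(('+', ('*', 'x', 'x'), 3), 'x'))
```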

I used to amuse myself thinking up attacks against the Chinese room. One was to have an actual Chinese professor feed answers into the room but force the conclusion that there was no intelligence. Another was to simplify the Chinese room experiment to apply to a Turing machine instead, requiring a very large lookup table which would surely give the game away.

I think ultimately I decided the Chinese room experiment was actually flawed and didn't reveal what it purported to reveal. From a neurophysiological viewpoint: The Chinese room is very much the Cartesian theater, and Searle places himself as the little man watching the screen. Since the Cartesian theater does not exist, he's never going to see a movie.

I might be missing a more subtle point of Searle's, though; maybe the Chinese room experiment should be read differently?