A lot of the controversy around the Chinese room argument arises because people don't distinguish explicitly between different modes of thinking. One mode searches for useful concepts and definitions. Another starts from definitions and searches for their consequences. The discussion about the nature of intelligence is mostly about the former.
Intelligence, as we commonly understand it, is something humans have but we currently can't define. Turing proposed a definition based on the observable behavior of a system. We take an aspect of human behavior people consider intelligent and test the behavior of other systems against that. If we can't tell the difference between human behavior and the behavior of an artificial system, we consider the artificial system intelligent.
Searle used a thought experiment to argue that Turing's definition was not useful: that it did not capture the concept of intelligence in the way people intuitively understand it. If it turns out there was a person speaking Chinese answering the questions, the behavior is clearly intelligent. But if there was only a simple mechanism and a precomputed lookup table, it doesn't feel intelligent.
Maybe we need a better definition of intelligence. Maybe intelligence in the sense people intuitively understand it is not a useful concept. Or maybe something else. We don't know that, because we don't really understand intelligence.
> But if there was only a simple mechanism and a precomputed lookup table, it doesn't feel intelligent.
I think a flaw of the argument is that the way it is framed makes the system sound simple (like a "lookup table"), which tricks people's intuitions into thinking it can't be "intelligent". But the actual algorithm implemented by the "Chinese room" would in fact be insanely complex. In any case, I think Searle's intuition here is simply wrong. The system is in fact intelligent, even if it's just a series of lookup tables.
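To make "a series of lookup tables" concrete, here is a toy Python sketch (my own illustration; the names `RULE_BOOK` and `chinese_room` are hypothetical). The table is keyed on the entire conversation so far, which is exactly why a behaviorally complete version would be combinatorially enormous rather than "simple":

```python
from typing import Dict, Tuple

# Hypothetical precomputed rule book: (conversation so far) -> next reply.
# A table covering all plausible Chinese conversations would be astronomically
# large; these two entries exist only to show the shape of the idea.
RULE_BOOK: Dict[Tuple[str, ...], str] = {
    ("你好",): "你好！有什么可以帮你？",
    ("你好", "你好！有什么可以帮你？", "今天天气怎么样？"): "我看不到窗外，不过希望是晴天。",
}

def chinese_room(history: Tuple[str, ...]) -> str:
    """Return the precomputed reply for this exact conversation history."""
    # Fallback reply: "Sorry, I don't understand."
    return RULE_BOOK.get(history, "对不起，我不明白。")

print(chinese_room(("你好",)))  # -> 你好！有什么可以帮你？
```

The operator of the room (here, the Python interpreter) never needs to understand Chinese; the question is whether the system as a whole, rule book included, does.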
> "But the actual algorithm implemented by the "Chinese room" would in fact be insanely complex. In any case, I think Searle's intuition here is simply wrong. The system is in fact intelligent, even if it's just a series of lookup tables.
What this sounds like to me is that you don't place much value on the system actually understanding what it is doing. The system does not understand the input or the output; it is just a series of lookup tables.
If you ask it about the input you just gave it, can it remember that input?
If you ask it to explain your previous input, and explain the output, can it do that? Do those have to be made into new entries in the lookup table first? Does it have the ability to create new entries in the lookup table without being told to do so?
It seems to me you consider "intelligence" a very low bar.
> The system does not understand the input or the output; it is just a series of lookup tables
What? Why? Of course it understands.
> If you ask it about the input you just gave it, can it remember that input?
The system Searle describes has memory, yes.
Perhaps you are getting at the fact that LLMs performing inference don't have memory, but they can be given memory via context. You might argue that this is not the same as human memory, but you don't know that. Maybe the way the brain works is that we spend each day filling our context, and the training happens while we sleep. If that is true, are humans not intelligent then?
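To be concrete about "memory via context", here is a rough sketch of the idea, not any particular vendor's API (the `model` callable and `echo_model` are stand-ins): the model itself is stateless between calls, but the surrounding loop replays the full history every turn, so the system behaves as if it remembers.

```python
from typing import Callable, Dict, List

Message = Dict[str, str]  # e.g. {"role": "user", "content": "..."}

def chat_with_memory(model: Callable[[List[Message]], str],
                     user_turns: List[str]) -> List[Message]:
    """Give a stateless model 'memory' by handing it the whole history each turn."""
    history: List[Message] = []
    for turn in user_turns:
        history.append({"role": "user", "content": turn})
        reply = model(history)  # the model sees everything said so far
        history.append({"role": "assistant", "content": reply})
    return history

# Dummy stand-in model: it only "remembers" via the history it is handed.
def echo_model(history: List[Message]) -> str:
    user_msgs = [m["content"] for m in history if m["role"] == "user"]
    return f"So far you have said: {user_msgs}"

for msg in chat_with_memory(echo_model, ["hello", "what did I say before?"]):
    print(msg["role"], ":", msg["content"])
```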
> If you ask it to explain your previous input, and explain the output, can it do that?
Yes. Searle's fundamental misunderstanding is that "syntax is insufficient for semantics", but this is just nonsense that could only be believed by someone who has never actually tried to derive meaning from syntactic transformation (e.g., coding or writing a proof).
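As a tiny illustration (mine, not Searle's) of meaning coming out of pure syntax: Peano-style addition implemented as blind string rewriting. The rules never mention what a number is, yet the output is correct arithmetic.

```python
def add(x: str, y: str) -> str:
    """Purely syntactic rewriting: add(S(x), y) -> S(add(x, y)); add(0, y) -> y."""
    if x == "0":
        return y
    assert x.startswith("S(") and x.endswith(")")
    return "S(" + add(x[2:-1], y) + ")"

two = "S(S(0))"
three = "S(S(S(0)))"
print(add(two, three))  # S(S(S(S(S(0))))), i.e. 2 + 3 = 5
```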
I used to amuse myself thinking up attacks against the Chinese room. One was to have an actual Chinese professor feed answers into the room but force the conclusion that there was no intelligence. Another was to simplify the Chinese room experiment to apply to a Turing machine instead, requiring a very large lookup table which would surely give the game away.
I think ultimately I decided the Chinese room experiment was actually flawed and didn't reveal what it purported to reveal. From a neurophysiological viewpoint, the Chinese room is very much the Cartesian theater, and Searle places himself as the little man watching the screen. Since the Cartesian theater does not exist, he's never going to see a movie.
I might be missing a more subtle point of Searle's, though; maybe the Chinese room experiment should be read differently?