> "But the actual algorithm implemented by the "Chinese room" would in fact be insanely complex. In any case, I think Searle's intuition here is simply wrong. The system is in fact intelligent, even if it's just a series of lookup tables.
What this sounds like to me is that you don't place much value on the system actually understanding what it is doing. The system does not understand the input or the output; it is just a series of lookup tables.
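To make that concrete, here is a toy sketch of what I mean by "just a series of lookup tables" (the entries are made up, and a real rulebook would be astronomically larger): every reply is a stateless key-to-value fetch, so there is nothing for a follow-up question to consult.

```python
# Hypothetical rulebook: every reply is a pure key -> value fetch.
RULEBOOK = {
    "你好": "你好！",                # made-up entries; a real rulebook would be astronomically large
    "你叫什么名字？": "我没有名字。",
}

def chinese_room(symbols: str) -> str:
    """Look the input symbols up in the rulebook; no state is carried between calls."""
    return RULEBOOK.get(symbols, "对不起，我不明白。")

print(chinese_room("你好"))              # 你好！
print(chinese_room("我刚才说了什么？"))    # fallback -- the room keeps no record of the exchange
```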
If you ask it about the input you just gave it, can it remember that input?
If you ask it to explain your previous input, and explain the output, can it do that? Do those have to be made into new entries in the lookup table first? Does it have the ability to create new entries in the lookup table without being told to do so?
It seems to me that you consider "intelligence" a very low bar.
> The system does not understand the input or the output; it is just a series of lookup tables
What? Why? Of course it understands.
> If you ask it about the input you just gave it, can it remember that input?
The system Searle describes has memory, yes.
Perhaps you are getting at the fact that LLMs performing inference don't have memory, but they can in fact be given memory via context. You might argue that this is not the same as human memory, but you don't know that. Maybe the way the brain works is that we spend each day filling our context, and the training happens while we sleep. If that is true, are humans not intelligent?
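To be clear about what I mean by "memory via context", here is a minimal sketch (the `generate` function is a hypothetical stand-in for whatever model you run, not a real API): the model itself is stateless between calls, but replaying the transcript as input gives it something functionally like memory.

```python
from typing import List

def generate(context: str) -> str:
    """Hypothetical stand-in for a stateless LLM call: text in, text out."""
    # Swap in a real model call here; the echo is just so the sketch runs.
    return f"(reply conditioned on {len(context)} characters of context)"

def chat() -> None:
    transcript: List[str] = []          # the "memory" lives here, outside the model
    while (user := input("you: ")):
        transcript.append(f"User: {user}")
        # The full transcript is replayed every turn, so the stateless model
        # can answer questions about earlier inputs.
        reply = generate("\n".join(transcript) + "\nAssistant:")
        transcript.append(f"Assistant: {reply}")
        print("model:", reply)

if __name__ == "__main__":
    chat()
```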
> If you ask it to explain your previous input, and explain the output, can it do that?
Yes. Searle's fundamental misunderstanding is that "syntax is insufficient for semantics", but this is nonsense that could only be believed by someone who has never actually tried to derive meaning from syntactic transformation (e.g. writing code or a proof).
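To be concrete about deriving meaning from syntactic transformation, here is a toy example (mine, not Searle's): Peano-style addition done purely by string rewriting. Nothing in the rules "knows" what a number is, yet 2 + 2 = 4 falls out of the symbol shuffling.

```python
# Numerals are nothing but nested strings: 0 = "Z", 1 = "S(Z)", 2 = "S(S(Z))", ...
def num(n: int) -> str:
    return "Z" if n == 0 else f"S({num(n - 1)})"

# Two purely syntactic rewrite rules for addition:
#   add(Z, b)    -> b
#   add(S(a), b) -> S(add(a, b))
def add(a: str, b: str) -> str:
    if a == "Z":
        return b
    inner = a[2:-1]                 # strip the "S(" ... ")" wrapper -- pure string surgery
    return f"S({add(inner, b)})"

print(add(num(2), num(2)))          # S(S(S(S(Z)))), i.e. 4, derived with no notion of quantity
```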