> The system does not understand the input or the output, it is just a series of lookup tables
What? Why? Of course it understands.
> If you ask it about the input you just gave it, can it remember that input?
The system Searle describes has memory, yes.
Perhaps you are getting at the fact that LLMs performing inference have no persistent memory. But they can be given memory via context. You might argue that this is not the same as human memory, but you don't know that. Maybe the brain works like this: we spend each day filling our context, and then the training happens while we sleep. If that were true, would humans not be intelligent?
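To make the "memory via context" point concrete, here is a minimal sketch. The model itself is stateless between calls; the "memory" is just the transcript we pass back in each time. `fake_llm` is a hypothetical stand-in for a real model call, used only to show that earlier turns remain visible to the model:

```python
def fake_llm(context: str) -> str:
    # A real model would generate text from the context; this stub just
    # demonstrates that earlier turns are present in what the model sees.
    if "my name is Ada" in context:
        return "You said your name is Ada."
    return "I don't know yet."

history = []  # the "memory" lives here, outside the model

def chat(user_msg: str) -> str:
    # Every call sends the whole transcript, so the stateless model
    # can still respond to things said earlier in the conversation.
    history.append(f"User: {user_msg}")
    reply = fake_llm("\n".join(history))
    history.append(f"Assistant: {reply}")
    return reply

chat("Hi, my name is Ada.")
print(chat("Do you remember my name?"))
```

This is how chat interfaces over stateless completion APIs generally work: nothing in the weights changes between turns, yet the system "remembers," because the context window carries the past forward.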