I think these questions were addressed by Searle[1]. His argument is not that AI is impossible; it's that the existence of surprisingly human-like behavior doesn't turn a non-cognitive system into a cognitive one. The strong AI hypothesis holds that if a computational system produces output similar to what a human would produce by cognition, the system must itself be modeling cognition. The Chinese room is an argument against that hypothesis.
The paper also offers some suggestions as to where cognitive science needs to go to make strong AI possible.
From my viewpoint, if you really think that LLMs can model cognition, then you also have to bring along a model of human cognition to compare against, and you have to make the comparison "under the hood," as it were; the external behavior is not enough. In my formulation, if a space alien showed up with vastly different biology but appeared to be cognitively conscious, we may or may not want to credit it with cognition, but it's just whataboutism to use this hypothetical alien to argue for the consciousness of an LLM.
1. https://www.cambridge.org/core/services/aop-cambridge-core/c...