> I don't know enough about what makes up general intelligence to make this claim. I don't think you do either.
This is the fundamental issue: no one seems able to define general intelligence. Ten years ago, most scientists would probably have agreed that the Turing test was sufficient, but the goalposts shifted when ChatGPT passed it.
If it’s not clear what AGI even means, it’s hard to say whether an LLM can achieve it, because the debate devolves into pointing out that an LLM is not a human.
> Ten years ago, most scientists would probably have agreed that the Turing test was sufficient, but the goalposts shifted when ChatGPT passed it.
The popularity of, and lack of consensus on, the Chinese room thought experiment kind of imply that this is wrong? I don't think many scientists (or, more relevantly, philosophers of mind) would have said, even 10 years ago, "if a computer is able to fool a human into thinking it's a human, then the computer must possess general intelligence".
Even Turing's perspective was, from what I understand, that we must avoid treating something that might be sentient as a machine. He proposed that if a computer can act convincingly human, we ought to treat it as if it were human, not because it must be a conscious being but because it might be.
Perhaps I am wrong, or am overstating the belief that the Turing test would be sufficient. My recollection is that it was widely regarded as a meaningful, if not conclusive, test.
> the Chinese room thought experiment
This is an interesting thought experiment, but I think the “computers don’t understand” interpretation relies on magical thinking.
The notion that “systemic” understanding is not real simply begs the question: it assumes the very conclusion at issue, that a system of parts which don’t individually understand cannot itself understand. It also ignores that a human is itself such a system.