Will LLMs approach something that appears to be AGI? Maybe. Probably. They're already "better" than humans in many use cases.
LLMs/GPTs are essentially "just" statistical models. At this point the argument becomes more about philosophy than science. What is "intelligence?"
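To make "just a statistical model" concrete, here's a toy sketch: a bigram model that predicts the next word purely from counts over a tiny made-up corpus. Real LLMs use huge neural networks over subword tokens, not raw counts, but the core loop is the same idea: sample the next token from a learned probability distribution.

```python
import random
from collections import Counter, defaultdict

# Tiny toy corpus (purely illustrative).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to observed counts."""
    counts = follows[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation starting from "the".
word = "the"
out = [word]
for _ in range(5):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

Every run produces a plausible-looking "new" sentence the corpus never contained, which is roughly the sense in which an LLM's output is new without anyone claiming the counting table is intelligent.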
If an LLM can do something truly novel with no human prompting, with no directive other than something it has created for itself - then I guess we can call that intelligence.
How many people do you know who are capable of doing something truly novel? Definitely not me; I'm just an average PhD doing average research.
Literally every single person I know that is capable of holding a pen or typing on a keyboard can create something new.
Something new != truly novel. ChatGPT creates something new every time I ask it a question.
adjective: novel
definition: new or unusual in an interesting way.
ChatGPT can create new things, sure, but it does so at your directive. It doesn't do that because it wants to, which gets back to the other part of my answer.
When an LLM can create something without human prompting or directive, then we can call that intelligence.
What does intelligence have to do with having desires or goals? An amoeba can do stuff on its own but it's not intelligent. I can imagine a god-like intelligence that is a billion times smarter and more capable than any human in every way, and it could just sit idle forever without any motivation to do anything.
Does the amoeba make choices?
Do you make that choice? Did I?
I did a really fun experiment the other night. You should try it.
I was a little bored of the novel I have been reading so I sat down with Gemini and we collaboratively wrote a terrible novel together.
At the start I was prompting it a lot about the characters and the plot, but eventually it started writing longer and longer chapters by itself. Characters were being killed off left, right, and center.
It was hilariously bad, but it was creative and it was fun.
I'm a lowly high school diploma holder. I thought getting a PhD meant you had done something novel (your thesis).
Is that wrong?
My PhD thesis, just like 99% of other PhD theses, does not have any "truly novel" ideas.
Just because it's something that no one has done yet, doesn't mean that it's not the obvious-to-everyone next step in a long, slow march.
AI manufacturers aren't comparing their models against most people; they now say it's "smarter than 99% of people" or "performs tasks at a PhD level".
Look, your argument ultimately reduces down to goalpost-moving what "novel" means, and you can position those goalposts anywhere you want depending on whether you want to push a pro-AI or anti-AI narrative. Is writing a paragraph that no one has ever written before "truly novel"? I can do that. AI can do that. Is inventing a new atomic element "truly novel"? I can't do that. Humans have done that. AI can't do that. See?
Isn't the human brain also "just" a big statistical model as far as we know? (very loosely speaking)