I'd say yes, by at least one old definition made by someone who was at the time in a position to have a definition.
When DeepMind was founded (2010), their definition was the following: AI is a system that learns to perform one thing; AGI is a system that learns to perform many things at the same time.
I would say that whatever we have today, "as a system", matches that definition. In other words, the "system" that is, say, gpt5/gemini3/etc. has learned to "do" (though "do" is debatable) a lot of tasks (read/write/play chess/code/etc.) "at the same time". And from a "pure" ML point of view, it learned those things from the "simple" core objective of next token prediction (plus later enhancements, RL, etc.). That is pretty cool.
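For concreteness, a minimal sketch of what that core objective looks like, assuming nothing about the real GPT/Gemini training stacks (toy PyTorch model, made-up sizes, random data):

```python
# Toy next-token-prediction objective: predict token t+1 from tokens up to t.
# Hypothetical stand-in model, NOT the actual GPT/Gemini training code.
import torch
import torch.nn as nn

vocab_size, d_model = 1000, 64
embed = nn.Embedding(vocab_size, d_model)
backbone = nn.LSTM(d_model, d_model, batch_first=True)   # stand-in for a transformer
head = nn.Linear(d_model, vocab_size)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab_size, (8, 33))            # fake batch of token ids
inputs, targets = tokens[:, :-1], tokens[:, 1:]           # shift by one position

hidden, _ = backbone(embed(inputs))
logits = head(hidden)                                      # (batch, seq, vocab)
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()                                            # one step of "learn to predict the next token"
```

That single loss, plus the later RL/fine-tuning stages, is the "simple" core objective I mean.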
So I can see that as an argument for "yes".
But even the person who coined that definition has "moved the goalposts" on his own definition. In recent interviews, Hassabis has moved towards a definition that resembles the one from the paper linked here. So there's that. We are all moving the goalposts.
And it's not a recent thing. People did this back in the 80s. There's the famous "As soon as AI does something, it ceases to be AI", or, paraphrased, "AI is everything that hasn't been done yet".
> AGI is a system that learns to perform many things at the same time.
What counts as a "thing"? Because arguably some of the pre-transformer deep ANNs would also qualify as AGI, but no one would consider them intelligent (not in the human or animal sense of intelligence).
And you probably don't even need fancy neural networks. Take an RL algorithm and a properly mapped solution space, and it will learn to do whatever you want, as long as the problem can be mapped (see the sketch below).
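To make "properly mapped" concrete, here is a toy tabular Q-learning sketch; the 5-state chain environment is invented for the example, but the same generic update rule learns any problem you can map into states, actions and rewards:

```python
# Tabular Q-learning on a made-up 5-state chain: once the problem is "mapped"
# into state/action indices and a reward, the same update rule learns it.
import random

n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(s, a):
    # toy dynamics: action 1 moves right, action 0 moves left; reward at the right end
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

for episode in range(2000):
    s = 0
    for _ in range(20):
        if random.random() < eps:
            a = random.randrange(n_actions)                      # explore
        else:
            a = max(range(n_actions), key=lambda x: Q[s][x])     # exploit
        s2, r = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])    # Q-learning update
        s = s2

# learned greedy policy: "always move right" on this mapping
print([max(range(n_actions), key=lambda x: Q[s][x]) for s in range(n_states)])
```

Swap the `step` function for any other mapped problem and the rest stays the same, which is the point: most of the work lives in the mapping.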
Can you cite the DeepMind definition? I get no Google results for it.
It's from a documentary that tracks Hassabis' life. I copy/pasted this from an old comment of mine (the quotes are from the documentary; I can probably look up timestamps if you need them, but I'd say it's in the first ~15 minutes, when they cover the first days of DeepMind):
----
In 2010, one of the first "presentations" given at DeepMind by Hassabis had a few slides on AGI (from the movie/documentary "The Thinking Game"):
Quote from Shane Legg: "Our mission was to build an AGI - an artificial general intelligence, and so that means that we need a system which is general - it doesn't learn to do one specific thing. That's really key part of human intelligence, learn to do many many things".
Quote from Hassabis: "So, what is our mission? We summarise it as <Build the world's first general learning machine>. So we always stress the word general and learning here the key things."
And the key slide (that I think cements the difference between what AGI stood for then, vs. now):
> AI - one task vs. AGI - many tasks
> at human level intelligence.
It doesn't play chess? It just parrots it very well.
Yeah, maybe. But what matters is the end result. In the kaggle match, one of the games from the finals (grok vs o3) is rated by chesscom's Stockfish at 1900 vs 2500. That is, they played a game at around those ratings.
For reference, the average chesscom player is ~900 Elo, while the average FIDE-rated player is ~1600. So, yeah. Parrot or not, the LLMs can make moves above the average player's level. Whatever that means.
I believe it will make illegal moves (unaided by any tools, of course). It will also make mistakes like not being able to construct the board correctly given a FEN string. For these reasons I consider long strings of correct moves insufficient to say it can play the game. If my first two statements, about a propensity for illegal moves and other failures on "easy for humans" tasks, were untrue, then I would reconsider.
In the kaggle test they considered the match forfeited if the model could not produce a legal move within 3 tries (none of the matches in the finals were forfeited; they all ended with checkmate on the board). Again, chesscom's interface won't let you make illegal moves, and the average rating there is 900. Take that as you will. A sketch of that kind of legality check is below.
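For what it's worth, here is roughly how such a "legal move within 3 tries, else forfeit" check could look with the python-chess library; the get_llm_move callback is a placeholder, not the actual kaggle harness:

```python
# Sketch of a "3 tries to produce a legal move, else forfeit" rule.
# get_llm_move is a hypothetical callback that asks the model for a SAN move.
import chess

def play_move(board: chess.Board, get_llm_move, max_tries: int = 3) -> bool:
    """Return True if a legal move was played, False if the model forfeits."""
    for _ in range(max_tries):
        san = get_llm_move(board.fen())      # the model sees the position as a FEN string
        try:
            move = board.parse_san(san)      # raises if the SAN is malformed or illegal
        except ValueError:
            continue                         # illegal/unparseable move: let the model retry
        board.push(move)
        return True
    return False                             # three failed attempts: forfeit

# usage: board = chess.Board(); ok = play_move(board, my_model_fn)
```

The point is that legality gets enforced outside the model, much like chesscom's interface enforces it for the 900-rated average player.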