That's the trick, right? What do they really mean by AGI? Depending on how narrowly you define it, it sounds like we've already achieved it. But if they keep saying they'll achieve it without ever defining it first, they can keep making that claim endlessly to generate hype.
One key criterion I've heard for AGI, and the one that would be the deciding factor for me, is a model that learns on the fly. That could be done one way or another, but when you consider that LLMs basically run like "ROM" files (the weights are read at inference but never written), it gets complicated.
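To make the "ROM" point concrete, here's a minimal sketch with a toy model (hypothetical, nothing like a real LLM): a forward pass only reads the weights, so nothing the model "experiences" at inference is retained, while "learning on the fly" would mean writing to the weights after each interaction.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))           # "pretrained" weights, fixed after training

def infer(x):
    """Forward pass only: reads W, never writes it."""
    return np.tanh(W @ x)

snapshot = W.copy()
for _ in range(100):                  # answer 100 "prompts"
    infer(rng.normal(size=4))
assert np.array_equal(W, snapshot)    # weights unchanged: read-only, like ROM

# "Learning on the fly" would mean updating W from each interaction,
# e.g. one crude gradient-style step per example (illustrative only):
x, target = rng.normal(size=4), rng.normal(size=4)
y = infer(x)
W -= 0.01 * np.outer((y - target) * (1 - y**2), x)
assert not np.array_equal(W, snapshot)  # now the model has changed
```

Doing that second step continuously, cheaply, and safely at the scale of a deployed LLM is the hard part.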
I think we need to re-imagine how LLMs are built, trained, and run. We also need to figure out how to drastically lower the cost of running them.
I think they would not be LLMs then.