> a literal reading suggests agi is here. any claim to the negative is either homocentrism or just vibes.
Or disagreeing with your definition. AGI would need to be human-level across the board, not just chatbots. That includes robotics. Manipulating the real world is even more important for "human-level" intelligence than generating convincing and useful content. Also, there are still plenty of developers who don't think LLMs are good enough to replace programmers yet. So not quite AGI. And the last 10% of solving a problem tends to be the hardest and take the longest.
That's moving the goalposts.
ChatGPT would easily have passed any test in 1995 that programmers / philosophers would have set for AGI at that time. There was definitely no assumption that a computer would need to equal humans in manual dexterity tests to be considered intelligent.
We've basically redefined AGI in a human-centric way so that we don't have to say ChatGPT is AGI.
Any test?? It's failing plenty of tests not of intelligence, but of... let's call it not-entirely-dumbness. Like counting letters in words. Frontier models (like Gemini 2.5 Pro) frequently produce answers where one sentence is directly contradicted by another sentence in the same response. Also check out the ARC suite of problems, easily solved by most humans but difficult for LLMs.
yeah but a lot of those failures stem from underlying architecture issues. this would be like a bee saying "ha ha a human is not intelligent" because a human would fail to perceive uv patterns on plant petals.
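to make the architecture point concrete: LLMs generally operate on subword tokens rather than individual characters, so a letter-counting task asks about a representation the model never directly sees. a rough sketch (the token split shown is illustrative, not any real tokenizer's actual output):

```python
# Hypothetical subword tokenization of "strawberry" (illustrative only;
# real tokenizers like BPE may split differently).
tokens = ["straw", "berry"]

# The model receives token IDs, not characters, so it has no direct
# view of the letters inside each token.
word = "".join(tokens)

# Counting letters requires character-level access, which is trivial
# for code but not part of what the model's input exposes.
r_count = word.count("r")
print(r_count)  # → 3
```

so a failure here reflects the input encoding, not necessarily a lack of reasoning ability.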
The letter-counting could possibly be excused on this ground. But not the other instances.
That's just not true. Star Trek's Data was understood in the 90s to be a good science fiction example of what an AGI (known as Strong AI back then) could do. HAL was an even older one. Then Skynet with its army of terminators. The thing they all had in common was the ability to manipulate the world as well as or better than humans.
The holodeck also existed as a well-known science fiction example, and people did not consider the holodeck computer to be a good example of AGI despite how good it was at generating 3D worlds for the Star Trek crew.
i think it would be hard to argue that chatgpt is not at least enterprise-computer (TNG) level intelligent.
I was around in 1995 and have always thought of AGI as matching human intelligence in all areas. ChatGPT doesn't do that.
Many human beings don’t match “human intelligence” in all areas. I think any definition of AGI has to be a test that 95% of humans pass (or you admit your definition is biased and isn’t based on an objective standard).
did you miss the "homocentrism" part of my comment?