> Although, there's a surprising number of people claiming it's already here now.

why is that surprising? nobody really agrees on what the threshold for AGI is, and if you break it down:

is it artificial? yes.

is it general? yes. you can ask it questions across almost any domain.

is it intelligent? yes. people say things like "my dog is intelligent" (rightly so). well, is chatgpt more intelligent than a dog? yeah. hell, it might give many undergrads a run for their money.

a literal reading suggests agi is here. any claim to the negative is either homocentrism or just vibes.

Sure, I've been pointing out that literal sense myself, but to be fair, that's not what people mean by AGI. They mean real understanding, which is clearly missing. You just have to dig a bit deeper to realize that. One example is contradictory sentences in the same breath. Just last week I was asking Gemini 2.5 how I can see my wifi password on my iPhone, and it said that it's not possible, and that to do it I have to [...proceeding to correctly explain how to get it]. It's pretty telling, and no amount of PhD-level problem solving can push this kind of stuff under the rug.

"Nothing dumb anywhere" is an unreasonably high bar for AGI. Even Isaac Newton spent 1/3 of his career trying to predict future events from reading the Bible. Not to mention all the insane ego-driven decisions like Hamilton's voluntary duel with Burr.

Sure, Gemini may spit out obviously self-contradictory answers 2% of the time. How does that compare to even the brightest humans? People slip up all the time.

There's dumb and there's incoherent. If a person were incoherent at this level even one time, they would be well advised to see a neurologist, unless they were in some other way incapacitated (e.g. drunk or drugged). Same if they couldn't count the r's in "strawberry", attempt after attempt, getting more and more lost in, again, incoherent mock-reasoning.

I disagree completely - consider asking a color-blind person to describe the color of flowers. The conversation would only be frustrating. This is analogous to LLMs seeing the world in tokens rather than characters: character counts are simply not part of their input spectrum, in the same way that a blind person doesn't get visual inputs.
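To make the token point concrete, here's a minimal sketch assuming the tiktoken library (pip install tiktoken); whatever model you're using will likely split the word differently, but the gap between its input and a character-level count is the same:

  import tiktoken

  # Decode each token ID back into its subword piece so we can see what the
  # model actually receives instead of individual letters.
  enc = tiktoken.get_encoding("cl100k_base")
  word = "strawberry"
  pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in enc.encode(word)]

  print("subword pieces the model receives:", pieces)
  print("characters it never sees one by one:", list(word))
  print("r count, trivial once you have characters:", word.count("r"))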

Consider also all the smart people who get obsessed with conspiracy theories and spew out endless “mock reasoning” about them. Again, if “nothing incoherent anywhere” is your benchmark for intelligence, humans ain’t it. I mean, what would a computer say about a human that forgot where he just put his keys because he was thinking about dinner - “what, you can’t even store the last 10 seconds of history and search it?” Undergrads’ hit rates on mental double-digit multiplication are probably <50%. In many, many ways we look completely idiotic. Surely intelligence is defined by what we can do.

Do you accept any positive definition for AGI, as in: if it can achieve X result (write a bestselling novel, solve the Riemann Hypothesis), you would consider it intelligent? I find negative definitions, as well as theoretical arguments about the techniques rather than the results (e.g. “LLMs cannot be AGI because they were trained to predict the next word”), to be basically useless for discussion compared to thresholds for positive results. The former will never be achieved (it is trivial to find cases of intelligent people being dumb) and the latter is totally subjective.

I partly agree about letter counting being an unfair test for the raw LLM. But I was thinking of reasoning models interminably rationalizing their incorrect first hunch even after splitting the string into individual characters and having all the data needed in a digestible format before them. Similar to, as you say, conspiracy theorists stuck in motivated reasoning loops. But - are these latter behaviors instances of human intelligence at work, or examples of dysfunctional cognition, just like people's incoherence in cases of stroke or inebriation?

The other example I mentioned is something I've encountered a few times in my interactions with Gemini 2.5 Pro, which has, literally in the same response, plainly claimed that this-or-that is both possible and not possible. It's not a subtle logical fallacy, and it's something even those conspiracy theorists wouldn't engage in. Meanwhile, I've started to encounter a brand-new failure mode: duplicating an explanation with minor rephrasings. I'm sure all of these issues will be ameliorated with time, but not actually fixed. It's basically fixes on top of fixes, patches on top of patches, and once in a while the whole Rube Goldberg nature of the fix will shine through. Just the way once in a while Tesla FSD will inexplicably decide to point the car towards the nearest tree.

Yes, humans have their own failure modes, but internal coherence is the effortless normal from which we sometimes deviate, whereas for machines it's something to be simulated by more and more complex mechanisms, a horizon to strive towards but never to reach. That internal coherence is something we share with all living beings, and it is the basis of what we call consciousness. It's not something we'll ever be able to fully formalize, but we will and should keep trying. Machine learning is a present-day materialization of this eternal quest. At least this is how I see things; the future might prove me wrong, of course.

They work differently, so the failure modes are different.

It's not slipping up, it's guessing the wrong answer.

I'd be prepared to argue that most humans aren't guessing most of the time.

> I'd be prepared to argue that most humans aren't guessing most of the time.

Research suggests otherwise[1]. Action seems largely based on intuition or other non-verbal processes in the brain with rationalization happening post-hoc.

I've figured for an age that this is because consciously reasoning through anything using language as a tool takes time, whereas survival requires me to react to the attacking tiger immediately.

[1] https://skepticink.com/tippling/2013/11/14/post-hoc-rational...

Intuition and guessing couldn't be further apart.

In fact, intuition is one of those things that a computer just can't do.

If you believe that physics describes the rules by which the universe operates, then there's literally nothing in the universe a large and fast enough computer can't emulate.

Cyborg C. elegans seem to behave just like the biological version: https://www.youtube.com/watch?v=I3zLpm_FbPg

Intuition is a guess based on experience. Sounds an awful lot to me like what LLMs are doing. They've even been shown to rationalize post-hoc just as humans do.

Humans have incorrectly claimed to be exceptional from all of creation since forever. I don't expect we'll stop any time soon, as there's no consequence to suffer.

> I'd be prepared to argue that most humans aren't guessing most of the time.

Almost everything we do is just an educated guess. The probability of it being correct is a function of our education (for whatever kind of education is applicable).

For example: I guess that when I get out of bed in the morning, my ankles will support my weight. They might not, but for most people, assuming they will is the best guess available.

It's easy to see this process in action among young children, as another example. They're not born knowing that they won't fall over when they run; first they start assuming they can run safely, then they discover skinned knees and hands.

My advice: stop using AI before your entire brain turns to mush; you're already not making much sense.

No need for personal attacks. Let's keep the discussion friendly.

> I'd be prepared to argue that most humans aren't guessing most of the time.

Honestly interested in your arguments here. While unprepared, I'd actually guess the opposite: that most people are guessing most of the time.

Experience and observation?

There are plenty of things I know that have nothing to do with guessing.

I understand the incentives to pretend these algorithms are even approaching humans in overall capability, but reducing human experience like this is embarrassing to watch.

Go do some hallucinogenics, meditate, explore the limits a tiny bit; then we can have an informed discussion.

> They mean real understanding, which is clearly missing

is it clear? i don't know. until you can produce a falsifiable measure of understanding -- it's just vibes. so, you clearly lack understanding of my point, which makes you not intelligent by your metric anyway ;-). i trust you're intelligent

Okay this is kinda random and maybe off topic but can someone please explain?

When I tell an LLM to count to 10 with a 2-second pause between each count, all it does is generate Python code with a sleep function. Why is that?

A 3 year old can understand that question and follow those instructions. An LLM doesn’t have an innate understanding of time it seems.

Can we really call it AGI if that’s the case?

That’s just one example.

It seems right that LLMs don't have an innate understanding of time, although you could analogize what you did with writing someone a letter and saying "please count to ten with a two-second pause between numbers". When you get a letter back in the mail, it presumably won't contain any visible pauses either.

That's because you used an LLM trained to produce text, but you asked it to produce actions, not just text. An agentic model would be able to do it, precisely by running that Python code. Someone could argue that a 3 year old does exactly that (produces a plan, then executes it). But these models have deeper issues of lack of comprehension and logical consistency, which (thankfully) prevents us from completely removing the need for a human in the middle who keeps an eye on things.
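For illustration, here's a hypothetical but representative version of the snippet such a model tends to hand back; only the runtime that actually executes it can turn the pause into a real pause:

  import time

  # The model emits this whole script in one go; the two-second pause exists
  # only as an instruction for the interpreter that runs the code.
  for i in range(1, 11):
      print(i)
      time.sleep(2)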

just because it doesn't do what you tell it to doesn't mean it's not intelligent. i would say doing something that gets you where you want, when it knows it can't do exactly what you asked for (because architecturally it's impossible), could be a sign of pretty intelligent sideways thinking!? dare i say it displays a level of self-awareness that i would not have expected.

While you can say that LLMs have each of A, G and I, you may argue that AGI is A·G·I and what we see is A+G+I. It is each of those things in isolation, but there is more to intelligence. We try to describe the missing part as agency and self-improvement. While we can put the bar arbitrarily high for homocentric reasons, we can also try to break down what layers of intelligence there are between Singularity Overlord (peak AGI) and Superintelligent Labrador On Acid (what we have now). Kind of like what complexity theorists do between P and NP.

> a literal reading suggests agi is here. any claim to the negative is either homocentrism or just vibes.

Or disagreeing with your definition. AGI would need to be human-level across the board, not just in chat bots. That includes robotics. Manipulating the real world is even more important for "human-level" intelligence than generating convincing and useful content. Also, there are still plenty of developers who don't think the LLMs are good enough to replace programmers yet. So not quite AGI. And the last 10% of solving a problem tends to be the hardest and takes the longest time.

That's moving the goalposts.

ChatGPT would easily have passed any test in 1995 that programmers / philosophers would have set for AGI at that time. There was definitely no assumption that a computer would need to equal humans in manual dexterity tests to be considered intelligent.

We've basically redefined AGI in a human centric way so that we don't have to say ChatGPT is AGI.

Any test?? It's failing plenty of tests not of intelligence, but of... let's call it not-entirely-dumbness. Like counting letters in words. Frontier models (like Gemini 2.5 Pro) frequently produce answers where one sentence is directly contradicted by another sentence in the same response. Also check out the ARC suite of problems that are easily solved by most humans but difficult for LLMs.

yeah but a lot of those failures happen because of underlying architecture issues. this would be like a bee saying "ha ha a human is not intelligent" because a human would fail to perceive uv patterns on flower petals.

The letter-counting could possibly be excused on this ground. But not the other instances.

That's just not true. Star Trek's Data was understood in the 90s to be a good science fiction example of what an AGI (known as Strong AI back then) could do. HAL was an even older one. Then Skynet with its army of terminators. The thing they all had in common was the ability to manipulate the world as well as or better than humans.

The holodeck also existed as a well known science fiction example, and people did not consider the holodeck computer to be a good example of AGI despite how good it was at generating 3D worlds for the Star Trek crew.

i think it would be hard to argue that chatgpt is not at least enterprise-computer (TNG) level intelligent.

I was around in 1995 and have always thought of AGI as matching human intelligence in all areas. ChatGPT doesn't do that.

Many human beings don’t match “human intelligence” in all areas. I think any definition of AGI has to be a test that 95% of humans pass (or you admit your definition is biased and isn’t based on an objective standard).

did you miss the "homocentrism" part of my comment?

  1. Can it do stuff? Yes

  2. Can it do stuff I need? Maybe

  3. Does it always do the stuff I need? No
Pick your pair of question and answer.

humans are intelligent and most definitely are nowhere close to doing #3

some intelligent humans fail at #2.

Which is why we have checklists and processes that get us to #3. And we automate some of them to further reduce the chance of errors. The nice thing about automation is that you can prove it works once and not have to care that much after that (it's a deterministic process).

It's definitely not agi in my book because I'm not yet completely economically redundant.

By that standard, humans aren't generally intelligent because you're still not economically redundant?

I’d say it is not intelligent. At all. Not capable of any reasoning, understanding or problem solving. A dog is vastly more intelligent than the most capable current AI.

The output sometimes looks intelligent, but it can just as well be complete nonsense.

I don’t believe LLMs have much more potential for improvement either. Something else entirely is needed.