There's dumb and there's incoherent. If a person were incoherent at this level even once, they would be well advised to see a neurologist, unless they were in some other way incapacitated (e.g. drunk or drugged). The same goes if they couldn't count the r's in "strawberry", attempt after attempt, getting more and more lost in the same incoherent mock-reasoning.

I disagree completely - consider asking a color-blind person to describe the color of flowers. The conversation would only be frustrating. This is analogous to LLMs seeing the world in tokens rather than characters: character counts are simply not part of their input spectrum, in the same way that a blind person doesn't get visual input.
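To make the point concrete, here's a minimal sketch using the tiktoken package (assuming it's installed); the exact split depends on the encoding, but the model only ever sees opaque token IDs, never the ten individual letters:

```python
import tiktoken

# Load one of OpenAI's public tokenizers (cl100k_base is an example choice).
enc = tiktoken.get_encoding("cl100k_base")

ids = enc.encode("strawberry")
pieces = [enc.decode([i]) for i in ids]

print(ids)     # a short list of integer token IDs, not 10 characters
print(pieces)  # the sub-word chunks those IDs stand for
```

Whatever the chunks turn out to be, "count the r's" is a question about a representation the model never receives directly.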

Consider also all the smart people who get obsessed with conspiracy theories and spew out endless “mock reasoning” about them. Again, if “nothing incoherent anywhere” is your benchmark for intelligence, humans ain’t it. I mean, what would a computer say about a human who forgot where he just put his keys because he was thinking about dinner - “what, you can’t even store the last 10 seconds of history and search it?” Undergrads’ hit rates on mental double-digit multiplication are probably <50%. In many, many ways we look completely idiotic. Surely intelligence is defined by what we can do, not by where we fail.

Do you accept any positive definition for AGI, as in: if it can achieve X result (write a bestselling novel, solve the Riemann Hypothesis), you would consider it intelligent? I find negative definitions, as well as theoretical arguments about the techniques rather than the results (e.g. “LLMs cannot be AGI because they were trained to predict the next word”), to be basically useless for discussion compared to thresholds for positive results. The former will never be satisfied (it is trivial to find cases of intelligent people being dumb) and the latter is totally subjective.

I partly agree about letter counting being an unfair test for the raw LLM. But I was thinking of reasoning models interminably rationalizing their incorrect first hunch even after splitting the string into individual characters and having all the data they need in a digestible format in front of them. Similar to, as you say, conspiracy theorists stuck in motivated reasoning loops. But are these latter behaviors instances of human intelligence at work, or examples of dysfunctional cognition, just like people's incoherence in cases of stroke or inebriation?

The other example I mentioned is something I've encountered a few times in my interactions with Gemini 2.5 Pro, which would, literally within the same response, plainly claim that this-or-that is both possible and not possible. That's not a subtle logical fallacy, and it's something even those conspiracy theorists wouldn't engage in. Meanwhile, I've started to encounter a brand-new failure mode: duplicating an explanation with minor rephrasings. I'm sure all of these issues will be ameliorated with time, but not actually fixed. It's basically fixes on top of fixes, patches on top of patches, and once in a while the whole Rube Goldberg nature of the fix will shine through. Just the way Tesla FSD will once in a while inexplicably decide to point the car towards the nearest tree.

Yes, humans have their own failure modes, but internal coherence is the effortless normal from which we sometimes deviate, whereas for machines it's something to be simulated by ever more complex mechanisms, a horizon to strive towards but never to reach. That internal coherence is something we share with all living beings and is the basis of what we call consciousness. It's not something we'll ever be able to fully formalize, but we will, and should, keep trying to do so. Machine learning is a present-day materialization of this eternal quest. At least this is how I see things; the future might prove me wrong, of course.

They work differently, so the failure modes are different.