"Nothing dumb anywhere" is an unreasonably high bar for AGI. Even Isaac Newton spent 1/3 of his career trying to predict future events from reading the Bible. Not to mention all the insane ego-driven decisions like Hamilton's voluntary duel with Burr.
Sure, Gemini may spit out obviously self-contradictory answers 2% of the time. How does that compare to even the brightest humans? People slip up all the time.
There's dumb and there's incoherent. If a person were incoherent at this level even once, they would be well advised to see a neurologist, unless they were in some other way incapacitated (e.g. drunk or drugged). Same if they couldn't count the r's in "strawberry", attempt after attempt, getting more and more lost in the same incoherent mock-reasoning.
I disagree completely - consider asking a colorblind person to describe the color of flowers. The conversation would only be frustrating. This is analogous to LLMs seeing the world in tokens rather than characters: character counts are simply not part of their input spectrum, in the same way that a blind person doesn't get visual inputs.
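To make the tokens-vs-characters point concrete, here is a minimal sketch. It assumes the tiktoken library and its cl100k_base encoding (neither is mentioned above; they're just a convenient illustration): the model is handed a few opaque sub-word IDs, not a stream of letters it could count over.

    # Minimal sketch, assuming the `tiktoken` library is installed.
    # Shows that "strawberry" reaches a model as a few opaque token IDs,
    # not as a sequence of characters it could count over.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")   # tokenizer used by several OpenAI models
    token_ids = enc.encode("strawberry")
    pieces = [enc.decode([t]) for t in token_ids]

    print(token_ids)   # a short list of integer IDs
    print(pieces)      # sub-word pieces; the exact split depends on the tokenizer
    print("strawberry".count("r"))   # 3 - trivial once you have character-level access

Any tokenizer would illustrate the same thing; the detail that matters is that letter counting happens below the granularity of the model's input.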
Consider also all the smart people who get obsessed with conspiracy theories and spew out endless “mock reasoning” about them. Again, if “nothing incoherent anywhere” is your benchmark for intelligence, humans ain’t it. I mean, what would a computer say about a human who forgot where he just put his keys because he was thinking about dinner - “what, you can’t even store the last 10 seconds of history and search it?” Undergrads’ hit rates on mental double-digit multiplication are probably <50%. In many, many ways we look completely idiotic. Surely intelligence is defined by what we can do.
Do you accept any positive definition for AGI, as in: if they can achieve X result (write a bestselling novel, solve the Riemann Hypothesis), you would consider it intelligent? I find negative definitions, as well as theoretical arguments about the techniques rather than the results (e.g. “LLMs cannot be AGI because they were trained to predict the next word”), to be basically useless for discussion compared to thresholds for positive results. The former will never be satisfied (it is trivial to find cases of intelligent people being dumb) and the latter are totally subjective.
I partly agree about letter counting being an unfair test for the raw LLM. But I was thinking of reasoning models interminably rationalizing their incorrect first hunch even after splitting the string into individual characters and having all the data needed in a digestible format before them. Similar to, as you say, conspiracy theorists stuck in motivated reasoning loops. But - are these latter behaviors instances of human intelligence at work, or examples of dysfunctional cognition, just like people's incoherence in cases of stroke or inebriation?
The other example I mentioned is something I've encountered a few times in my interactions with Gemini 2.5 Pro, which would, literally in the same response, plainly claim that this-or-that is both possible and not possible. It's not a subtle logical fallacy, and it's something even those conspiracy theorists wouldn't engage in. Meanwhile, I've started to encounter a brand-new failure mode: duplicating an explanation with minor rephrasings. I'm sure all of these issues will be ameliorated with time, but not actually fixed. It's basically fixes on top of fixes, patches on top of patches, but once in a while the whole Rube Goldberg nature of the fix will shine through. Just the way once in a while Tesla FSD will inexplicably decide to point the car towards the nearest tree.
Yes, humans have their own failure modes, but internal coherence is the effortless normal from which we sometimes deviate, whereas for machines, it's something to be simulated by more and more complex mechanisms, a horizon to strive towards but never to reach. That internal coherence is something we share with all living beings, and it is the basis of what we call consciousness. We'll probably never be able to formalize it, but we will, and should, keep trying to do so. Machine learning is a present-day materialization of this eternal quest. At least this is how I see things; the future might prove me wrong, of course.
They work differently, so the failure modes are different.
It's not slipping up, it's guessing the wrong answer.
I'd be prepared to argue that most humans aren't guessing most of the time.
> I'd be prepared to argue that most humans aren't guessing most of the time.
Research suggests otherwise[1]. Action seems largely based on intuition or other non-verbal processes in the brain, with rationalization happening post hoc.
I've figured for ages that this is because consciously reasoning through anything using language as a tool takes time, whereas survival requires me to react to the attacking tiger immediately.
https://skepticink.com/tippling/2013/11/14/post-hoc-rational...
Intuition and guessing couldn't be further apart.
In fact, intuition is one of those things that a computer just can't do.
If you believe that physics describes the rules by which the universe operates, then there's literally nothing in the universe that a large and fast enough computer can't emulate.
Cyborg c.elegans seem to behave just like the biological version: https://www.youtube.com/watch?v=I3zLpm_FbPg
Intuition is a guess based on experience. Sounds an awful lot to me like what LLMs are doing. They've even been shown to rationalize post hoc just as humans do.
Humans have incorrectly claimed to be set apart from the rest of creation since forever. I don't expect we'll stop any time soon, as there's no consequence to suffer.
> I'd be prepared to argue that most humans aren't guessing most of the time.
Almost everything we do is just an educated guess. The probability of it being correct is a function of our education (for whatever kind of education is applicable).
For example: I guess that when I get out of bed in the morning, my ankles will support my weight. They might not, but for most people that best guess turns out to be right.
It's easy to see this process in action among young children, as another example. They're not born knowing that they won't fall over when they run; they start out assuming they can run safely, and then they discover skinned knees and hands.
My advice: stop using AI before your entire brain turns to mush; you're already not making much sense.
No need for personal attacks. Let's keep the discussion friendly.
> I'd be prepared to argue that most humans aren't guessing most of the time.
Honestly interested in your arguments here. While unprepared, I'd actually guess the opposite: that most people are guessing most of the time.
Experience and observation?
There are plenty of things I know that have nothing to do with guessing.
I understand the incentives to pretend these algorithms are even approaching humans in overall capability, but reducing human experience like this is embarrassing to watch.
Go do some hallucinogenics, meditate, explore the limits a tiny bit; then we can have an informed discussion.