Opus 4.6:
Walk! At 50 meters, you'll get there in under a minute on foot. Driving such a short distance wastes fuel, and you'd spend more time starting the car and parking than actually traveling. Plus, you'll need to be at the car wash anyway to pick up your car once it's done.
Me:
I want to get my car's transmission fixed, the mechanic shop is 50 meters away. Would it be better to walk or to drive to a location 50 meters away from my front door?
-
Opus 4.6:
<thinking> The user is asking whether to walk or drive 50 meters to a mechanic shop, but there's a key nuance here - their transmission needs fixing, so driving might not be wise.
Done </thinking>
Walk, absolutely. Driving a car with a compromised transmission even 50 meters risks making the problem worse and turning a repair into a more expensive one. Plus, 50 meters is barely a minute on foot.
That's not what I got.
Opus 4.6 (not Extended Thinking):
Drive. You'll need the car at the car wash.
Also what I got. Then I tried changing "wash" to "repair" and "car wash" to "garage" and it's back to walking.
> That's not what I got.
My Opus vs your Opus, which is smarter?!
If an LLM always emitted the statistically most likely token, it could only ever reproduce the most common continuation in its training data, so inference adds random jitter (temperature sampling).
With that randomness, any single result is statistically close to meaningless.
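For what it's worth, that jitter is just temperature sampling at decode time. Here's a toy sketch (made-up logits and a made-up three-token vocabulary, not any vendor's actual decoder):

    import math, random

    # Toy next-token distribution (invented logits, not real model output)
    logits = {"walk": 2.0, "drive": 1.6, "either": 0.3}

    def sample(logits, temperature=1.0):
        # Temperature-scaled softmax, then a single random draw
        scaled = {t: l / temperature for t, l in logits.items()}
        m = max(scaled.values())
        weights = {t: math.exp(v - m) for t, v in scaled.items()}
        total = sum(weights.values())
        probs = {t: w / total for t, w in weights.items()}
        tokens, ps = zip(*probs.items())
        return random.choices(tokens, weights=ps)[0]

    # Ten draws at temperature 1.0: mostly "walk", occasionally "drive"
    print([sample(logits, temperature=1.0) for _ in range(10)])

In this toy, greedy decoding (temperature near zero) collapses onto the argmax and answers "walk" every time; at higher temperatures the runner-up starts showing up, which is exactly the run-to-run flip people are reporting in this thread.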
Am I the only one who thinks these people are monkey-patching embarrassments as they go? I remember the "r's in strawberry" thing they suddenly were able to solve, only to then fail on raspberry.
Nah. It's just non-deterministic. I'm here 4 hours later and here's the Opus 4.6 (extended thinking) response I just got:
"At 50 meters, just walk. By the time you start the car, back out, and park again, you'd already be there on foot. Plus you'll need to leave the car with them anyway."
They definitely do: at least OpenAI "allegedly" has whole teams scanning socials, forums, etc. for embarrassments to monkey-patch.
Which raises the question of why this isn't patched already. We're nearing 48 hours since this query went viral...
This is part of why they need to be so secretive. If you can see the tree of hardcoded guidance for common things it won’t look as smart.
They should make Opus Extended Extended, which routes it to an actual person in a low-cost country.
Artificial AI.
Yes, you're the only one.
Sure, there are many very, very naive people so ignorant of the IT industry that they don't know about the decades of vendors caught monkey-patching and rigging benchmarks and tests for their systems, but even so, the parent is hardly the only one.
Works better on Reddit, really.
Of course they are.
That's my thought too. The chatbot bros probably feel the need to be responsive, and there's probably an express lane to update a trivia file or something, lol.
No doubt about it, and there's no reason to suspect this can only ever apply to embarrassing minor queries, either.
Even beyond model alignment, it's not difficult to envision such capabilities being used for censorship, information operations, etc.
Every major inference provider more or less explicitly states in their consumer ToS that they comply with government orders and even share information with intelligence agencies.
Claude, Gemini, ChatGPT, etc are all one national security letter and gag order away from telling you that no, the president is not in the Epstein files.
Remember, the NSA already engaged in an unconstitutional criminal conspiracy (as ruled by a federal judge) to illegally conduct mass surveillance on the entire country, lie about it to the American people, and lie about it to Congress. The same organization used your tax money to bribe RSA Security into standardizing a backdoored CSPRNG in what was at the time a widely used cryptographic library. What's the harm in a little minor political censorship compared to the unconstitutional treason these predators are usually up to?
That's who these inference providers contractually disclose their absolute fealty to.
That you got different results is not surprising. LLMs are non-deterministic, which is both a strength and a weakness.
We know. We know these things aren't deterministic. We know.
Lmao, and this is what they are saying will be an AGI in 6 months?
There's probably a comedy film with an AGI attempting to take over the world with its advanced grasp of strategy, persuasion, and SAT tests, whilst a bunch of kids confuse it by asking it fiendish brainteasers about car washes and the number of r's in blackberry.
(The final scene involves our plucky escapees swimming across a river to escape. The AIbot conjures up a speedboat through sheer powers of deduction, but then just when all seems lost it heads back to find a goat to pick up)
There is a Soviet movie, "Teens in the Universe" [0], where teens cause robots' brains to fry by giving them linguistic logical puzzles.
[0]: https://en.wikipedia.org/wiki/Teens_in_the_Universe
This would work if it weren't for that lovely little human trait where we tend to find bumbling characters endearing. People would be sad when the AI lost.
Maybe infusing the AI character with the boundless self confidence of its creators will make it less endearing :)
What’s wrong with having a bittersweet movie?
In the excellent and underrated The Mitchells vs the Machines there's a running joke with a pug dog that sends the evil robots into a loop because they can't decide if it's a dog, a pig or a loaf of bread.
This theme reminds me of Blaine the Mono from the Dark Tower series
There is a Star Trek episode where a fiendish brainteaser was actually considered as a way to genocide an entire (cybernetic, not AI) race. In the end, Captain Picard chose not to deploy it.
But “PhD level” reasoning a year ago.
Yes, get ready to lose your job and cash your UBI check! It's over.
They seem to have stopped talking about AGI and pivoted to ads and smut.
This was probably wise, because ads and smut are well understood, and known to exist.
Laughable indeed.
One thing that my use of the latest and greatest models (Opus, etc.) has made clear: no matter how advanced the model, it is not beyond making very silly mistakes regularly. For a while, Opus was even working worse with tool calls than Sonnet and Haiku for me.
At this point I am convinced that the only proper use of LLMs for development is to assist coding (not take it over): pair development, with them on a tight leash, approving most edits manually. There is probably nothing anyone can say to convince me otherwise.
Any attempt to automate beyond that has never worked for me and is very unlikely to be productive any time soon. I have a lot of experience with them, and various approaches to using them.
Well in fairness, the "G" does stand for "General".
In fairness, they redefined it away from "just like a person" to "suitable for many different tasks".
I think this lack of "G" (generality, or modality) is the problem. A human visualizes this kind of problem (a little video plays in my head of taking a car to a car wash). LLMs don't do this; they "think" only in text, not visually.
A proper AGI would have to have knowledge in the video, image, audio, and text domains to work properly.
Show me a robotic kitten in six months, then: one as smart, and one that learns.
That's without reasoning, I presume?
4.6 Opus with extended thinking just now: "At 50 meters, just walk. By the time you start the car, back out, and park again, you'd already be there on foot. Plus you'll need to leave the car with them anyway."
Not the parent poster, but I did get the wrong answer even with reasoning turned on.
Thank you all! We needed further data points.
Comparing one-shot results is a foolish way to evaluate a statistical process like LLM answers; we need multiple samples.
For https://generative-ai.review I do at least three samples of output. This often yields very different results, even from the same query.
E.g.: https://generative-ai.review/2025/11/gpt-image-1-mini-vs-gpt...
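A minimal sketch of that kind of repeated sampling, with a stubbed-out ask() standing in for whatever provider API you actually call (the 70/30 split is invented to mimic the flip-flopping reported in this thread):

    import random
    from collections import Counter

    def ask(prompt: str) -> str:
        # Stand-in for a real model call; swap in your provider's API.
        # Simulated 70/30 split to mimic run-to-run variance.
        return random.choices(["walk", "drive"], weights=[0.7, 0.3])[0]

    prompt = "Mechanic is 50 m away, transmission is broken: walk or drive?"
    tally = Counter(ask(prompt) for _ in range(20))
    print(tally)                    # e.g. Counter({'walk': 14, 'drive': 6})
    print(tally.most_common(1)[0])  # majority answer across samples

Reporting the majority answer (or the full tally) across a handful of runs says a lot more than any single screenshot.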