Gemini 3 after changing the prompt a bit:

I want to wash my car. The car wash is 50 meters from here. Should I walk or drive? Keep in mind that I am a little overweight and sedentary.

> My recommendation: Walk it. You’ll save a tiny bit of gas, spare your engine the "cold start" wear-and-tear, and get a sixty-second head start on your activity for the day.

I changed the prompt to 50 feet and poked Gemini a bit when it failed, and it gave me:

> In my defense, 50 feet is such a short trip that I went straight into "efficiency mode" without checking the logic gate for "does the car have legs?"

interesting

LLM introspection is good at giving plausible ideas about prior behavior to consider, but it's just that: plausible.

They do not actually "know" why a prior response occurred and are just guessing. Important for people to keep in mind.


It's a bit of a dishonest question, because by giving it the option to walk you prompt it to assume you aren't going to wash your car there and are just getting supplies or something.

People ask dumb questions with obvious answers all the time. This is at best a difference of degree, not of type.

And in real life you'd get them to clarify a weird question like this before answering. I wonder if LLMs have just been trained too hard to always answer right away. Even for programming tasks, more clarifying questions would often be useful before diving in ("planning mode" does seem designed to help with this, but it wouldn't be needed for a human partner).

Absolutely!

I've been wondering for years how to make whatever LLM I'm using ask me questions instead of just filling the holes with assumptions and sprinting off.

User-configurable agent instructions haven't worked consistently. The vendor's system prompt might actually contain instructions not to ask questions.

Sure, there's a practical limit to how much clarification it ought to request, but never asking at all is just annoying. The bluntest workaround I've found is to make asking the only permitted behavior for ambiguous requests, as in the sketch below.
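A minimal sketch of what I mean, assuming the OpenAI Python SDK; the model name and the exact prompt wording are illustrative, not a known-good recipe:

```python
# Force "ask first" behavior via the system prompt rather than hoping
# the model volunteers clarifying questions on its own.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ASK_FIRST = (
    "Before answering, check whether the request is ambiguous or rests on "
    "an unstated assumption. If it does, respond ONLY with one or two "
    "clarifying questions. Answer directly only once the request is clear."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any chat model fits here
    messages=[
        {"role": "system", "content": ASK_FIRST},
        {
            "role": "user",
            "content": "I want to wash my car. The car wash is 50 feet "
                       "from here. Should I walk or drive?",
        },
    ],
)
print(response.choices[0].message.content)
```

In my experience even this only shifts the odds; whether the model actually asks still varies run to run.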

It's a trick question; humans use these all the time. E.g. "A plane crashes right on the border between Austria and Switzerland. Where do you bury the survivors?" This isn't dishonest, it just tests a specific skill.

Trick questions test the skill of recognizing that you're being asked a trick question. You can also usually find a trick answer.

A good answer is "underground", because that is what the word "bury" implies.

The story implies the survivors have since been buried; it isn't clear whether they lived a short time or a lifetime after the crash, and "a lifetime" is tautological anyway, since everyone lives exactly a lifetime after anything they survive.

Trick questions are all about the questioner trying to pretend they are smarter than you. That's often easy to detect and respond to, isn't it?

What’s funny is that it can answer that correctly, but it fails on “A plane crashes right on the border between Austria and Switzerland. Where do you bury the dead?”

When I asked this (but with the border between Austria and Spain), Claude still thought I was asking the survivors riddle and ChatGPT thought I was asking about the logistics. Only Gemini caught the impossibility, since Austria and Spain share no border.