This is really just a meme. People don't know how to use these tools.
Here is the response from GPT-5.2, using my default custom instructions, in the Mac desktop app.
OBJECTIVE: Decide whether to drive or walk to a car wash ~50 meters from home, given typical constraints (car must be present for wash).
APPROACH: Use common car-wash workflows + short-distance driving considerations (warm engine, time, parking/queue). No web sources needed (stable, local decision).
OUTPUT:
• Default choice: Drive. A car wash generally requires the car on-site, and 50 meters of driving is trivial.
• When walking makes sense:
  • You’re going to drop the car off (hand-wash/detail) and pick it up later.
  • You want to avoid moving the car now (ice, tight parking, you’ve been drinking, etc.) and someone else will handle the car.
• Micro-guidance:
  • If it’s an automatic wash you stay in, drive.
  • If it’s detail/hand wash with a wait, drive there, then walk home (or just wait).
CONFIDENCE: 92/100
Assumptions: It’s a normal car wash that requires your car to be there; conditions are safe to drive 50 meters.
Your objective contains an explicit instruction that the car has to be present for the wash. Quite a difference from the original phrasing, where the model has to figure that out on its own.
That's the output of his LLM, which decomposed the question and built the answer following the OP's prompt, obviously. I think you didn't get it.
> Your objective has explicit instruction that car has to be present for a wash.
Which is exactly how you're supposed to prompt an LLM. Is it really surprising that a vague prompt gives poor results?
In this case, with such a simple task, why even bother to prompt it?
The whole point of this question is to show that, pretty often, implicit assumptions are not discovered by the LLM.
Interesting, what were the instructions, if you don't mind sharing?
"You're holding it wrong."