It proves LLMs always need context. They have no idea where your car is. Is it already at the car wash, and you're just walking back from the gas station, where you briefly went to pay for the wash? Or is the car still at home?
It proves LLMs are not brains; they don't think. This question will be used to train them, and "magically" they'll get it right next time, creating an illusion of "thinking".
It proves that this is not intelligence. This is autocomplete on steroids.
Humans make very similar errors, possibly even the exact same error, from time to time.
We make the model better by training it, and now that this issue has come up we can update the training ;)
> They have no idea where your car is.
They could either ask before answering, or state their assumptions up front.
For me this is just another reminder of how careful one should be when deploying agents. They behave very unintuitively.
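To make "ask or state your assumption" concrete, here is a minimal sketch in Python. `call_llm` is a hypothetical stand-in for whatever chat API you actually use; the only point is that the instruction to surface missing context is attached to every request.

```python
# Sketch: force the agent to surface missing context instead of silently
# assuming it. `call_llm` is a hypothetical placeholder, not a real API.

SYSTEM_PROMPT = (
    "Before answering, check whether the question depends on facts you do not "
    "have (e.g. where the user's car currently is). If it does, either ask one "
    "short clarifying question, or state the assumption you are making in the "
    "first sentence of your answer."
)


def call_llm(system: str, user: str) -> str:
    """Hypothetical stand-in for your model provider's chat-completion call."""
    raise NotImplementedError("plug in your model provider here")


def answer(question: str) -> str:
    # Every request carries the clarify-or-state-assumption instruction.
    return call_llm(system=SYSTEM_PROMPT, user=question)
```

This doesn't make the model "know" where the car is, but it does turn the hidden assumption into something the user can see and correct.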