> Your objective has explicit instruction that car has to be present for a wash.
Which is exactly how you're supposed to prompt an LLM. Is the fact that giving a vague prompt produces poor results really surprising?
In this case, with such a simple task, why even bother to prompt it?
The whole point of this question is to show that, quite often, the LLM does not discover implicit assumptions on its own.
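To make the contrast concrete, here's a minimal sketch assuming the OpenAI Python client; the model name and both prompts are illustrative, not taken from the original question. It simply sends the same task twice: once vaguely, once with the implicit assumption spelled out.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    # Send a single user message and return the model's reply text.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Vague prompt: leaves "the car must be present" implicit.
print(ask("Write a step-by-step plan for washing a car."))

# Explicit prompt: states the assumption the vague version relies on.
print(ask(
    "Write a step-by-step plan for washing a car. "
    "The car must be physically present before the wash can begin."
))
```

Comparing the two outputs is the whole experiment: the explicit version reliably includes a presence check, while the vague one may or may not surface it.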