The problem is, I ask it a basic question, it confidently feeds me bullshit, I correct it twice, and only then does it do an actual search.
I use GPT-5 Thinking and say "use search" whenever I think there's any chance it will decide not to.
This is what I have in my custom instructions:
Do I really have to repeat this every single time I suspect the answer will be wrong?