ChatGPT is exceptionally good at using search now, but that's new this year, starting with o3 and then GPT-5. I didn't trust GPT-4o and earlier models to use the search tool reliably enough to be useful.

You can see in the interface whether it used search, which helps you judge how likely the answer is to be right.

The problem is, I ask it a basic question, it confidently feeds me bullshit, I correct it twice, and only then does it run an actual search.

I use GPT-5 thinking and say "use search" if I think there's any chance it will decide not to.

This is what I have in my custom instructions:

    Stay brief. Do not use emoji.
    Check primary sources, avoid speculation.
    Do not suggest next steps.
Do I have to repeat this every time I suspect the answer will be incorrect?
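Not if you go through the API instead of the web UI: there you can pin the instructions once and attach them to every request. Here's a minimal sketch using the OpenAI Python SDK's Responses API; note that the model name, the `web_search` built-in tool type, and the `ask` helper are my assumptions for illustration and may not match exactly what the API currently exposes:

    # Minimal sketch: pin custom instructions and enable the search tool on
    # every request via the OpenAI Python SDK's Responses API.
    # Assumptions: the "gpt-5" model name and the "web_search" built-in tool
    # type are illustrative and may differ from what the API actually offers.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    INSTRUCTIONS = (
        "Stay brief. Do not use emoji. "
        "Check primary sources, avoid speculation. "
        "Do not suggest next steps."
    )

    def ask(question: str) -> str:
        """Send one question with the pinned instructions and search enabled."""
        response = client.responses.create(
            model="gpt-5",                   # assumed model name
            instructions=INSTRUCTIONS,       # applied to every call, no retyping
            tools=[{"type": "web_search"}],  # assumed built-in tool type
            input=question,
        )
        return response.output_text

    if __name__ == "__main__":
        print(ask("What changed in the latest release?"))

In the web UI itself, the closest equivalent is what's described above: custom instructions plus an explicit "use search" nudge whenever it matters.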