Because in this case it took the opposite stance to my prompt and explained where I had misunderstood. I was reasonably confident it was right because its explanation was logically consistent in a way that my prior misunderstanding wasn't, so in a sense I could independently confirm it myself.

But this is also, again, the danger of having an advanced bullshit generator: of course it sounds reasonable and logical, because that's what it is designed to output. It is not designed to output text that is actually reasonable and logical.

I do appreciate that it's not a hard rule: things can be cross-referenced and verified, etc. But doesn't that also kind of defeat (one of) the point(s) of using an LLM, when you still have to google for information or think deeply about the subject yourself?

> But this is also, again, the danger of having an advanced bullshit generator: of course it sounds reasonable and logical, because that's what it is designed to output. It is not designed to output text that is actually reasonable and logical.

It's always easier to produce bullshit than to verify it. I just had it produce a super elegant mathematical proof, only for it to claim that n + 1 = 0 holds only for positive n. Right. That was o3, by the way, after thinking for 10 minutes.
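To the verification point: a claim like that is exactly the kind of thing you can check mechanically rather than take on trust. A minimal sketch in Lean 4 (assuming the built-in `omega` tactic for linear integer arithmetic), just to illustrate the idea:

```lean
-- Minimal sketch: the claim "n + 1 = 0 for positive n" is refuted
-- mechanically over the integers using Lean 4's `omega` tactic.
example (n : Int) (h : 0 < n) : n + 1 ≠ 0 := by
  omega
```

Of course, this only works for claims simple enough to formalize; for anything subtler you're back to checking the argument yourself, which is the whole problem.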

If you want to use LLMs, you have to use them in a targeted manner. That means keeping some of the mental load in your own head, outside of anything the LLM can see.

Even when I'm learning on my own, I'll frequently spin up a new context and/or work things out in my own notes without revealing them to the LLM, because I've found too many times that if you push the LLM too hard, it will make up bullshit on the spot.

An advanced, really good Google search. That's what it is right now.