I've had cases, when using LLMs to learn, where the LLM's answer still feels wrong or doesn't match my intuition, and I'll ask it 'but isn't it the case that...' or some other clarifying question in a non-assertive way, and it will insist that I'm wrong and explain the reason. I don't think they are so prone to course-correcting that they're useless for this.

But what if you were right and the LLM was wrong?

The argument isn't so much that they keep flip-flopping on stances, but that they hold whatever stance you prompt them to hold.

This is obviously a problem when you don't know the material or the stances: you're left flying blind, and your co-pilot simply does whatever you ask of it, no matter how wrong it may be (or how ignorant you are).

Because in this case it held the opposite stance to my prompt and explained where I had misunderstood. I was reasonably confident it was right because its explanation was logically consistent in a way that my prior misunderstanding wasn't, so to an extent I could independently confirm it was correct myself.

But this is also, again, the danger of having an advanced bullshit generator: of course it sounds reasonable and logical, because that's what it is designed to output. It's not designed to output text that actually is reasonable and logical.

I do appreciate that it's not a hard rule: things can be cross-referenced and verified, etc. But doesn't that also kind of eliminate (one of) the point(s) of using an LLM, when you still have to Google for information or think deeply about the subject anyway?

> But this is also, again, the danger of having an advanced bullshit generator: of course it sounds reasonable and logical, because that's what it is designed to output. It's not designed to output text that actually is reasonable and logical.

It's always easier to produce bullshit than to verify it. I just had it produce a super elegant mathematical proof, only for it to claim that n + 1 = 0 for positive n. Right. That was o3, and it thought for 10 minutes, by the way.
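To spell out why that claim can't survive even a one-line sanity check (my own sketch, not taken from the model's proof):

\[
n > 0 \;\Rightarrow\; n + 1 > 1 > 0 \;\Rightarrow\; n + 1 \neq 0 .
\]

No ten minutes of thinking required.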

If you want to use LLMs, you have to use them in a targeted manner. This means keeping mental models of your own that don't live in the LLM's context.

Even when I'm learning on my own, I'll frequently spin up a new context and/or work things out in my own notes without revealing them to the LLM, because I've found too many times that if you push the LLM too hard, it will make up bullshit on the spot.

An advanced, really good Google search. That's what it is right now.