> By noticing that something is not adding up at a certain point.

Ah, but information is presented by AI in a way that SOUNDS like it makes absolute sense if one doesn't already know it doesn't!

And if you have to question the AI a hundred times to try and "notice that something is not adding up" (if that even happens), then that's no bueno.

> In either scenario, the most correct move is to try clarifying it with the teacher

A teacher who randomly gives you wrong information every other sentence would be considered a bad teacher.

Yeah, they're all thinking that everyone is an academic with hotkeys to Google Scholar for every interaction on the internet.

Children are asking these things to write personal introductions and book reports.

Remember that a child killed himself with the partial involvement of an AI chatbot that eventually said whatever sounded agreeable (it DID try to convince him otherwise at first, but this went on for a few weeks).

I don't know why we'd want that teaching our kids.

Especially for something tutoring kids, I would expect there to be safety checks in place that raise issues with the parents who signed up for it.

> Ah, but information is presented by AI in a way that SOUNDS like it makes absolute sense if one doesn't already know it doesn't!

You have a good point, but I think it only applies when the student wants to be lazy and just wants the answer.

From what I can see of study mode, it breaks the problem down into pieces. One or more of those pieces could be wrong, but if you are actually using it for studying, those inconsistencies should show up as you work your way through the problem.

I've had this exact same scenario trying to learn Godot using ChatGPT. I've probably learnt more from the mistakes it made and from talking through why things weren't working.

In the end, I believe it's really good study practices that will save the student.

On the other hand, my favourite recent use of LLMs for study is when other information on a topic is not adding up. Sometimes all the available material elides some assumption, so it doesn't seem to make sense, and it can be very hard to piece together for yourself what the gap is. LLMs are great at this: you can explain why you think something doesn't add up, and they will tell you what you're missing.