> Should we trust the information at face value without verifying from other sources? Of course not, that's part of the learning process.
People who are learning a new topic are precisely the people least able to do this.
A friend of mine used ChatGPT to try to learn calculus. It gave her an example... with the constants changed in such a way that it became a completely different problem (in the way that 1/x^2 is a totally different integration problem from 1/(x^2 + 1)). It then proceeded to work the problem incorrectly (ironically, in exactly the way I'd expect a calculus student who doesn't really understand algebra to get it wrong), produced a wrong answer, and merrily went on to explain to her how to arrive at that wrong answer.
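(For the curious, the two really are different problems: the integral of 1/x^2 is just a power-rule problem, giving -1/x + C, while the integral of 1/(x^2 + 1) is arctan(x) + C, which you'd normally get by recognizing the form or via a trig substitution. Similar-looking expressions, completely different techniques.)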
The last time I tried to use an LLM on a question I didn't know the answer to (finding a pattern in a list of states where I couldn't spot an obvious one), it gave me an incorrect answer that (a) did not apply to six of the listed states, (b) DID apply to six states that were NOT listed, even though I asked for an exclusive property, (c) miscounted the elements of the list, and (d) produced no fewer than eight consecutive completely false explanations on followup, only four of which it caught itself, before finally giving up.
I'm all for expanding your horizons and having new interfaces to information, but reliability is especially important when you're learning (because otherwise you build on broken foundations). If it fails at problems this simple, I certainly don't trust it to teach me anything in fields where I can't easily dissect bullshit. In principle, I don't think it's impossible for AI to get there; in practice, it doesn't seem to be.