Modern-day Cliff's Notes.

There is no way to learn without effort. I understand they are not claiming this, but many students want a silver bullet. There isn't one.

But tutors are fine. The video is suggesting that this is an attempt to automate a tutor, not replace Cliff's Notes. Whether it succeeds, I have no idea.

Good tutors are fine, bad tutors will just give you the answer. Many students think the bad tutors are good ones.

Yep, this is a marketing problem. Your users' goal is to learn, but they also want to expend as little effort as possible. They'll love it if you just tell them the answers, but you're also doing them a disservice by doing so.

The same problem exists for all educational apps. Duolingo users have the goal of learning a language, but they also want to use Duolingo for only a few minutes a day, and they want to feel like they're making progress. Duolingo's goal is to keep you using Duolingo; if possible it'd be good for you to learn the language, but their #1 goal is to keep you coming back. Oddly, Duolingo might not even be wrong to focus primarily on keeping you moving forward, given how many people give up when learning a new language.

> Today we’re introducing study mode in ChatGPT—a learning experience that helps you work through problems step by step instead of just getting an answer.

So, unless you have experience with this product that contradicts their claims, it's a good tutor by your definition.

Cliff's Notes with a near-infinite zoom feature.

The criticism of Cliff's Notes is generally that it's a superficial glance. It can't go deeper; it's basically a summary.

The LLM is not that. It can zoom in and out of a topic.

I think it's a poor criticism.

I don't think it's a silver bullet for learning, but it's a unified, consistent interface across topics and courses.

> It can zoom in and out of a topic.

Sure, but only as long as you're not terribly concerned with the result being accurate. It's like that old reconstruction of Obama's face from a pixelated version [1], but this time about a topic where you are, by definition, not capable of identifying whether the answer is correct.

[1] https://www.theverge.com/21298762/face-depixelizer-ai-machin...

I'm capable of asking it a couple of times about the same thing.

It's unlikely to make up the same bullshit twice.

Usually exploring a topic in depth finds these issues pretty quickly.

Except it generally is shallow for any sufficiently advanced subject, and the scary part is that you don't know when it's reached the limit of its knowledge, because it'll come up with some hallucination to fill in the blanks.

If LLMs got better at just responding with "I don't know", I'd have less of an issue.

I agree, but it's a known limitation. I've been duped a couple times, but I mostly can tell when it's full of shit.

With some topics you learn to be wary and double-check, or ask it to cite sources. (For me, that's car repair. It's wrong a lot.)

I wish it had some kind of confidence level assessment or ability to realize it doesn't know, and I think it eventually will have that. Most humans I know are also very bad at that.

This basically functions as a switch you can press that says "more effort, please". After every response it makes you solve a little comprehension-check problem before moving on. You can try to weasel out of it, but it does push back a bit.

Unavoidably, people who don't want to work won't push the "work harder" button.