Oh, that's pretty good! I've been doing this with various LLMs already, writing elaborate system prompts to turn them into Socratic-style teachers, or more generally into tutors that don't just give the answer outright, and I've generally been impressed with how well it works and how much I enjoy it. The one thing to watch out for: when you're talking about something you don't already know well, it becomes harder to spot hallucinations, so it's a good idea to always verify against external resources as well.

What these really need IMO is an integration where they generate just a few Anki flashcards per session, or even multiple-choice quizzes, which you could then review with spaced repetition. I've been doing this manually, but having it integrated would remove another hurdle.
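For anyone doing the manual version in the meantime, one small shortcut: Anki can import plain tab-separated text files (Front, tab, Back, one note per line), so a session's cards can be dumped to a file in a few lines and imported in bulk. A minimal sketch, with made-up example cards standing in for whatever the LLM session produced:

```python
import csv

# Hypothetical (front, back) pairs from a tutoring session.
cards = [
    ("What does spaced repetition optimize for?",
     "Reviewing material just before you would otherwise forget it."),
    ("Why verify an LLM tutor's answers externally?",
     "Hallucinations are hardest to spot in topics you don't know well."),
]

def write_anki_tsv(cards, path):
    # Anki's import dialog accepts plain text with one note per line,
    # fields separated by tabs; csv with a tab delimiter handles escaping.
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f, delimiter="\t")
        writer.writerows(cards)

write_anki_tsv(cards, "session_cards.txt")
```

From there it's File → Import in Anki, mapping the two fields to a basic note type.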

On the other hand, I'm unsure whether we're training ourselves to be lazy even with this, in the sense of the "brain atrophy" that's been talked about regarding LLMs. Where I used to have to pull information from several sources and synthesize my own answer by carrying over ideas from related topics, now I get everything pre-chewed, even if in the form of a tutor.

Does anyone know how this is handled with human tutors? Is it just that time with the human is limited, so by necessity you still do some of the "crawl it yourself" work on your own?

For the "brain atrophy" concern: I've thought about that too. My guess is that it's less about using tools and more about how we use them.