It would be good if there were a mode where the AI trained the human operator as it worked, to reduce future reliance. Instead of just writing or editing a document, it would explain in reasonable detail what it was doing, and tailor the explanation to the operator's level of understanding. It might even quiz the operator to ensure understanding.
This would take more time in the short run, but in the long run it would result in more well-rounded humans.
When there are power/internet/LLM outages, some people are going to be rendered completely helpless, and others will be more modestly impacted — but will still be able to get some work done.
We should aim to have more people in the latter camp.
> some people are going to be rendered completely helpless,
This I don't have any hope about. Tech companies have been trying to make their customers ignorant (and terrified) of the functioning of their own computers as a moat to prevent them from touching anything, and to convince them to allow every violation or imposition. They've convinced many that their phones aren't even computers, and must operate by different, more intrusive and corporate-friendly guidelines. Now customers are being taught that their computers are actually phones, that mere customers aren't responsible enough to control those either, and that the people who should have control are the noble tech CEOs and the government. The companies cannot be shamed out of doing this; they genuinely think people are just crops to be harvested. You being dumber means you have to pay them out of necessity rather than convenience.
In 1985, they would teach children who could barely tie their shoes what files and directories were on computers, and we'd get to program in LOGO. These days? The panicked look I often see on normal people's faces when I ask them where they saved the file that their life depends on and is missing or broken makes me very sad. "Better to trick them into saving to OneDrive," the witches cackle. "Then if they miss a monthly payment, we can take their files away!"
The Claude CLI has this; it's called learning mode, and you can create custom modes to tweak it further.
Can you describe how to access this feature? I can't find anything about it online, and when I asked Claude on the command line, nothing came up for "learning mode".
Seems very cool.
Update: perhaps it's the verbose flag you were talking about, e.g. `claude --verbose`?
I like this a lot...
Ah, cool. Is this meant to be used for learning specifically, or just something that can be toggled whenever you're using Claude to help you with anything?
https://openai.com/index/chatgpt-study-mode/
OpenAI released Study Mode. I don't think it's anything special beyond a custom prompt that tells the model to act as a teacher, but it's a good example of what these bots can do if you prompt them right.
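If you want to approximate it yourself, here's a minimal sketch using the OpenAI Python SDK. The system prompt wording and the model name are my own placeholders, not whatever Study Mode actually uses:

```python
# Rough, homemade approximation of a "study mode" via a teacher-style system prompt.
# The prompt text and model name below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

STUDY_PROMPT = (
    "Act as a patient tutor. Don't hand over answers outright: break the problem "
    "into steps, ask the student what they think at each step, and finish with a "
    "short quiz to check understanding."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat-capable model works
    messages=[
        {"role": "system", "content": STUDY_PROMPT},
        {"role": "user", "content": "Explain how binary search works."},
    ],
)
print(response.choices[0].message.content)
```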
The bots as they stand seem to be sycophantic and make a lot of assumptions (hallucinations) rather than asking for clarification or further instruction. That isn't really a core truth of bot behaviour; it's more about adhering to American social norms for corporate communication, deference to authority and so on. You can prompt the bots to behave more usefully for coding. One of my tricks is to tell the bot to ask me clarifying questions before writing any code, which stops it making assumptions about functionality I haven't specified in the brief.
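The clarifying-questions trick is just a standing instruction prepended to the request (or dropped into a system prompt or project instructions file). The wording below is illustrative, not a canonical formula:

```python
# A standing instruction that forces clarification before any code gets written.
# Wording is illustrative; tune it to your own workflow.
CLARIFY_FIRST = (
    "Before writing any code, ask me clarifying questions about anything not "
    "specified in the brief: inputs, outputs, edge cases, performance targets, "
    "and dependencies. Only start coding once I've answered."
)

task = "Write a function that deduplicates customer records."
prompt = f"{CLARIFY_FIRST}\n\n{task}"  # send this as the user message or system prompt
print(prompt)
```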
For non-coding use cases, I like exploring ideas with the bot and then, every now and then, prompting it to steel-man the opposing view to make sure I'm not getting dragged down a rabbit hole. Then I can take the best of these ideas and form a conclusion: Hegelian dialectics and all that.