Man, such a difference between a human whoops and an AI. Had a junior dev hork all environments once: the script she thought had worked in nonprod didn't modify an index the way she expected, and she quickly wiped out everything else in every environment and every data center. It was such a teachable moment. She was my very first hire when I was asked to build a team, and she came out of it crazy careful: trust, but verify, on anything with blast radius.
The AI? Nothing learned, I suspect. Not in a meaningful way anyhow.
This is something I really hope can be solved.
I long for a “copilot” that can learn from me continuously such that it actually helps if I teach it what I like somehow.
And what will your role be, then?
I’m not sure what you mean? I have goals that I want to achieve; lil ai buddy comes along and helps me, over time buddy becomes better able to help me do stuff.
What do you mean role? Person who does stuff I guess, same as it is now.
Teacher.
Why you, of all the other possible teachers? Models don't need individual teachers.
Because I'm the one employing it? A model that makes a "delete production database" mistake clearly needs to be taught not to do that, and the person whose production database was deleted ought to be able to teach it not to do that. This seems quite reasonable to me.
And it’s not the junior’s fault when they do it either.
Have some controls in place. Don’t rely on nobody being dumb enough to do X. And that includes LLMs.
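A minimal sketch of what "controls in place" could look like, assuming the deploy environment is exposed via an `APP_ENV` variable; the function and exception names here are illustrative, not from any real library:

```python
import os


class EnvironmentMismatch(Exception):
    """Raised when a destructive script runs somewhere it didn't expect."""


def guard_destructive(expected_env, confirm=None):
    # The script must declare which environment it believes it is in.
    actual = os.environ.get("APP_ENV", "dev")
    if actual != expected_env:
        raise EnvironmentMismatch(
            f"script expected {expected_env!r} but is running in {actual!r}"
        )
    # Production additionally demands an explicit, typed-out confirmation,
    # so neither a human nor an LLM can nuke it by accident.
    if actual == "prod" and confirm != "yes-delete-prod":
        raise EnvironmentMismatch("production changes need explicit confirmation")
```

A script that calls `guard_destructive("nonprod")` before dropping anything fails loudly the moment it finds itself pointed at prod, instead of relying on whoever (or whatever) ran it being careful.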