As annoying as it is when the human support tech is wrong about something, I'm not hoping they'll lose their job as a result. I want them to get better training/docs so it doesn't happen again, just as I'm sure they'll do with this AI bot.
That only works well if someone is in an appropriate job, though. Keeping someone in a position they're unqualified for and repeatedly failing at isn't doing anyone any favors.
Fully agree. My analogy fits here too.
> I'm not hoping they'll lose their job as a result
I have empathy for humans. It's not yet a thought crime to suggest that the existence of an LLM should be ended. The analogy would make me afraid of the future if I thought about it too much.