But isn't part of the point of this that you want people who are eager to learn about AI and how to use it responsibly? You probably shouldn't want employees who, in their rush to automate tasks or ship AI-powered features, will expose secrets, credentials, PII, etc. You want people who can use AI to be highly productive without being a liability risk.

And even if you're not in a position to hire all of those people, perhaps you can sell to some of them.

Honestly, it seems worse than web3. Yes, companies throw up their hands and say "well, yeah, the original inventors are probably right, our safety teams quit en masse or we fired them, the world's probably gonna go to shit, but hey, there's nothing we can do about it, and maybe it'll all turn out ok!" And then they hire the guy who vibecoded the clawdbot so people can download whatever Trojan malware they like onto their computers.

I've seen Twitter threads where people literally celebrate that they can strip the RLHF from models and then download and run arbitrary code on their computers. I am not kidding when I say this is going to end up far worse than web3 rugpulls. At least there, you could only lose the magic crypto money you put in. Here, you don't even have to participate to get pwned by a swarm of bots. For example, it's trivially easy to do reputational destruction at scale, as an advanced persistent threat. Just pick your favorite politician and watch how quickly they start trying to ban it. This is just one bot: https://www.reddit.com/r/technology/comments/1r39upr/an_ai_a...