Putting yourself in a situation where this could happen is kinda insane, right? Could be something I'm missing.

I can't think of any specific example where I would let any agent touch a production environment, least of all production data. AI aside, it makes sense to make any major changes in a dev/staging/preview environment first.

Not really sure what the lesson would be here. Don't punch yourself in the face repeatedly?

As the tool gets better, people trust it more. It's like Tesla's self-driving: "almost" works, and that's good enough for people to take their hands off the wheel, for better or for worse.

The "almost" part of automation is the issue, plus of course the marketing attached to it to make it a product people want to buy. This is the expected outcome and is already priced in.

Exactly. Waymo was talking about this a few years back: they found that building it up gradually wouldn't work, because people stop paying attention when it's "almost" there, until it isn't and it crashes. So they set out to make their automation good enough to operate on its own without a human driver before starting to deploy it.

I would say the opposite here. The perpetrator rejected multiple warnings from Claude about bad consequences, and multiple suggestions from Claude to act in safer ways. It reminds me of an impatient boss who demands that an engineer stop all this nonsense talk about safety and just do the damn thing, quick and dirty.

The guys who blew up the Chernobyl NPP also had to deliberately disable multiple safety systems that would have prevented the catastrophe. Well, you get what you ask for.

I view it more as "I crashed my car, I should have been wearing my seat belt, wear yours!"

Source: had codex delete my entire project folder including .git. Thankfully I had a backup.
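For anyone who wants that backup to be automatic rather than lucky: a minimal sketch of snapshotting a project (`.git` included) before letting an agent loose on it, and restoring afterward. The paths and filenames here are illustrative, not from the post.

```shell
set -eu
# Demo in a throwaway directory standing in for a real project.
proj=$(mktemp -d)
mkdir -p "$proj/.git"
echo 'important work' > "$proj/notes.txt"

# Snapshot everything, .git included, before the agent runs.
backup="$proj-backup.tar.gz"
tar -czf "$backup" -C "$proj" .

# Simulate the agent deleting the whole folder...
rm -rf "$proj"

# ...and restore from the snapshot.
mkdir -p "$proj"
tar -xzf "$backup" -C "$proj"
cat "$proj/notes.txt"
```

A cron job or a pre-run wrapper script doing the same thing turns "thankfully I had a backup" into a guarantee instead of a stroke of luck.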

But that's the "promise" of AI (that management believes), isn't it? That it can replace an engineer because it's as good or better -- so why wouldn't you allow it to touch your production database? (I agree with you, just pointing out the difference between what's being sold and reality.)

Yep, you're not insane, they were amateur.

I wonder if Iran is considered a “production environment”?

Why are you writing in this defensive manner? The post isn't an anti-AI screed, it's a "I screwed up, here's what I did and how to avoid it."

You say "Not really sure what the lesson would be here", but the entire content of the blog post is a lesson. He's writing about what he changed so he won't make the same mistake.

There is a total mismatch between what's written and how you're responding. We don't normally call people idiots for trying to help others avoid their mistakes.

The culture war around AI is obliterating discourse. Absolutely everything is forced through the lens of pro-AI or anti-AI, even when it's a completely neutral, "I deleted my data, here's what I changed to avoid doing it again", where the tool in question just happens to be AI.

I didn't take it to be defensive. A bit tongue in cheek, but not defensive. I think the person you're responding to has a good point though. AI or not, you probably shouldn't futz around with prod before doing so in a lower env. Guardrails for both AI and humans are important.