I cannot wrap my head around the anecdote that opens the article:

> Lately I’ve heard a lot of stories of AI accidentally deleting entire codebases or wiping production databases.

I simply... I cannot. Someone connected a poorly understood AI to prod; it ignored instructions, deleted the database, and tried to hide it. "I will never use this AI again," says this person, but I think he's not going far enough: he (the human) should be banned from production systems as well.

This is like giving full access to production to a new junior dev who barely understands best practices and is still in training. This junior dev is also an extraterrestrial with non-human, poorly understood psychology, selective amnesia and a tendency to hallucinate.

I mean... damn, is this the future of software? Have we lost our senses, and in our newfound vibe-coding passion forgotten all we knew about software engineering?

Please... stop... I'm not saying "no AI"; I do use it. But good software practices remain as valid as ever, if not more so!

The common story getting shared all over is from a guy named Jason Lemkin. He’s a VC who did a live vibe-coding experiment for a week on Twitter where he wanted to test if he, a non-programmer, could build and run a fake SaaS by himself.

The AI agent dropped the “prod” database, but it wasn’t an actual SaaS company or product with customers. The prod database was filled with synthetic data.

The entire thing was an exercise, but the story is getting shared everywhere without the context that it was a vibe-coding experiment. Note how none of the hearsay stories can name a company that suffered this fate, just a lot of “I’m hearing a lot of stories” that it happened.

It’s grist for the anti-AI social media (including HN) mill.

Claude has dropped my dev database about three times at this point. I can totally see how it would drop a prod one if connected to it.

OK, ok, I read the Twitter posts and didn't get the full context that this was an experiment.

I'm actually relieved that nobody (currently) thinks this was a good idea.

You've restored my faith in humanity. For now.

I generally agree with you, but I think a lot of people are thinking about Steve Yegge, in addition to Jason Lemkin. And it did lock him out of his real prod database.

Is there a need for a wiki to collect instances of these stories? Having a place to check what claims are actually being made would keep the details from drifting like an urban legend.

I tried to determine the origin of a story about a family being poisoned by mushrooms that an AI said were edible. The country involved seemed to change from retelling to retelling, and I couldn't pin down the original source. I got the feeling it was an imagined possibility extrapolated from known instances of AI-generated mushroom guides.

There seem to be cases of warnings about what could happen that turn into "This Totally Happened" behind a paywall, followed by a lot of "paywalled-site reported this totally happened".

It's a matter of priorities. It's cheap and fast, and there is a chance that it will be OK. Even just OK until I move on. People often make risky choices for those reasons, and not just with IT systems: the crash of 2008 was largely the result of people betting (usually correctly) that the wheels would not fall off until after they had collected a few years of bonuses.

OK, if it's a matter of priorities, let's just ignore all the hard-learned lessons of software engineering and vibe-code our way through life, crossing fingers and hoping for the best.

Type systems? Who needs them, the LLM knows better. Separate prod, dev, and staging environments? To hell with them, the LLM knows better. Testing? Nope, the LLM told me everything's sweet.

(I know you're not saying this; I'm just venting my frustration. It's as if the software engineering world finally and conclusively decided engineering wasn't necessary at all.)

I do not know who is doing the math, but deleting production data does not sound very cheap to me...

No, but the decision is taken on the basis that it probably will not happen, and if it does there is a good chance that the person taking the decision will not be the one to bear the consequences.

That is why I chose to compare it to the 2008 crash. The people who decided to take the risks that led to it came out of it OK.

99% agree, but:

>(the human) should be banned from production systems as well.

The human may have learnt the lesson... if not, I would still be banned ;)[0]

[0] I did not delete a database, but cut power to the rack running the DB

I cut the power... but I did not drop the database.

I don't think it's the same. I'm not arguing you must not make mistakes, because all of us do.

I mean: if you're a senior, don't connect a poorly understood automated tool to production, give it the means to destroy production, and then (knowing these tools are prone to hallucinations) tell it "but please don't do it unless I tell you to". As a fun thought experiment, imagine this was Skynet: "please don't start nuclear war with Russia. We have a simulation scenario, please don't confuse it with reality. Anyway, here are the launch codes."
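To make "don't give it the means" concrete, here's a minimal sketch of what I have in mind (the function and deny-list are hypothetical, not anyone's actual setup): put a deterministic guardrail between the agent and the database, and pair it with a read-only DB role, instead of relying on a polite prompt.

```python
import re

# Deny-list sketch: SQL an agent wired to prod should never be allowed to run.
# Hypothetical guardrail only; a real setup would also use a read-only
# database role so the DB itself enforces the boundary.
DESTRUCTIVE = re.compile(r"^\s*(DROP|TRUNCATE|DELETE|ALTER|GRANT)\b", re.IGNORECASE)

def run_agent_sql(cursor, statement: str) -> None:
    """Execute agent-generated SQL only if it looks non-destructive."""
    if DESTRUCTIVE.match(statement):
        raise PermissionError(f"blocked destructive statement: {statement!r}")
    cursor.execute(statement)  # cursor: any DB-API 2.0 cursor
```

The regex isn't bulletproof, and that's not the point; the point is that the refusal lives in code the model can't talk its way past.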

Ignoring all software engineering best practices is a junior-level mistake. If you're a senior, you cannot be let off the hook. This is not the same as tripping on a power cable or accidentally running a DROP in production when you thought you were in testing.

I mean, everyone breaks prod at least once; AI is just the one that doesn't learn from the mistake.

To be clear, I'm not saying "don't make mistakes", because that's impossible.

I'm merely saying "don't play (automated) Russian roulette".