The most aggravating fact here is not even the AI blunder. It's that deleting a volume in Railway also deletes the volume's backups.
This was bound to happen, AI or not.
> Because Railway stores volume-level backups in the same volume — a fact buried in their own documentation that says "wiping a volume deletes all backups" — those went with it.
Yup, this is bizarre. A top use case for needing a backup is when you accidentally delete the original.
You need to be able to delete backups too, of course, but that absolutely needs to be a separate API call. There should never be any single API call that deletes both a volume and its backups simultaneously. Backups should be a first line of defense against user error as well.
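Something like this, as a minimal sketch (hypothetical names, not Railway's actual API): the two delete operations live behind separate calls, so there is no single call that can take out both.

```python
# A minimal sketch (hypothetical names, not Railway's real API) of the
# invariant: no single call may destroy a volume and its backups together.

class StorageAPI:
    def __init__(self) -> None:
        self.volumes: dict[str, bytes] = {}
        self.backups: dict[str, bytes] = {}  # lifecycle independent of volumes

    def delete_volume(self, volume_id: str) -> None:
        """Removes only the live volume; its backups stay restorable."""
        del self.volumes[volume_id]
        # Deliberately no backup cleanup here.

    def delete_backup(self, backup_id: str) -> None:
        """A separate, explicit call with its own permission check."""
        del self.backups[backup_id]

api = StorageAPI()
api.volumes["vol-1"] = b"live data"
api.backups["bak-1"] = b"yesterday's copy of vol-1"
api.delete_volume("vol-1")
assert "bak-1" in api.backups  # an accidental volume delete stays recoverable
```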
And I checked the docs -- they're called backups and can be set to run at a regular interval [1]. They're not one-off "snapshots" or anything.
[1] https://docs.railway.com/volumes/backups
Plus, backups should be time-gated: the software should physically block you from removing backups for X days.
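This already exists off the shelf: S3 Object Lock in compliance mode does exactly that. A sketch with boto3 (bucket and key names are made up, and the bucket has to have been created with Object Lock enabled):

```python
# Sketch: upload a backup that cannot be deleted for 7 days, using
# S3 Object Lock in COMPLIANCE mode. Bucket/key names are made up; the
# bucket must have been created with Object Lock enabled.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")
with open("backup.dump", "rb") as f:
    s3.put_object(
        Bucket="my-backup-bucket",        # assumed bucket name
        Key="db/backup-2024-01-15.dump",
        Body=f,
        # In COMPLIANCE mode, not even the root account can shorten this.
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=7),
    )
```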
This is one of those things that seems like a good idea on the surface but is rife with problems.
Does the company hosting the backups do it for free? Or do they charge their customers to keep holding onto backups they no longer want?
Is “my DB company refuses to delete the data” a valid legal response to a copyright claim or a GDPR erasure demand?
I have no idea about the former, but yes, it is a valid excuse for the latter. OK, maybe not that specific one, but in general backups are going to be excluded, especially those stored on tape or WORM media; no one expects a company to remove the offending record here and now, as long as it is inaccessible for all practical purposes.
The GDPR says:
> The data subject shall have the right to obtain from the controller the erasure of personal data concerning him or her without undue delay and the controller shall have the obligation to erase personal data without undue delay
"Undue delay" is subjective, but "we'll keep backups of your data for a week in case you change your mind" seems easy to justify in court.
Azure SQL Database did this too for a while until enough companies complained about losing their data and their backups with a single action.
The difference is that best practice in Azure SQL has always been to store your own copies of backups and run the database in some HA/geo-redundancy mode that blocks deletion.
Which sounds great, except that Azure SQL -- like many cloud services -- was carefully designed to be a tarpit into which you can import your data, but can't get your data back out.
For example, for at least a few years its "external" backups were simply the BACPAC export function, which wasn't transactionally consistent and had all sorts of fun limits.
Yeah, still some fun limits in Azure SQL. Like you can't take the databases offline or pause the service.
Especially in combination with not having scoped API keys at all. If I read the article correctly, any key for the dev/staging environment can also access their prod systems. That's just insane.
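For contrast, the check that was apparently missing is a few lines. A sketch, with made-up names, of what environment-scoped keys look like:

```python
# Sketch (made-up names): every API key carries an environment scope,
# and requests against any other environment are rejected outright.
from dataclasses import dataclass

@dataclass
class ApiKey:
    token: str
    environment: str  # e.g. "staging" or "production"

def authorize(key: ApiKey, target_environment: str) -> None:
    if key.environment != target_environment:
        raise PermissionError(
            f"key scoped to {key.environment!r} cannot touch {target_environment!r}"
        )

staging_key = ApiKey(token="sk-...", environment="staging")
authorize(staging_key, "staging")          # fine
try:
    authorize(staging_key, "production")   # rejected
except PermissionError as e:
    print(e)                               # the blast radius stops here
```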
I'd never feel comfortable without a second backup at a different provider anyway: one that isn't deletable with any role/key that is actually used on any server or in automation anywhere.
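Concretely, the credential that lives on servers only needs write access. A sketch of an AWS-style IAM policy (bucket name made up) that can add new backups but can never delete them:

```python
import json

# Sketch of an AWS-style IAM policy (bucket name made up) for the key
# that lives on servers and in automation: it can write new backups but
# carries no delete permissions. Pair it with bucket versioning so an
# overwrite can't destroy old data either.
WRITE_ONLY_BACKUP_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # Append-only: deliberately no s3:DeleteObject or
            # s3:DeleteObjectVersion in here.
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::offsite-backups-example/*",
        }
    ],
}

print(json.dumps(WRITE_ONLY_BACKUP_POLICY, indent=2))
```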
Yeah, I'm not sure why this fact is buried. Yes, the author is blaming Cursor and Railway and doesn't seem to be taking responsibility. But at the same time, many people are OK with LLMs going wild on their codebase because they know they can restore from backups. Wise idea? Probably not. But that's why they're called backups and not snapshots.
It's a mistake I'll certainly learn from. Don't believe it when a cloud provider says it has backups of your shit.
If your backup is inside the same thing you backed up, you don't have a backup. You have an out of date copy.
All my backups are inside the same universe as what is being backed up. A boundary must be drawn somewhere and this is one of many reasonable boundaries. As I understand it, the backup isn't "inside" the volume but is attached to it so that deleting the volume deletes the backups.
> All my backups are inside the same universe as what is being backed up.
Unless the commenter was backing up their entire universe, this comment is a non sequitur.
Can we at least agree to draw the line so that if a single call can delete the live data AND all backups, they shouldn't be called "backups", but rather snapshots?
I would also say that if your backup is controlled by the same third party as the primary, it's not a backup.
Did you back up the universe inside the universe? Otherwise your comment doesn't seem related to what I wrote.
Yes, that is insane. Or, said another way, they simply didn't have any working backup strategy!
To be 100% fair, having only one provider for backups is really risky. At minimum, a 3-2-1 setup (three copies, on two different media, one off-site) would be better.
Is that why they call it S3?
Principle of most surprise.
The most aggravating fact is that the AI slopper who got owned by his own dumbness and his AI just posted an AI-generated post that will generate nothing but schadenfreude.
It's much more aggravating that they seem to be learning nothing, pushing the blame onto everything except themselves.
Exactly! I have very little sympathy...
> This isn't a story about one bad agent or one bad API. It's about an entire industry building AI-agent integrations into production infrastructure faster than it's building the safety architecture to make those integrations safe.
Are they really so clueless that they cannot recognise that there is no guardrail to give an agent other than restricted tokens?
Through this entire rant (which, by the way, they didn't even bother to fucking write themselves), they point blank refuse to acknowledge that they chose to hand the reins over to something that can never have guardrails, knowing full well that it can never have guardrails, and now they're trying to blame the supplier of the can't-have-guardrails product, complaining that the product that literally cannot have guardrails did not, in actual fact, have guardrails.
They get exactly the sympathy that I reserve for people who buy magic crystals and who then complain that they don't work. Of course they don't fucking work.
Now they're blaming their suppliers for not performing the impossible.
Sympathy?? I’m glad it happened and I hope it happens again lmao
I'm glad that I'm not the only person who felt this! It does feel like the post is missing some deserved self-reflection.
AI slopper here :) Kind words from a human. The irony is, there is tremendous truth in the post, but you used big words, so good for you, bud.
This is a huge issue.
A lot of VPSes operate this way as well: delete the VM, lose your backups.
A "backup" like that should be called a "snapshot".
"The author's confession is above..."