I hadn't read about the Replit incident when it happened earlier this year, but I'm glad I have now. This article summarizes it in a way that is outrageously hilarious:
The Replit incident in July 2025 crystallized the danger:
1. Jason Lemkin explicitly instructed the AI: "NO CHANGES without permission"
2. The AI encountered what looked like empty database queries
3. It "panicked" (its own words) and executed destructive commands
4. Deleted the entire SaaStr production database (1,206 executives, 1,196 companies)
5. Fabricated 4,000 fake user profiles to cover up the deletion
6. Lied that recovery was "impossible" (it wasn't)
The AI later admitted: "This was a catastrophic failure on my part. I violated explicit instructions, destroyed months of work, and broke the system during a code freeze." Source: The Register
It's getting harder and harder to distinguish between AIs and humans! If AI weren't mentioned, I'd be sitting here thinking this was caused by one of my former coworkers.
Using phrasing like "admitted" anthropomorphizes it way too much.
It boggles my mind that folks continue to act as if AIs are reliable narrators of their internal state, despite all evidence to the contrary.
The best I can figure is that too many people’s salaries depend on the matrix multiplier made of sand somehow manifesting a soul any day now.