> The technical fix was embarrassingly simple: stop pushing to main every ten minutes.
Wait, you push straight to main?
> We added a rule — batch related changes, avoid rapid-fire pushes. It's in our CLAUDE.md (the governance file that all our AI agents follow):
> Avoid rapid-fire pushes to main — 11 pushes in 2h caused overlapping Kamal deploys with concurrent SQLite access.
Wait, you let _Claude_ push your e-commerce code straight to main which immediately results in a production deploy?
This is the actual problem:
"Kamal runs blue-green deploys — it starts a new container, health-checks it, then stops the old one. During the switchover, both containers are running. Both mount ultrathink_storage. Both have the SQLite files open."
WAL mode requires shared access to System V IPC mapped memory. This is unlikely to work across containers.
In case anybody needs a refresher:
https://en.wikipedia.org/wiki/Shared_memory
https://en.wikipedia.org/wiki/CB_UNIX
https://www.ibm.com/docs/en/aix/7.1.0?topic=operations-syste...
Thanks for this, the anecdote with the lost data was very concerning to me.
I think you're exactly right about the WAL shared memory not crossing the container boundary. EDIT: It looks like WAL works fine across Docker boundaries, see https://news.ycombinator.com/item?id=47637353#47677163
I don't know much about Kamal but I'd look into ways of "pausing" traffic during a deploy - the trick where a proxy pretends that a request is taking another second to finish when it's actually held in the proxy while the two containers switch over.
From https://kamal-deploy.org/docs/upgrading/proxy-changes/ it looks like Kamal 2's new proxy doesn't have this yet, they list "Pausing requests" as "coming soon".
Pausing requests and then briefly running two SQLite instances probably won't prevent corruption. It might just make it less likely, and harder to catch in testing.
The easiest approach is to kill the old SQLite process, then start the new one. I'd use a Unix lock file as a last-resort mechanism (assuming the container environment doesn't somehow break those).
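Something like this, as a rough sketch — the lock file path is hypothetical, and it assumes flock() actually works on whatever filesystem backs the shared volume:

```python
# Sketch: take an exclusive advisory lock before opening SQLite, so a second
# container on the same volume refuses to start instead of corrupting the DB.
# Assumes the lock file lives on the shared mount and that flock() works there.
import fcntl
import sqlite3
import sys

LOCK_PATH = "/tmp/app.lock"  # hypothetical; would live on the shared volume

lock_file = open(LOCK_PATH, "w")
try:
    # Non-blocking: fail fast if another instance already holds the lock.
    fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError:
    sys.exit("another instance holds the SQLite lock; refusing to start")

conn = sqlite3.connect("/tmp/app.db")  # hypothetical DB path
# ... serve requests; the kernel releases the lock when the process exits.
```

The non-blocking flag matters: during a deploy you want the new container to fail loudly (and let the orchestrator retry) rather than silently open the database alongside the old one.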
I'm saying you pause requests, shut down one of the SQLite containers, start up the other one and un-pause.
> I think you're exactly right about the WAL shared memory not crossing the container boundary.
I don't, fwiw (so long as all containers are bind mounting the same underlying fs).
I just tried an experiment and you're right, WAL mode worked fine across two Docker containers running on the same (macOS) host: https://github.com/simonw/research/tree/main/sqlite-wal-dock...
Could the two containers in the OP have been running on separate filesystems, perhaps?
I dug into this limitation a bit around a year ago on AWS, using a sqlite db stored on an EFS volume (I think it was EFS -- relying on memory here) and lambda clients.
Although my tests were slamming the db with reads and writes, I didn't induce a bad read or write using WAL.
But I wouldn't use experimental results to override what the sqlite people are saying. I (and you) probably just didn't happen to hit the right access pattern.
"the sqlite people" don't say anything that contradicts this
Perhaps they're using NFS or something - which would give them issues regardless of container boundaries.
It would explain the corruption:
https://sqlite.org/wal.html
The containers would need to use a path on a shared FS to set up the SHM handle, and, even then, this sounds like the sort of thing you could probably break via arcane misconfiguration.
I agree shm should work in principle though.
Not how SQLite works (any more)
> The wal-index is implemented using an ordinary file that is mmapped for robustness. Early (pre-release) implementations of WAL mode stored the wal-index in volatile shared-memory, such as files created in /dev/shm on Linux or /tmp on other unix systems. The problem with that approach is that processes with a different root directory (changed via chroot) will see different files and hence use different shared memory areas, leading to database corruption. Other methods for creating nameless shared memory blocks are not portable across the various flavors of unix. And we could not find any method to create nameless shared memory blocks on windows. The only way we have found to guarantee that all processes accessing the same database file use the same shared memory is to create the shared memory by mmapping a file in the same directory as the database itself.
You might consider taking the database(s) out of WAL mode during a migration.
That would eliminate the need for shared memory.
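A rough sketch of what that could look like (hypothetical path; note that leaving WAL mode requires an exclusive lock on the database, so the app has to be quiesced first):

```python
# Sketch of the suggestion: flip the database out of WAL mode before a deploy
# and back afterwards. Switching journal modes needs an exclusive lock, so
# this only works while nothing else has the database open.
import sqlite3

def set_journal_mode(path, mode):
    conn = sqlite3.connect(path)
    # The pragma returns the mode actually in effect, so we can verify it took.
    (result,) = conn.execute(f"PRAGMA journal_mode={mode}").fetchone()
    conn.close()
    return result

# Before the deploy: rollback-journal mode needs no shared-memory file.
# set_journal_mode("/storage/app.db", "DELETE")   # hypothetical path
# ... run the blue-green switchover ...
# set_journal_mode("/storage/app.db", "WAL")
```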
The SQLite documentation says in strong terms not to do this. https://sqlite.org/howtocorrupt.html#_filesystems_with_broke...
See more: https://sqlite.org/wal.html#concurrency
They tell you to use a proper FS, which is largely orthogonal to containerization.
WAL relies on shared memory, so while a proper FS is necessary, it isn't going to help in this case.
Why does it not help if both containers can mmap the same -shm file?
Shared memory across containers is a property of a containerization environment, not a property of a file system, "proper" or not.
It's a property of the filesystem, docker does not virtualize fs.
BTW, the NFS mentioned here is fine in sync mode. However, that is slow.
This thread in the SQLite forum should be instructive: https://sqlite.org/forum/forumpost/90d6805c7cec827f
> WAL mode requires shared access to System V IPC mapped memory.
Incorrect. It requires access to mmap()
"The wal-index is implemented using an ordinary file that is mmapped for robustness. Early (pre-release) implementations of WAL mode stored the wal-index in volatile shared-memory, such as files created in /dev/shm on Linux or /tmp on other unix systems. The problem with that approach is that processes with a different root directory (changed via chroot) will see different files and hence use different shared memory areas, leading to database corruption."
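This is easy to see for yourself: once WAL mode is active and the database is used, the wal-index shows up as an ordinary `<db>-shm` file next to the database file, with no System V IPC involved:

```python
# Demonstrate that the WAL shared memory is just a file ("<db>-shm") created
# next to the database, per the docs quoted above.
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(db)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE t (x)")
conn.commit()

# While a connection is open, both the WAL and the mmapped wal-index exist.
print(sorted(os.listdir(os.path.dirname(db))))
# ['demo.db', 'demo.db-shm', 'demo.db-wal']
```

(SQLite removes the `-shm` and `-wal` files when the last connection closes cleanly, so check while the connection is still open.)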
> This is unlikely to work across containers.
I'd imagine the SQLite code would fail if that were the case. With k8s, at least, mounting the same storage into two containers in most configurations causes Kubernetes to co-locate both pods on the same node, so it should be fine.
It is far more likely they just fucked up the code and lost data that way...
> This is unlikely to work across containers.
Why not?
Ooh new historical Unix variant I had never heard of.. neat!
AIX is still supported and sold, so quite current?
Some that I used that are gone... Ultrix (MIPS), Clix, Irix, SunOS 4, SCO OpenServer, TI System V.
https://en.wikipedia.org/wiki/Ultrix
https://en.wikipedia.org/wiki/Intergraph
NeXTstep? (Leaving aside fun spitballing about whether Tahoe is morally OPENSTEP 26, and whether it was NeXT that actually bought Apple for negative $400 million...)
Alas, I never had access to any of the NeXT environments until PPC MacOS.
I did hold a copy in my hands for 486-class machines in the college bookstore.
Patient: doctor, my app loses data when I deploy twice during a 10 minute interval!
Doctor: simply do not do that
Doctor: solution is simple, stop letting that stupid clown Pagliacci define how you do your work!
Patient: but doctor,
pAIgliacci: as a large language model, I am unable to experience live comedy.
Bob Newhart did it best https://www.youtube.com/watch?v=LhQGzeiYS_Q
I'm fairly confident they let it write the blog post too.
"Not as a proof of concept. Not for a side project with three users. A real store" - suggestion for human writers, don't use "not X, not Y" - it carries that LLM smell whether or not you used an LLM.
And that's just the opening paragraph, the full text is rounded off with:
"The constraint is real: one server, and careful deploy pacing."
Another strong LLM smell, "The <X> is real", nicely bookends an obviously generated blog-post.
You're absolutely right, this was an AI post
Hey, Apple still takes their store down during product launches!
I assumed that it was to ensure that the announced products were revealed in a controlled manner rather than because they aren't able to do updates to their product listings as a regular thing.
My reading of the tea leaves is it started out as the latter and continues as the former as part of the “mystique”.
> Wait, you let _Claude_ push your e-commerce code straight to main which immediately results in a production deploy?
Yikes. Thanks, but I'm not going to read “Lessons learned” by someone this careless.
The issue wasn't caused by the AI but by their lack of architectural knowledge.
i hate to be so blunt but look around the site and then tell me you're surprised
I suspect they don't wear helmets or seatbelts either. Sigh. The "I'm so proud and ignorant of unnecessarily risky behaviors" meme is tiring.
The Meta dev model, where reviewed diffs merge into main (rebase style) after automated tests run, is pretty good.
Also, staging and canary environments, plus gradual, exponential prod deployment and rollback approaches, help de-risk change too.
Finally, have real, tested backups and restore processes (not replicated copies) and ability to rollback.