I can recommend earlyoom (https://github.com/rfjakob/earlyoom). Instead of letting your system freeze or crash, this tool kills the memory-hogging process (in this case DuckDB) just in time. That lets you retry with smaller chunks of the dataset until it fits into your memory.
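If you want it running system-wide, a minimal setup looks roughly like this (a sketch; the package/service names assume a Debian-style distro, and -m is earlyoom's available-memory threshold flag from its README):

    sudo apt install earlyoom
    sudo systemctl enable --now earlyoom
    # or run it in the foreground, killing the worst offender once
    # available memory drops below 5% of total:
    earlyoom -m 5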
When there is a specific program I want to run with a limit on how much memory it is allowed to allocate, I have found systemd-run to work well.
It uses cgroups to enforce resource limits.
For example, there’s a program I wrote myself that I run on one of my Raspberry Pis. On rare occasions it would use up so much memory that I couldn’t even ssh into the Raspberry Pi anymore.
I run it like this:
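Roughly the following (a sketch; the binary path is a placeholder, and the 5G limit is the one I settled on below):

    sudo systemd-run --scope -p MemoryMax=5G /home/pi/my-program
    # without root, the same idea works against the user manager:
    systemd-run --user --scope -p MemoryMax=5G /home/pi/my-program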
The only difficulty I had was finding the right name for the MemoryMax=… part: the property has been renamed between versions, so different Linux systems may not use the same name for the limit. To figure out whether I had the right name, I tested candidates with a tiny limit that I knew was less than the program needs even under normal conditions. With the right name, the program was killed right off the bat, as expected. Then I could set the limit to 5G (five gigabytes) and be confident that if the program ever exceeds that, it gets killed instead of making my Raspberry Pi unreachable over ssh again.
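The test amounted to something like this (placeholder binary again; the MemoryLimit= name is the older cgroup-v1 spelling of the same property):

    # deliberately tiny limit: the program should be killed immediately
    # if this is the property name the system understands
    systemd-run --scope -p MemoryMax=10M /home/pi/my-program
    # on older cgroup-v1 systems the equivalent property was MemoryLimit=
    systemd-run --scope -p MemoryLimit=10M /home/pi/my-program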
Yeah, memory and thread management is the application’s job, not mine.
This looks amazing!
Have you used this in conjunction with DuckDB?
Yes, it works just fine.
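For example, something along these lines (the database path and the limit are illustrative):

    systemd-run --user --scope -p MemoryMax=4G duckdb mydb.duckdb

DuckDB also has its own knob (SET memory_limit='4GB';), but the cgroup limit is a nice backstop, since the kernel enforces it no matter what the process does.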