I built something like this at work using plain Docker images. Can you help me understand your value prop a little better?
The memory forking seems like a cool technical achievement, but I don't understand how it benefits me as a user. If I'm delegating the whole thing to the AI anyway, I care more about deterministic builds so that the AI can tackle the problem.
So first: a MicroVM != a container, and a container is not a secure isolation boundary. I would not run untrusted containers on your nodes without extra hardening.
The memory forking was originally built because for AI app builders and first-response-driven applications it's extremely important that they are instant (the difference between running `bun dev` and the dev server already being up).
However, it's much more generally applicable. Postgres is a great example: you can't fork the filesystem under Postgres and get consistency, because dirty pages live in the process's memory. Same thing with browser state, weird server state, or anything else that exists only in memory. Memory forking gives a huge performance boost while snapshotting what's actually going on at one instant.
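To make the Postgres point concrete, here's a toy sketch (names and structure are mine, not from any real system): committed rows sit on "disk", dirty rows sit in process memory. A filesystem-only snapshot silently drops the dirty rows; a memory-level fork captures both at the same instant.

```python
import copy

class ToyDB:
    """Toy stand-in for Postgres: flushed rows on 'disk', dirty rows in memory."""
    def __init__(self):
        self.disk = []    # rows already written to the data file
        self.cache = []   # dirty rows that exist only in process memory

    def insert(self, row):
        self.cache.append(row)

    def flush(self):
        self.disk.extend(self.cache)
        self.cache.clear()

    def rows(self):
        return self.disk + self.cache

db = ToyDB()
db.insert("a")
db.flush()
db.insert("b")                 # dirty: lives only in memory

fs_snapshot = list(db.disk)    # filesystem-only snapshot: misses the cache
mem_fork = copy.deepcopy(db)   # memory fork: whole process state at one instant

print(fs_snapshot)      # ['a']        -- 'b' is silently lost
print(mem_fork.rows())  # ['a', 'b']   -- consistent view
```

A real database would detect the torn state on restore and replay its WAL; the point is that the filesystem alone never held a consistent picture to begin with.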
What does this protect you from that you’re exposed to by running a well-crafted rootless container on a system with SELinux or similar?
On the security side: mainly kernel-level attacks and noisy-neighbor performance impacts.
On the functional side, without a kernel per guest you can't safely expose kernel access for stuff like eBPF, custom networking, nested virtualization, and lots of other important features.
Here is a good blog post from Docker explaining how even the best container is not as safe as a MicroVM: https://www.docker.com/blog/containers-are-not-vms/
Theoretically you can get fairly complete security via containers + a gVisor setup, but at the expense of a ton of syscall performance and disabling lots of features (which is a 100% valid approach for many use cases).
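The syscall tax is easy to see for yourself. A throwaway micro-benchmark sketch (the function name and iteration count are mine): run it under plain runc and again under gVisor's runsc, where each syscall is intercepted and emulated in user space, and compare the rates. Absolute numbers are entirely environment-dependent.

```python
import os
import time

def syscall_rate(n=100_000):
    """Return cheap-syscall throughput (calls/sec).

    Run inside a runc container and a runsc (gVisor) container to
    compare; gVisor's user-space syscall interception lowers the rate.
    """
    start = time.perf_counter()
    for _ in range(n):
        os.stat("/")  # one stat() syscall per iteration
    elapsed = time.perf_counter() - start
    return n / elapsed

print(f"{syscall_rate():,.0f} stat() calls/sec")
```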