The choice of a pure-monolithic kernel is also interesting; I can buy that it's more secure, but having to recompile the kernel every time you change hardware sounds like it would be pretty tedious. Early days, though, so we'll see how that decision works out.

Why would you buy that it's more secure? Traditionally, in-kernel compositing in Windows was a constant source of security vulnerabilities. Sure, Rust may close off the obvious memory-corruption possibilities, but I'm not convinced.

As opposed to the Unix way, where a networked display server is used? Exposing something that doesn't need to be exposed over a network is oh-so-secure, right? It must be, because Linux does it, and everyone knows Linux is the be-all and end-all of operating systems...

But seriously, a lot of the design decisions Linux and other Unix-like systems make are horrible, poorly bolted onto a design from the '70s that has aged very badly. One of my goals with this project is to highlight that: to show how a system with a more modern design, derived from the metric ton of OS research done since the '70s, can be far better, and just how poorly designed and put together the million and one Unix clones actually are, no matter how much lipstick Unix diehards try to put on that pig.

I could go for something like MINIX, i.e. the microkernel architecture. If a driver dies, it gets "resurrected", and so forth.

Why? Faulty drivers shouldn't be restarted. I never understood that argument in favor of microkernels.

https://wiki.minix3.org/doku.php?id=www:documentation:featur... seems pretty appealing to me.

Read more about it here: https://wiki.minix3.org/doku.php?id=releases:3.2.0:developer...

> In Minix as a microkernel, device drivers are separate programs which send and receive messages to communicate with the other operating system components. Device drivers, like any other program, may contain bugs and could crash at any point in time. The Reincarnation Server will attempt to restart device drivers when it notices they are abruptly killed by the kernel due to a crash, or in our case when they exit(2) unexpectedly. You can see the Reincarnation Server in the process list as rs, if you use the ps(1) command. The Reincarnation Server periodically sends keep-alive messages to each running device driver on the system, to ensure they are still responsive and not, e.g., stuck in an infinite loop.

The point is that when failures do occur, they can be isolated and recovered from without compromising system stability. In a monolithic kernel, a faulty driver can crash the entire system; in a microkernel design, it can be restarted independently, preserving uptime and isolating the fault domain.

Hardware glitches, transient race conditions, and unforeseen edge cases are unavoidable at scale. A microkernel architecture treats these as recoverable events rather than fatal ones.

This is conceptually similar to how the BEAM VM handles supervision in Erlang and Elixir; processes are cheap and disposable, and supervisors ensure that the system as a whole remains consistent even when individual components fail. The same reasoning applies in OS design: minimizing the blast radius of a failure is often more valuable than trying to prevent every possible fault.
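For what it's worth, the pattern is easy to sketch in code. Here's a minimal user-space approximation of the supervision loop in Rust, assuming a hypothetical `disk-driver` binary; a real microkernel like MINIX does this over kernel IPC with keep-alive messages rather than plain process spawning:

```rust
use std::process::Command;
use std::thread;
use std::time::Duration;

// Minimal user-space sketch of the "reincarnation" idea: run a driver
// as a separate process and restart it whenever it dies. The
// "disk-driver" binary name is hypothetical.
fn main() {
    loop {
        let mut child = match Command::new("disk-driver").spawn() {
            Ok(c) => c,
            Err(e) => {
                eprintln!("failed to start driver: {e}");
                break;
            }
        };
        // Block until the driver exits, cleanly or via crash...
        let status = child.wait().expect("wait failed");
        eprintln!("driver exited with {status}; restarting in 1s");
        // ...then give the hardware a moment and resurrect it.
        thread::sleep(Duration::from_secs(1));
    }
}
```

The whole system keeps running while one faulty component cycles; that's the blast-radius argument in miniature.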

In practice, the "driver resurrection" model makes sense in environments where high availability and fault isolation are critical, such as embedded systems, aerospace, and critical infrastructure. It's the same philosophy that systems like seL4 and QNX go by.

Do you understand now?

Don't get me started on how an unreadable CD-ROM could block the unit forever in OpenVMS, just because.

I’m really not sure what I said that warranted this reaction.

I was literally talking about Microsoft moving the compositor, which lived inside the kernel in the old Windows 9x architecture, outside the kernel in Windows NT.

That literally every other kernel (OSS and commercial, Unix and not) does this separation suggests it's a generally accepted good security practice.

I'm not aware of any kernel research that alters the fundamental fact that in-kernel compositing is a big attack surface. The OS you are proposing isn't even pure Rust: it has C, assembly, and unsafe Rust thrown in, which suggests there's a non-trivial attack surface that isn't mitigated architecturally. AFAIK capability security won't help here with a monolithic design; you need a microkernel design to separate concerns and blast radii so the capabilities actually mean anything, so that an exploit in one piece of the kernel can't be a launching pad to broader exploits. And this ignores that even safe Rust has potential for exploits, since there are compiler soundness bugs in generated code; so even if you could write pure safe Rust at the OS level (which you can't), a monolithic kernel would still present issues.

TLDR: claiming that there's decades of OS research that existing kernels don't take advantage of is fair. Claiming that a monolithic kernel doesn't suffer architectural security challenges, particularly with respect to in-kernel compositing, is a bold statement that would be better supported by explaining how that research solves the security risks. Launching an ad hominem attack against a kernel family I didn't even mention is just a weird, defensive reaction.

What security risk exists in blitting together memory buffers and doing some alpha blending? Because that's all compositing is. Linux, Windows, and all the other popular OSes use memory regions shared between the kernel and userspace in ways that are far worse than assembling an image for display. Your supposed security concern is a total non-issue.

There's no possible way that data which will only ever be read as raw pixel data, Z-tested, alpha-blended, and then copied to a framebuffer can compromise security or allow unauthorized code to run at kernel privilege. It's impossible. These memory regions are never mapped as executable, and we use CPU features to prevent the kernel from executing, or even accessing, pages mapped as userspace pages unless they are explicitly shared with the kernel, i.e. double-mapped into the higher half. So there's literally an MMU preventing in-kernel compositing from even possibly being a security issue.
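Since we're arguing about what compositing actually involves, here's roughly what it boils down to in code: a per-pixel "over" blend across two ARGB buffers. The pixel format and names are illustrative, not this project's actual code:

```rust
// Sketch of what software compositing boils down to: read source pixels,
// alpha-blend them over the destination, write them back. Pixel format
// is 8-bit ARGB packed into a u32.
fn blend_channel(src: u32, dst: u32, alpha: u32) -> u32 {
    // Standard "source over" blend on one 8-bit channel.
    (src * alpha + dst * (255 - alpha)) / 255
}

fn composite_over(dst: &mut [u32], src: &[u32]) {
    for (d, &s) in dst.iter_mut().zip(src.iter()) {
        let a = s >> 24;
        let (sr, sg, sb) = ((s >> 16) & 0xff, (s >> 8) & 0xff, s & 0xff);
        let (dr, dg, db) = ((*d >> 16) & 0xff, (*d >> 8) & 0xff, *d & 0xff);
        let r = blend_channel(sr, dr, a);
        let g = blend_channel(sg, dg, a);
        let b = blend_channel(sb, db, a);
        *d = 0xff00_0000 | (r << 16) | (g << 8) | b;
    }
}

fn main() {
    let mut dst = vec![0xff00_00ff_u32; 4]; // opaque blue background
    let src = vec![0x80ff_0000_u32; 4];     // half-transparent red layer
    composite_over(&mut dst, &src);
    println!("{:08x}", dst[0]);             // ~50/50 red/blue mix
}
```

Pure arithmetic over byte buffers; the attacker-controlled data is only ever read as numbers.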

I'm not an expert, but I believe the challenges are:

* GPU compositing: once you try to composite on the GPU, things get more complicated. You mention you have no interest in GPU compositing, but that's quite a rare stance.

* buffer confusion: a lot of these exploits come from confusing the kernel about which buffer to use as input or output, after which all sorts of mayhem ensues (e.g. handing it an input buffer from a different process so the kernel renders another process's crypto key to the screen, or arranging for it to clobber kernel buffers). See the sketch after this list.

* stability: a bug in the compositor panics the entire machine instead of the compositor gracefully restarting.
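For the second point, here's a sketch of the ownership check a kernel compositor has to get right; the handle table and types are hypothetical, just to make the failure mode concrete:

```rust
use std::collections::HashMap;

// Hypothetical kernel-side table mapping buffer handles to the process
// that actually owns them. The exploit class described above works by
// getting the kernel to accept a handle the caller doesn't own.
type Pid = u32;
type Handle = u32;

struct Buffer {
    owner: Pid,
    pixels: Vec<u32>,
}

struct HandleTable {
    buffers: HashMap<Handle, Buffer>,
}

impl HandleTable {
    // The kernel must check ownership before using a buffer as a
    // compositing source or destination; skipping this check is the
    // "wrong buffer" confusion described above.
    fn lookup(&self, caller: Pid, h: Handle) -> Option<&Buffer> {
        self.buffers.get(&h).filter(|b| b.owner == caller)
    }
}

fn main() {
    let mut table = HandleTable { buffers: HashMap::new() };
    table.buffers.insert(7, Buffer { owner: 42, pixels: vec![0; 16] });
    // Caller 42 owns handle 7; caller 13 does not.
    assert!(table.lookup(42, 7).is_some());
    assert!(table.lookup(13, 7).is_none());
}
```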

But ultimately you’re the one claiming you’re the domain expert. You should be explaining to me why other OSes made the choices they did and why they’re no longer relevant.

Why would you need to recompile if hardware changes? Linux manages just fine as a monolithic kernel that ships with support for many devices in the same kernel build.

It's true that you can compile everything in, but it's not really standard practice. On a stock distro you have dozens of dynamically loaded modules.

OpenBSD removed support for loadable modules. Hardware today is big enough that compiling everything in is fine, and we don't need a ton of fiddly code to put a special-purpose linker into the kernel. Saving a bit of memory isn't worth the risk.

Even a fully loaded kernel with loads of drivers isn't that big, and not all of it has to be resident in memory at all times. Code in general is minuscule compared to data, and most of a kernel's data isn't baked into the executable. This kernel in particular has very thin drivers that only abstract real devices into generic device-class interfaces that userspace deals with directly; that's the part inspired by exokernels and hypervisor paravirtualization. It means drivers for this kernel will be even smaller than those for other kernels like Linux.
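As a sketch of what a "thin driver behind a generic device-class interface" might look like (the trait and names are my guesses, not the project's actual API):

```rust
// Sketch of the "thin driver" idea: the kernel driver only exposes a
// generic block-device interface; policy (filesystems, caching) lives
// in userspace. Trait and names are illustrative guesses.
trait BlockDevice {
    /// Size of one sector in bytes.
    fn sector_size(&self) -> usize;
    /// Read one sector into `buf` (must be sector_size() bytes long).
    fn read_sector(&self, lba: u64, buf: &mut [u8]) -> Result<(), ()>;
    /// Write one sector from `buf` (must be sector_size() bytes long).
    fn write_sector(&mut self, lba: u64, buf: &[u8]) -> Result<(), ()>;
}

// A concrete driver just translates these calls into hardware commands
// (e.g. NVMe queue submissions); everything else is userspace's problem.
struct RamDisk {
    sectors: Vec<[u8; 512]>,
}

impl BlockDevice for RamDisk {
    fn sector_size(&self) -> usize { 512 }
    fn read_sector(&self, lba: u64, buf: &mut [u8]) -> Result<(), ()> {
        let s = self.sectors.get(lba as usize).ok_or(())?;
        buf.copy_from_slice(s);
        Ok(())
    }
    fn write_sector(&mut self, lba: u64, buf: &[u8]) -> Result<(), ()> {
        let s = self.sectors.get_mut(lba as usize).ok_or(())?;
        s.copy_from_slice(buf);
        Ok(())
    }
}

fn main() {
    let mut disk = RamDisk { sectors: vec![[0u8; 512]; 8] };
    let mut buf = [0u8; 512];
    disk.write_sector(3, &[1u8; 512]).unwrap();
    disk.read_sector(3, &mut buf).unwrap();
    assert_eq!(buf[0], 1);
}
```

With an interface that thin, the in-kernel part of each driver stays tiny, which is the size argument above.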

A monolithic kernel and resource locators that automatically mount network drives? That's just macOS.

(You don't have to recompile the kernel if you put all the device drivers in it, just keep the object files around and relink it.)

Incremental compilation makes that a lot less heavyweight than you would think, and the idea is to automate the process so the average non-technical user doesn't need to know or care how it works.