This seems like a better introduction than the kernel repo itself: https://github.com/charlotte-os/.github/blob/main/profile/RE...
> URIs as namespace paths allowing access to system resources both locally and on the network without mounting or unmounting anything
This is such an attractive idea, and I'm gonna give it a try just because I want something with this idea to succeed. Seems the project has many other great ideas too, like the modular kernel where implementations can be switched out. Gonna be interesting to see where it goes! Good luck author/team :)
Edit: This part scares me a bit, though: "Graphics Stack: compositing in-kernel". Isn't that potentially a huge security hole? Maybe the capability-based security model keeps it from being a big issue; I'm not sure, because I don't think I understand those parts, or the design as a whole, deeply enough.
The choice of a pure-monolithic kernel is also interesting; I can buy that it's more secure, but having to recompile the kernel every time you change hardware sounds like it would be pretty tedious. Early days, though, so we'll see how that decision works out.
Why would you buy that it's more secure? Traditionally, in Windows, in-kernel compositing was a constant source of security vulnerabilities. Sure, Rust may help with the obvious memory-corruption possibilities, but I'm not convinced.
As opposed to the Unix way, where a networked display server is used? Exposing something that doesn't need to be exposed over a network is oh-so-secure, right? It must be, because Linux does it, and everyone knows Linux is the be-all and end-all of operating systems...
But seriously, a lot of the design decisions Linux and other Unix-like systems make are horrible, poorly bolted onto a design from the 70s that has aged very badly. One of my goals with this project is to highlight that by showing how a system with a more modern design, derived from the metric ton of OS research done since the 70s, can be far better, and just how poorly designed and put together the million and one Unix clones actually are, no matter how much lipstick Unix diehards try to put on that pig.
I could go for something like MINIX, i.e. the microkernel architecture. If a driver dies, it gets "resurrected", and so forth.
Why? Faulty drivers shouldn't be restarted. I never understood that argument in favor of microkernels.
https://wiki.minix3.org/doku.php?id=www:documentation:featur... seems pretty appealing to me.
Read more about it here: https://wiki.minix3.org/doku.php?id=releases:3.2.0:developer...
> In Minix as a microkernel, device drivers are separate programs which send and receive messages to communicate with the other operating system components. Device drivers, like any other program, may contain bugs and could crash at any point in time. The Reincarnation Server will attempt to restart device drivers when it notices they are abruptly killed by the kernel due to a crash, or in our case when they exit(2) unexpectedly. You can see the Reincarnation Server in the process list as rs, if you use the ps(1) command. The Reincarnation Server periodically sends keep-alive messages to each running device driver on the system, to ensure they are still responsive and not, e.g., stuck in an infinite loop.
The point is that when failures do occur, they can be isolated and recovered from without compromising system stability. In a monolithic kernel, a faulty driver can crash the entire system; in a microkernel design, it can be restarted independently, preserving uptime and isolating the fault domain.
Hardware glitches, transient race conditions, and unforeseen edge cases are unavoidable at scale. A microkernel architecture treats these as recoverable events rather than fatal ones.
This is conceptually similar to how the BEAM VM handles supervision in Erlang and Elixir; processes are cheap and disposable, and supervisors ensure that the system as a whole remains consistent even when individual components fail. The same reasoning applies in OS design: minimizing the blast radius of a failure is often more valuable than trying to prevent every possible fault.
In practice, the "driver resurrection" model makes sense in environments where high availability and fault isolation are critical, such as embedded systems, aerospace, and critical infrastructure. It's the same philosophy that systems like seL4 and QNX follow.
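As a rough user-space analogy of that supervision pattern (a hypothetical sketch in Rust, not MINIX's actual mechanism): each driver runs in its own isolated context, and a supervisor restarts it when it dies.

```rust
use std::thread;
use std::time::Duration;

// Hypothetical "reincarnation server" loop: each driver runs as an
// isolated task; if it panics, the supervisor restarts it instead of
// letting the fault take the whole system down.
fn supervise<F>(name: &'static str, driver: F)
where
    F: Fn() + Copy + Send + 'static,
{
    thread::spawn(move || loop {
        // Run the driver in its own thread so a panic is contained there.
        let handle = thread::spawn(driver);
        if handle.join().is_err() {
            eprintln!("driver '{name}' crashed; restarting");
            thread::sleep(Duration::from_millis(100)); // simple backoff
        } else {
            break; // clean exit: no restart needed
        }
    });
}

fn main() {
    supervise("net0", || {
        // Toy driver body that fails; in a real microkernel this would be
        // a separate user-space process exchanging IPC messages.
        panic!("transient fault");
    });
    thread::sleep(Duration::from_secs(1));
}
```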
Do you understand now?
Don't get me started on how an unreadable CD-ROM could block the unit forever in OpenVMS, just because.
I’m really not sure what I said that warranted this reaction.
I was literally talking about Microsoft moving the compositor that was inside the kernel in their old Windows 9x architecture out of the kernel in Windows NT.
That literally every other kernel (OSS and commercial, Unix and not) does this separation suggests it is a generally accepted good security practice.
I’m not aware of any kernel research that alters the fundamental fact that in-kernel compositing is a big security risk surface. The OS you are proposing isn't even pure Rust - it's got C, assembly, and unsafe Rust thrown in, which suggests there's a non-trivial attack surface that isn't mitigated architecturally. AFAIK capability security won't help here with a monolithic design; you need a microkernel design to separate concerns and blast radii for the capabilities to mean anything, so that an exploit in one piece of the kernel can't be a launching pad to broader exploits. This is also ignoring that even safe Rust has potential for exploits, since there are compiler soundness bugs in generated code - so even if you could write pure safe Rust (which you can't at the OS level), a monolithic kernel would present issues.
TL;DR: claiming that there are decades of OS research to improve on that existing kernels don't take advantage of is fair. Claiming that a monolithic kernel doesn't suffer architectural security challenges, particularly with respect to in-kernel compositing, is a bold statement that would be better supported by explaining how that research solves the security risks. Launching an ad hominem attack against a different kernel family than the one I mentioned is just a weird, defensive reaction.
What security risk exists in blitting together memory buffers and doing some alpha blending? Because that's all compositing is. And Linux, Windows, and all the other popular OSes use memory regions shared between the kernel and userspace in ways that are far worse than putting together an image to display. Your supposed security concern is a total non-issue.
There's no possible way that data which will only ever be read as raw pixel data, Z-tested, alpha-blended, and then copied to a framebuffer can compromise security or allow any unauthorized code to run at kernel privilege level. It's impossible. These memory regions are never mapped as executable, and we use CPU features to prevent the kernel from ever executing, or even being able to access, pages that are mapped as userspace pages and not explicitly mapped as shared memory with the kernel, i.e. double-mapped into the higher half. So there's literally an MMU preventing in-kernel compositing from even possibly being a security issue.
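For concreteness, here is a minimal sketch of the blend step in question (plain source-over alpha blending; hypothetical code, not the project's actual implementation):

```rust
// Minimal sketch of "all compositing is": source-over alpha blending of
// one RGBA pixel onto an opaque destination pixel. The buffers are only
// ever read as data, never executed.
fn blend_over(dst: [u8; 4], src: [u8; 4]) -> [u8; 4] {
    let a = src[3] as u32;
    let inv = 255 - a;
    let mix = |s: u8, d: u8| (((s as u32) * a + (d as u32) * inv) / 255) as u8;
    [
        mix(src[0], dst[0]),
        mix(src[1], dst[1]),
        mix(src[2], dst[2]),
        255, // framebuffer pixels are opaque
    ]
}
```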
I’m not an expert, but I believe the challenges are:
* When you try to do GPU compositing, things get more complicated. You mention you have no interest in GPU compositing, but that's quite rare.
* A lot of such exploits come from confusing the kernel about which buffer to use as input/output, and then all sorts of mayhem ensues (e.g. giving it an input buffer from a different process so the kernel renders another process's crypto key to the screen, or arranging for it to clobber some kernel buffers).
* Stability: a bug in the compositor panics the entire machine instead of gracefully restarting the compositor.
But ultimately you’re the one claiming you’re the domain expert. You should be explaining to me why other OSes made the choices they did and why they’re no longer relevant.
Why would you need to recompile if hardware changes? Linux manages just fine as a monolithic kernel that ships with support for many devices in the same kernel build.
It's true that you can compile everything in but it's not really the standard practice. On a stock distro you have dozens of dynamic modules loaded.
OpenBSD removed support for loadable modules. Hardware today is big enough that compiling everything in is fine, and we don't need a ton of fiddly code to put a special-purpose linker into the kernel. Saving a bit of memory isn't worth the risk.
Even a fully loaded kernel with loads of drivers isn't that big, and not all of it has to be resident in memory at all times. Code in general is minuscule compared to data, and most of a kernel's data isn't baked into the executable. This kernel in particular has very thin drivers that only abstract real devices into generic device-class interfaces that userspace deals with directly; that's the part inspired by exokernels and hypervisor paravirtualization. It means drivers for this kernel will be even smaller than those for other kernels like Linux.
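To illustrate that "thin driver" idea (a hypothetical device-class interface, not the project's actual API), the kernel-side driver would only map hardware onto something like this, and userspace would talk to that generic interface directly:

```rust
// Hypothetical exokernel-style device-class interface: the in-kernel
// driver only translates a real device into this generic block-device
// shape; policy and higher-level abstractions live entirely in userspace.
trait BlockDevice {
    const SECTOR_SIZE: usize;
    fn read_sector(&mut self, lba: u64, buf: &mut [u8]) -> Result<(), ()>;
    fn write_sector(&mut self, lba: u64, buf: &[u8]) -> Result<(), ()>;
}
```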
A monolithic kernel and resource locators that automatically mount network drives? That's just macOS.
(You don't have to recompile the kernel if you put all the device drivers in it, just keep the object files around and relink it.)
Incremental compilation makes that a lot less heavyweight than you would think, and the idea is to automate the process so the average non-technical user doesn't need to know or care how it works.
OP here.
The plan is to hand out panes, which are just memory buffers to which applications write pixel data as they would on a framebuffer. When the kernel goes to refresh the display, it composites any visible panes onto the back buffer and then swaps buffers. There is nothing unsafe about that, any more than any other use of memory regions shared between the kernel and userspace, and those are quite prolific in existing popular OSes.
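If I'm reading that right, the refresh path would look roughly like this (a hypothetical sketch with made-up types, assuming panes are pre-clipped to the framebuffer):

```rust
// Hypothetical sketch of the described refresh path: each pane is a plain
// pixel buffer shared with one application; on refresh, the kernel blits
// the visible panes onto the back buffer, then presents it.
struct Pane {
    x: usize,
    y: usize,
    width: usize,
    height: usize,
    pixels: Vec<u32>, // app-written ARGB pixels, row-major
    visible: bool,
}

fn refresh(panes: &[Pane], back: &mut [u32], fb_width: usize) {
    for pane in panes.iter().filter(|p| p.visible) {
        for row in 0..pane.height {
            // Copy one row of the pane into the back buffer (assumes the
            // pane has already been clipped to the framebuffer bounds).
            let src = &pane.pixels[row * pane.width..(row + 1) * pane.width];
            let off = (pane.y + row) * fb_width + pane.x;
            back[off..off + pane.width].copy_from_slice(src);
        }
    }
    // swap_buffers(back); // then present the back buffer (not shown)
}
```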
If anything, the Unix display server nonsense is overly convoluted and far worse security-wise.
Does this mean that window management has to be handled in the kernel? Or is there some process that tells the kernel where those panes should be relative to one another/the framebuffer?
That's tentatively going to be handled in-kernel unless there's a good reason to do otherwise. The idea is to expose low-level hardware interfaces across the board, and this seemed to be the best way to multiplex actual hardware framebuffers while still keeping things low level.
From there, each application can draw its own GUI and respond to events that happen in its panes, like a mouse-button-down event while the cursor is at some coordinates, using event capabilities. What any event or the contents of a pane mean to the application doesn't matter to the OS; the application has full control over all of its resources and its execution environment, with the exception of not being allowed to do anything that could harm any other part of the system outside its own process abstraction. That's my rationale for why the display system and input events should work that way. Plus, it helps latency to keep all of that in the kernel, especially since we're doing all the rendering on the CPU and are thus bottlenecked by the CPU's memory bus, which has far lower throughput than a discrete GPU's. But that's the way it has to be, since there are basically no GPUs out there with full publicly available hardware documentation as far as I know, and believe me, I've looked far and wide and asked around. Eventually I'll want to port Mesa, because redoing all the work to develop something that complex and huge just isn't pragmatic.
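To make that event model concrete, here's one hypothetical encoding (my own illustration, not the project's API):

```rust
// Hypothetical encoding of pane input events: the kernel only reports raw
// facts (what happened, where); what the event means is entirely up to
// the application holding the corresponding event capability.
enum PaneEvent {
    MouseDown { button: u8, x: u32, y: u32 },
    MouseUp { button: u8, x: u32, y: u32 },
    KeyDown { scancode: u16 },
    KeyUp { scancode: u16 },
}
```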
In practice, the problem with URIs is that they make parsing very complex. You don't really want a parser of that complexity in the kernel if you can avoid it, for performance reasons if nothing else. For low-level resource management, an ad-hoc, much simpler standard would be significantly better.
Chuck Multiaddr in there (https://multiformats.io/multiaddr/), can be used for URLs, file paths, network addresses, you name it. Easy to parse as well.
You could use a subset of easily parseable URIs.
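For instance, a hypothetical restricted grammar (scheme, authority, path, and nothing else: no query strings, fragments, or percent-encoding) keeps the kernel-side parser down to a couple of string splits rather than a full RFC 3986 parser:

```rust
// Sketch of parsing a restricted "scheme://authority/path" subset.
// Returns (scheme, authority, path), or None if malformed.
fn parse_resource(uri: &str) -> Option<(&str, &str, &str)> {
    let (scheme, rest) = uri.split_once("://")?;
    let (authority, path) = match rest.split_once('/') {
        Some((a, p)) => (a, p),
        None => (rest, ""),
    };
    Some((scheme, authority, path))
}

// parse_resource("nvme://local/vol0/etc/config")
//   == Some(("nvme", "local", "vol0/etc/config"))
```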
Recompiling the whole kernel just to change drivers seems like a deal-breaker for wider adoption.
Recompiling (or at least relinking) the kernel to change drivers (or even system configuration) is a bit of a blast from the past. From the 1960s through the 1980s it was a very common thing, called "system generation". It was found in mainframe operating systems (e.g. OS/360, OS/VS1, OS/VS2, DOS/360), in CP/M, and in NetWare 2.x (3.x onwards dropped the need for it).
Most of these systems came with utilities to partially automate the process and some kind of config file to drive it; NetWare 2.x even had TUI menuing apps (ELSGEN, NETGEN) to assist with it.
Not just old stuff like that, either: at minimum, also all the SCO Xenixes and Unixes up to the technically current OSR5, OSR6, and UnixWare. I don't know other (commercial) Unixes as well as SCO, but given where they all come from, I assume Solaris and most of the other commercial Unixes that still technically exist today have something at least somewhat similar.
The sysadmin scripts would even relink just to change the IP address of the NIC! (I no longer remember the details, but I think I eventually dug under the hood and figured out how you could edit a couple of files and merely reboot without actually relinking a new kernel. If you only followed the normal directions in the manual, though, you would use scoadmin, and it would relink and reboot.) And this is not because SCO sucked. Sure, they did, but relinking was actually more or less normal and not part of why they sucked.
Change anything about which drives are connected to which SCSI hosts on which SCSI IDs? Fuggeddabouddit. Not only relink and reboot, but also pray, and have a bootable floppy and a cheat sheet of boot: parameters ready.
Quite common in Linux's early days, too.
It's also the only approach for systems where people advocate statically linking everything; yet another reason why dynamic loading became a thing.
If this kernel ever gets big enough where this might matter, I'm sure they can change the design. Nothing is set in stone forever and for the foreseeable future it's unlikely to matter.
If there's enough demand for dynamic kernel modules, they can be added later. That's not a feature you have to build your whole kernel around from the start. Linux definitely didn't, but it has them now, so it's definitely something that can be revisited or even made an opt-in feature.
Why? It can be fully automated just like dynamic module download and loading are.
Incremental compilation means you don't have to recompile everything: just compile the new driver as a library, relink the kernel, and you're done. Keep the prior n working kernels around in case the new one doesn't work.
Wish OP had put that as the main readme.
The intro page is currently useless.
To be fair, the submission URL goes to the kernel specifically, so the README is good considering the repository it's in. The link I put earlier I found via the GitHub organization, which does give you an overview of the OS as a whole (not just the kernel): https://github.com/charlotte-os/
In theory, wouldn't it be possible for the Linux kernel to also provide a URI "auto mount" extension?
This looks like a very interesting project! Good luck to the team.
Thanks. And there isn't much of a permanent team so far so if anyone wants to help then I'd be happy to hear from them on our Discord, Matrix or by email at charlotte-os@outlook.com.
I believe Redox is doing the same (the everything-as-a-URI part).
Skimming https://doc.redox-os.org/book/scheme-rooted-paths.html and https://doc.redox-os.org/book/schemes.html , I think they've slightly reworked that to a more-unixy approach, but yeah still fundamentally more URI than traditional VFS
I don't think that's changed, it's just that /foo is an alias for /scheme/file/foo.
You could roughly emulate it on Unix by assuming every filename starting /scheme/bar/ is a bar-type (special) file, but nothing stops you creating (and you'd necessarily have) 'files' of any type outside that. In Redox, everything has that scheme prefix describing its type (and if omitted, it's implicitly /scheme/file/).
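In other words, roughly this mapping (illustrative only, based on the alias described above):

```rust
// Illustrative only: treat a path without an explicit scheme prefix as
// shorthand for /scheme/file/<path>, as in Redox's scheme-rooted paths.
fn canonicalize(path: &str) -> String {
    if path.starts_with("/scheme/") {
        path.to_string()
    } else {
        format!("/scheme/file{path}")
    }
}

// canonicalize("/foo") == "/scheme/file/foo"
```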
I’m working on one with a completely new hardware, comms, networking, and infra stack - everything.