"while sharing the underlying hardware resources"? At the risk of sounding too positive, my guess is that hell will freeze over before that will work reliably. Alternating access between the running kernels is probably the "easy" part (DMA and command queues solve a lot of this for free), but I'm thinking more of all the hardware that relies on state-keeping and serialization in the driver. There's no way that e.g. the average usb or bluetooth vendor has "multiple interleaved command sequences" in their test setup.
I think Linux will have to move to a microkernel architecture before this can work. Once you have separate "processes" for hardware drivers, running two userlands side by side should be a piece of cake (at least compared to the earlier task of converting the rest of the kernel).
Will be interesting to see where this goes. I like the idea, but if I were to go in that direction, I would choose something like a Genode kernel to supervise multiple Linux kernels.
You just don't share certain devices, like Bluetooth. The "main" kernel will probably own the boot process and manage some devices exclusively. I think the real advantage is running certain applications isolated within a CPU subset, protected/contained behind a dedicated kernel. You don't have the slowdown of VMs, and you don't have to fight against the isolation sieve that is Docker.
That's fine for the isolation use case, but I don't think it works for things like kernel upgrades, since if the "main" kernel crashes or is supposed to get upgraded then you have to hand hardware back to it.

> since if the "main" kernel crashes or is supposed to get upgraded then you have to hand hardware back to it.
Isn't that similar to starting up from hibernate to disk? Basically all of your peripherals are powered off, so they probably cannot keep their state.
Also, you can actually stop a disk (a member of a RAID device), remove the PCIe-SATA HBA card it is attached to, replace it with a different one, and connect everything back together without any user-space application noticing.
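For the curious, here is a rough sketch of what that swap looks like from the software side, driving mdadm from Python. The device names (/dev/md0, /dev/sdb1, /dev/sdc1) are placeholders and the exact steps depend on the setup:

```python
# Hedged sketch: swap a RAID member out and back in with mdadm, while the
# filesystem on top of /dev/md0 stays mounted and in use the whole time.
# All device names below are placeholders.
import subprocess

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Take the old member out of the array.
run("mdadm", "/dev/md0", "--fail", "/dev/sdb1")
run("mdadm", "/dev/md0", "--remove", "/dev/sdb1")

# ... power down and physically replace the HBA / disk here ...

# Add the replacement; md rebuilds it in the background while user space
# keeps using /dev/md0 as if nothing happened.
run("mdadm", "/dev/md0", "--add", "/dev/sdc1")
```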
I trust hardware to mostly be reasonable when starting from off, but we're discussing the case where it's on and stays on while being handed from one kernel to another, and I don't trust it nearly as much in that case. I think the right comparison is kexec rather than hibernate, and while kexec often works, it can result in misbehaving hardware.
Many peripherals have a mechanism to reset the device, to get it back to a known good state. Generally device drivers will do this when they receive a message they don't understand from the device, or a command sent to the device times out without response.
Here's my graphics chip getting reset:
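In code terms the generic pattern looks roughly like this. This is a conceptual sketch only, not the log above and not any real driver API: send_command(), read_reply(), understands() and hard_reset() are invented stand-ins for whatever the real driver/firmware interface provides.

```python
# Conceptual sketch of the "reset on timeout or garbage" pattern: if the
# device sends something we don't understand, or a command times out, reset
# it back to a known-good state and retry. All device methods are hypothetical.
import time

class DeviceError(Exception):
    pass

def submit(dev, cmd, timeout=1.0, retries=3):
    """Send a command; if the device misbehaves, reset it and retry."""
    for _ in range(retries):
        try:
            dev.send_command(cmd)
            reply = dev.read_reply(timeout=timeout)  # raises DeviceError on timeout
            if not dev.understands(reply):           # unexpected/garbage message
                raise DeviceError("unexpected reply")
            return reply
        except DeviceError:
            # Device state is unknown: bring it back to a known-good state
            # and retry the command from scratch.
            dev.hard_reset()
            time.sleep(0.1)
    raise DeviceError(f"command {cmd!r} still failing after {retries} resets")
```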
The old kernel boots the new kernel, possibly in a "passive" mode, performs a few sanity checks of the new instance, hands over control, and finally shuts itself down.
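Very roughly, something like this ordering; pure pseudocode in Python form, none of these functions correspond to an existing kernel interface:

```python
# Rough sketch of the handover sequence described above. Every function here
# is hypothetical; only the ordering of the steps is the point.

def upgrade_kernel(old, new_image):
    # New kernel is started but stays "passive": it does not touch hardware yet.
    new = old.boot_secondary(new_image, mode="passive")

    # Old kernel sanity-checks the new instance before trusting it.
    if not new.passes_sanity_checks():
        new.shut_down()
        return old          # abort the upgrade, keep running the old kernel

    old.hand_over(new)      # serialize state, transfer device ownership
    old.shut_down()         # old kernel retires itself
    return new              # new kernel now owns the machine
```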
Is there anything that says that multiple kernels will be responsible for owning the drivers for the hardware? It could be that one kernel owns the hardware while the rest speak to the main kernel over a communication channel. That's also presumably why KHO (Kexec HandOver) is a thing: you have to hand state over when shutting down the kernel responsible for managing a driver.
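That model would look something like the sketch below; the channel and message format are invented purely for illustration:

```python
# Sketch of a "proxy" driver: a secondary kernel never touches the NIC itself,
# it forwards requests over an IPC/shared-memory channel to the kernel that
# actually owns the hardware. All names and the message format are invented.

class ProxyNic:
    def __init__(self, channel):
        self.channel = channel        # channel to the hardware-owning kernel

    def transmit(self, frame: bytes):
        # The owning kernel's real driver does the DMA and interrupt handling;
        # from here it is just a request/response protocol.
        self.channel.send({"op": "tx", "frame": frame})
        return self.channel.recv()    # e.g. {"status": "ok"}

    def receive(self):
        self.channel.send({"op": "rx"})
        return self.channel.recv().get("frame")
```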
Think "cloud providers"
Today, you can grab a physical NIC and create some number of virtual NICs. Same for GPUs.
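On Linux that is typically SR-IOV, and for a NIC it boils down to a sysfs write. A small sketch, assuming root and an SR-IOV-capable NIC/driver; the interface name "eth0" is a placeholder:

```python
# Sketch: carve virtual functions (VFs) out of an SR-IOV capable NIC by
# writing to its sriov_numvfs sysfs attribute. "eth0" is a placeholder.
from pathlib import Path

def create_vfs(iface: str, count: int) -> None:
    dev = Path(f"/sys/class/net/{iface}/device")
    total = int((dev / "sriov_totalvfs").read_text())
    if count > total:
        raise ValueError(f"{iface} supports at most {total} VFs")
    # Drivers require going back to 0 before changing a nonzero VF count.
    (dev / "sriov_numvfs").write_text("0")
    (dev / "sriov_numvfs").write_text(str(count))

# create_vfs("eth0", 4)  # each VF appears as its own PCI device / netdev,
#                        # which can then be handed to a guest (or a kernel)
```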
I guess the idea is that you have some hardware, and each kernel (read "virtual machine") will get its own slice of it: a subset of the CPUs and memory, one or more virtual NICs, a virtualized piece of the GPU, and so on.

Every kernel will mostly think it owns real hardware, while in fact it only deals with part of it (all of this thanks to the virtualized hardware support that can be found in many places).

This does not seem like a general-purpose feature that you would use on a laptop.
This is something that was actually implemented and used on multiple platforms, and it generally required careful development of all the interacting OSes. Resources that had to be multiplexed were handled through IPC between the running kernels; everything else was set to be exclusively owned by one of them.
This allowed cheap "logical partitioning" of machines without actually using a hypervisor or special hardware support.