It doesn't seem to sit at a deep enough layer that it could be used to test Kubernetes and CRD updates against a cluster that hasn't been updated yet?

Hi everyone! I’m one of the maintainers of K3k at SUSE.

It’s really exciting to see this on the front page. The project actually started during a SUSE Hackweek by my colleague Hussein. It was initially envisioned as a "Kubernetes version of k3d," but it evolved into something more ambitious and eventually became a real product. We’ve always been big believers in the power of open source. For the current default "shared" mode, we even experimented with Virtual Kubelet, another CNCF project, during our development process.

I’ll be hanging around the thread today, so if you have any questions about the history, the tech stack, or where we're headed next, feel free to ask!

Missed the opportunity to call it Kink ...

This is, if I had to guess, a monument to a small team's stubborn insistence that such a thing could be done at all. If I can hope for a reward for them, may it be that they are allowed to hand off maintaining it to another team.

So this is basically vCluster[0] but Rancher branded?

[0] https://github.com/loft-sh/vcluster

Thanks. I knew I'd seen this idea before but couldn't remember the project name.

Closely related in purpose, yes. Branded? No.

This type of approach carries significantly higher operational risk than running multiple Kubernetes clusters on separate VMs or physical hardware. If an update to the host cluster that manages the virtual clusters goes wrong, you can bring down your entire fleet of clusters at once.

I don't think this is intended for production

Hacker News sure does love posting links to random GitHub repos with no context for why they were posted, and then a bunch of comments come along and basically ask why.

Since I do have context: the original Rancher Labs CTO created k3s, one of the earliest severely stripped-down versions of Kubernetes, which bundles all of the required executables into a single multi-call binary in order to run Kubernetes on a Raspberry Pi. Along the lines of kind, k3d was released to run k3s in Docker containers instead of full Linux hosts. The main use case is testing; we used it extensively in the early days of the Air Force and IC cloud migrations, which insisted on rehosting all systems in Kubernetes, so developers could have local targets to work with.

Rancher eventually rebuilt its Kubernetes engine when Docker fell out of favor and based rke2 on k3s, but with the Kubernetes components as static pods instead of an embedded multi-call binary, and with kubelet and containerd extracted from an embedded virtual filesystem to the host when rke2 first runs.

When KubeVirt came out, Rancher also released an HCI product that uses it, Harvester, running on top of rke2 and Rancher's storage project Longhorn. This runs a full virtual machine manager with virtualized networking and storage, à la ESXi, vSAN, and vSphere, with Multus and the bridge CNI plugin providing the networking (it now has KubeOVN as well).

Harvester relies on being imported to and managed by Rancher to have things like SSO and Rancher's multi-cluster RBAC and node provisioners for Harvester to run guest clusters. A whole lot of customers migrating off of VMWare since the Broadcom acquisition want all of that, but without necessarily having an external Rancher. Early on, Harvester offered an experimental vCluster addon that created a guest cluster with Rancher installed on it and that automatically managed Harvester.

This had a lot of problems. I'm not going to rehash them because I don't want to come across as bashing vCluster, but it was not a tenable long-term option, and it crashed hard for most who tried to use it. Since Rancher already had k3d, it was a pretty natural step to create their own virtualized Kubernetes that runs in Kubernetes by adapting k3d into k3k, which runs k3s in Kubernetes rather than in Docker. Now you can get a guest cluster to install Rancher onto, with the full suite of Rancher features and a much better experience than the bare Harvester UI, without needing to run full VMs.

Why not just install Rancher directly onto the same rke2 cluster that runs Harvester itself? Because it already has one, but that instance was treated as an implementation detail, used by developers to bootstrap without duplicating work that was already done, and never meant to be exposed to users. If you try to install a second Rancher to actually use, it will conflict with a whole bunch of resources that already exist and won't work.

It's a tangled mess of confusing layers, but that's the world we live in. It's why we still have IPv4, VLAN, VXLAN, virtual terminals, discretionary access control for Linux. We build on top of what is already there instead of rebuilding from scratch in a saner way. This isn't just how software works. It's why city designs rarely make sense. It's why life itself has vestigial anti-features. Cruft rarely disappears. It just gets buried underneath whatever comes next.

Do Rancher side products generally reach a stable enough state that you would want to run mission-critical systems on them?

(Former employee) They tend to either get enough traction very quickly and be supported for years, or not and be abandoned in weeks/months.

RKE (their Kubernetes deployment and management platform, mostly for various flavours of self-managed environments) is pretty popular with the self-managed crowd that needs something to manage their on-prem Kubernetes clusters.

I don’t understand how they are separating security in the virtual mode as they only mention pods. It seems every workload still shares the underlying node, even when in virtual mode. Take for example the OCI cache on the nodes. What about cache poisoning?

In virtual mode, the only pods running directly on the host are the K3s servers and agents. All "virtual cluster pods" run within these components, meaning they do not appear as individual pods on the host cluster.

The only trade-off is that K3s currently requires privileged mode to operate. We are actively exploring ways to address this limitation and improve security, such as implementing user namespaces or microVMs.
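
To make that trade-off concrete, here is a rough sketch with the official Python kubernetes client of what it implies on the host side; the container name and image tag are illustrative, not copied from k3k:

    from kubernetes import client

    # What "requires privileged mode" means in practice: the K3s
    # server/agent containers on the host run with privileged=True,
    # so they can do whatever the node's kernel allows.
    k3s_server = client.V1Container(
        name="k3k-server",    # illustrative name
        image="rancher/k3s",  # illustrative; a real deployment pins a tag
        security_context=client.V1SecurityContext(privileged=True),
    )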

Thank you for your feedback.

I understood that from the host cluster perspective you won't see the child cluster pods. But what does it look like from the node perspective?

Can you have, say, a host cluster running on its own host nodes, where the host cluster controls spawning separate physical nodes that contain the child cluster (API server) + workload pods?

As I understand it, the virtual cluster pods are treated as standard workloads by the host. This means if you scale the nodes up or down, they will be rescheduled accordingly. You can currently use node selectors to manage this behavior, though we are developing a more flexible approach using affinity rules.
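
I don't know k3k's exact CRD fields offhand, but in plain Kubernetes terms the mechanism is just standard scheduling applied to the server/agent pods. A sketch with the official Python kubernetes client, using a made-up node label:

    from kubernetes import client

    # The virtual cluster's server/agent pods are ordinary host workloads,
    # so a plain node selector can pin them to a labeled node pool.
    pod_spec = client.V1PodSpec(
        containers=[client.V1Container(name="k3k-agent", image="rancher/k3s")],
        node_selector={"pool": "tenant-a"},  # illustrative label
    )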

Thank you

Aren't OCI caches content-addressed?

I was thinking that if people were to use an image…:$my_tag on the host cluster, and some rogue pod on the child cluster (but on the same underlying physical nodes) somehow overwrote the locally cached :$my_tag, you could do something on the parent cluster.

But I don't fully understand what you meant by content-addressed :)

Maybe one has to ensure in the host cluster that the image pull policy is set to Always, or that all image references use the sha256 digest rather than tags.
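
For what it's worth, both mitigations are plain Kubernetes; a minimal sketch with the official Python kubernetes client (registry and digest are placeholders):

    from kubernetes import client

    hardened = client.V1Container(
        name="app",
        # Digest reference: content-addressed, so a poisoned tag in a
        # shared node cache can't be substituted without failing the
        # manifest hash check.
        image="registry.example.com/app@sha256:<digest>",  # placeholder
        # Re-resolve against the registry on every start instead of
        # trusting whatever is cached on the node.
        image_pull_policy="Always",
    )

That is also what "content-addressed" buys you: a digest names the exact bytes, while a tag is just a mutable pointer.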

What does k3k stand for? Can we just put whatever number we want between 2 letters now?

Disclosure: I work for SUSE on Rancher.

It's Kubernetes in Kubernetes, and a reference to k3s, which is also a project we contribute to heavily at SUSE.

I suspect it’s ‘kubernetes in kubernetes’

I suspect it's a play on another Kubernetes variant, `k3s`?

k in k

Can we go deeper than two levels? (inception vibes..)

Nice, now we need K3Kind

Can someone explain what this even means? Explain it like I am a software engineer with 20 years of experience who has not yet found a strong use case for running Kubernetes outside of hand-holding cloud provider options.

K8s encourages thinking about workloads as "cattle not pets". App running in K8s falls over? Blow it away and let K8s recreate it, etc.

However, clusters themselves often become the new pets. Many orgs do not reach a level of operational maturity where they can blow away and recreate whole clusters without downtime and toil.

A meta-pattern has emerged where higher-order tooling manages a whole fleet of clusters. This is an implementation of that meta-pattern, one that uses K8s itself as the higher-order tool to manage other clusters.

It's not a new idea, just a new implementation of the pattern.

Thank you. Wow, I had no idea this was a problem. Seems like nightmare territory. In a weird way it makes me respect Elixir/Erlang even more. It's not the exact same problem, obviously, but it really had me thinking about BEAM etc.

Imagine you are a developer of k8s-hosted systems. Now imagine you want to test your systems in a repeatable fashion. You'd need some way to spin up a test k8s cluster, deploy your application, and subject it to a test workload. That's simple and easy if you only need one physical cluster node: you can use k3s or perhaps kind. But if you want multiple physical nodes, not so easy. This solves that problem by leveraging an existing k8s cluster, which is a standard thing easily obtained. You might now ask why not just use that cluster directly (why the turducken?). Answer: cost, time, hassle, or wanting a different version of k8s than the hosting provider gives you.
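
A sketch of that loop with the official Python kubernetes client, under stated assumptions: run_tests is a hypothetical test driver, and a real virtual-cluster setup would create a k3k cluster here rather than a bare namespace:

    from kubernetes import client, config

    config.load_kube_config()  # credentials for the existing host cluster
    core = client.CoreV1Api()

    # Ephemeral sandbox per test run. A virtual cluster goes one step
    # further: instead of a namespace you get a whole API server, possibly
    # at a different Kubernetes version than the host's.
    ns = core.create_namespace(
        client.V1Namespace(metadata=client.V1ObjectMeta(generate_name="e2e-"))
    )
    try:
        run_tests(ns.metadata.name)  # hypothetical: deploy app, apply load
    finally:
        core.delete_namespace(ns.metadata.name)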

This is extremely niche. 99.9% of Kubernetes deployments will never need such nesting. It could be useful for testing tooling (I guess maybe operators?) without recreating the "top-level" cluster all the time.

Also it's a fun idea. Sandbox in a sandbox.

I've seen many bugs get to production for the lack of such testing.

Send the link to AI and ask :)