Could someone explain to me what the secret is here? Apart from the fancy marketing, is it the full integration? The hardware? It took me a while to find an actual picture of one of the modules.
They’re players in a newish market segment called “hyperconverged,” basically “you buy a rack and it runs your workload, you don’t worry about individual systems/interconnect/networking etc because we handled it.”
Oxide seems to be the best and most thorough in their space because they have chosen to own the stack from the firmware upwards. For someone who cares about that dimension, they are already a clear leader on that basis alone; for buyers who don't, it hopefully also makes the product superior to use.
Microsoft and Nutanix have had a hyperconverged architecture for over a decade. Oxide is mostly an alternative to Nutanix or other soup-to-nuts private clouds.
Oxide is a really nice platform. I keep trying to manipulate things at work to justify the buy-in (I really want to play with their stuff), but they aren't going for it.
AFAIK Nutanix doesn't sell a custom rack running custom firmware, preloaded with their software, though.
The first attempts at hyperconverged were very hardware focused and kinda meh. Nutanix is the best example - they pioneered hyperconverged hardware, but the firmware/software was extremely average. Oxide is the first to say "it should just feel like cloud, except you own it" and to build for that.
Oxide hardware is very well put together
I'm a bit puzzled because this seems backwards from what I thought had been the evolution of things.
Didn't companies historically own their own compute? And then started offloading to so-called cloud providers? I thought this was a cost-cutting measure/entry/temporary solution.
Or is this targeting a scale well beyond the typical HPC cluster (few dozen to few hundred nodes)? I ask because those are found in most engineering companies as far as I know (that do serious numerical work) as well as labs or universities (that can't afford the engineers and technicians companies can).
Also, what is the meaning of calling an on-prem machine "cloud" anymore? I thought the whole point of the cloud was that the hardware had been abstracted (and moved) away and you just got resources on demand over the network. Basically I don't understand what they're selling if it's not what people already call clusters. And then if the machine is designed, set up and maintained by a third party, why even go through the hassle of hosting it physically, and not rent out the compute?
> Didn't companies historically own their own compute?
As group-of-cats racks, usually, which is a totally different thing. Way "back in the day" you'd have an IT closet with a bunch of individually hand-managed servers running your infrastructure, and then if you were selling really oldschool software, your customers would all have these too, and you'd have some badly made remote access solution but a lot of the time your IT Person would call the customer's IT Person and they'd hash things out.
Way, way, way back in the day you'd have a leased mainframe or minicomputer and any concerns would be handled by the support tech.
> I thought the whole point of the cloud was that the hardware had been abstracted (and moved) away and you just got resources on demand over the network.
This idea does that, but in an appliance box that you own.
> And then if the machine is designed, set up and maintained by a third party, why even go through the hassle of hosting it physically, and not rent out the compute?
The system is designed by a third party to be trivially set up and maintained by the customer, that's where the differentiation lies.
In the moderately oldschool way: pallets of computers arrive, maybe separate pallets of SAN hosts arrive, pallets of switches and routers arrive. You have to unbox, rack, wire, and provision them, configure the switches, integrate everything. If your system gets big enough you have to build an engineering team to deal with all kinds of nasty problems - networking, SAN/storage, and so on.
In the other really oldschool way: An opaque box with a wizard arrives and sometimes you call the wizard.
In this model: you buy a Fancy Box, but there's no wizard. You turn on the Fancy Box and log into the Deploy a Container Portal and deploy containers. Ideally, and supposedly, you never have to worry about anything else unless the Big Status Light turns red and you get a notification saying "please replace Disk 11.2 for me." So it's a totally different model.
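To make the "no wizard" part concrete, here's a rough sketch of what that model looks like from the operator's side: the whole rack presents one control plane with one API, and "deploying" is a single authenticated request. The URL, endpoint paths, and field names below are illustrative assumptions, not Oxide's actual API:

    # Hypothetical "rack as API" sketch. Endpoint paths, field names, and the
    # RACK_URL/API_TOKEN values are illustrative assumptions, not Oxide's real API.
    import requests

    RACK_URL = "https://rack.example.internal"  # the appliance's own control plane
    API_TOKEN = "..."                           # issued by the rack, not by a cloud vendor

    resp = requests.post(
        f"{RACK_URL}/v1/instances",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"name": "web-01", "ncpus": 4, "memory_gib": 16, "image": "ubuntu-24.04"},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["id"])  # the rack picks a sled, places the workload, wires the network

No per-server install, no switch config, no storage integration project: that work was done by the vendor before the pallet shipped.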
> Didn't companies historically own their own compute?
Historically, companies got their compute needs supplied by mainframe vendors like IBM and others. The gear might have sat on premises in a computer room/data center, but they didn't really own it in any real sense.
> Basically I don't understand what they're selling if it's not what people already call clusters.
Is it really a cluster when the whole machine is an integrated rack and workloads are automatically migrated within the rack so that any impending failure doesn't disrupt operation? That's a lot closer to a single node.
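A toy model of that intuition (names and logic are made up for illustration, not Oxide's actual scheduler): the control plane treats sleds as replaceable parts and drains workloads off anything that looks unhealthy, so from the operator's perspective the rack keeps behaving like one machine:

    # Toy rack scheduler: migrate VMs off sleds that report impending failure.
    # Purely illustrative; no relation to Oxide's real control plane.
    from dataclasses import dataclass, field

    @dataclass
    class Sled:
        name: str
        healthy: bool = True
        vms: list[str] = field(default_factory=list)

    def rebalance(sleds: list[Sled]) -> None:
        """Move every VM off unhealthy sleds onto the least-loaded healthy ones."""
        healthy = [s for s in sleds if s.healthy]
        for sled in sleds:
            if not sled.healthy:
                while sled.vms:
                    target = min(healthy, key=lambda s: len(s.vms))
                    target.vms.append(sled.vms.pop())
                    print(f"migrated {target.vms[-1]}: {sled.name} -> {target.name}")

    sleds = [Sled("sled-1", vms=["db"]), Sled("sled-2", vms=["web"]), Sled("sled-3")]
    sleds[0].healthy = False  # a drive starts throwing errors
    rebalance(sleds)          # workloads keep running; a human swaps hardware later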
So a bit like SeaMicro in the 00's but with more software?
Rack scale computing, on both the software and hardware side. That means building custom network switching, power management, etc., into a turnkey solution that drops into a customer's data center. Unbox it, plug in a few connections, make a few configuration settings, and start deploying. It's the on-prem response to the cloud for companies running things at scale.
Companies spend an eye-watering amount of money on AWS relative to the underlying hardware cost. There's definitely a market for something like a mainframe that runs K8s, Postgres, Redis, and the like, where you buy once and then run forever.
I don't know if it's true or not but it seems like our AWS bill is something like paying the full purchase price of the underlying hardware every month.
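For a sense of how fast that pencils out, here's a back-of-envelope rent-vs-buy calculation. Every dollar figure is a made-up placeholder, not real Oxide or AWS pricing:

    # Rent-vs-buy break-even. All numbers are assumptions for illustration;
    # substitute your own hardware quote and cloud bill.
    hardware_cost = 500_000       # one-time purchase price of a rack (assumed)
    monthly_opex = 15_000         # power, space, support contract (assumed)
    monthly_cloud_bill = 120_000  # renting equivalent capacity (assumed)

    monthly_savings = monthly_cloud_bill - monthly_opex
    print(f"break-even after {hardware_cost / monthly_savings:.1f} months")  # ~4.8

Even at a far less extreme ratio than "full purchase price every month," ownership breaks even well within the hardware's useful life.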
AWS supplies a significant portion (was it something like 50%?) of Amazon's overall profits.
Turnkey, well-designed, on-prem private cloud.
Yes and:
IIRC, Bryan Cantrill has compared the value proposition of an Oxide (rack?) to an IBM AS/400.
>Bryan Cantrill has compared the value proposition of an Oxide (rack?) to an IBM AS/400.
I've heard Bryan and Co. call it a "mainframe for Zoomers," but it's much closer to what Nutanix or VxRail is/was doing than it is to an AS/400.
It's not really a mainframe because the RAS (Reliability, Availability, Serviceability) story is sorely lacking compared to what a true mainframe gives you. So a midrange machine like the AS/400 is probably a better comparison.
An AS/400 has a RAS story closer to a mainframe's than to Oxide's or Dell's. Oxide is closer to Dell (Oxide's RAS is effectively the same as any sled-based hyperconverged system's) than they want to admit.
For those of us who are unaware of "the value proposition" of an "IBM AS/400," could someone spell it out for us?
When the AS/400 came out in 1988, you could replace an entire mainframe with a box not much bigger than a mini fridge. The hardware is built for high reliability, and the OS and application software stack are tightly integrated. If Unix is "everything is a file," then AS/400 is "everything is a persistent object in a flat 64-bit address space."
The result is a system that can handle years of operation with no downtime. The platform got very popular with huge retailers for this reason.
Then in later years the platform got the ability to run Linux or Windows VMs, so that they could benefit from the reliability features.
High capacity, super reliable box that you could run your entire business stack on, if you could afford it.
The money IBM made with the AS/400 is actually completely mind-blowing when you compare it to the rest of the computing industry at the time.
Related question: Are services like AWS Outposts from the public clouds the main competitor for Oxide?
I don’t know who they see as competitors in market positioning (ie, who is selling against them on their target buyer’s calendar). But the space is called hyperconverged computing and there are a few other players like Scale Computing building “racks you buy that run your VMs.”
More like Nutanix, Xen, IBM, Kubernetes... private cloud, colo, on-premises... etc. There's a ton (I'd bet the majority) of business compute workload that is local/colo and not cloud.
From the podcasts, they talk a little about their customers. It's people who want something like AWS Outposts but fully disconnected and independent from any cloud, running 100% local.
I don't think that is the 'main' competitor, but it's certainly 'a' competitor for companies that have already put a lot of their eggs into the AWS basket.
The selling point, from the looks of it, is an on-prem cloud where you own the hardware.
For the business folks, they're focusing on price and sovereignty: owning your business. For technical people, they're focusing on quality: not having to deal with integration bugs.
Owning instead of renting, for cost and control, without giving up the benefits of the cloud.