This article is about technical details of the product that aren't user-facing.
The business is fairly straightforward: we sell computers, a rack at a time. You as a customer can buy a rack, and put it in your data center. The rack offers an in-browser management console, built on top of an API you can use too. You use these tools to set up virtual machines. You can then use those VMs however you want. You get the cloud deployment model but with the "I buy the servers" ownership model.
There are a few different advantages, depending on how you want to look at it.
Starting from a rack as the smallest unit rather than 1U brings a lot of advantages, but there aren't really vendors currently selling these sorts of things; instead, "the hyperscalers" have internal teams building them in-house. There are a lot of organizations that want hyperscale-style servers but aren't going to spin up a division to make them themselves.
Another advantage is that everything is designed to work with the rest of it: you (or the OEM you're buying from) are not cobbling together a bunch of hardware, firmware, and software solutions from disparate vendors and hoping the whole thing works. Think "Apple" or "Sun" rather than "IBM PC Compatible." This is easier for users, and it allows us to build systems we believe are more reliable.
There are also smaller things, like "as much as possible, everything is open source/free software," which matters to some folks (and allows for interesting things like the above blog post to happen!) and is less important to others.
> There are a lot of organizations who want hyperscale style servers but aren't going to start a division to begin making them themselves.
How does this differ from what large players like Dell are offering under the "hyperconverged" moniker? For example, Dell's VxRail[0] appears (from marketing speak, anyway) to be a single rack with integrated networking and storage that you can ask to "just start a VM".
So, "hyperscale" and "hyperconverged" are two different things. Names are hard.
"Hyperconverged" is a term used by VMware to describe a virtualized all-in-one platform. You get compute, storage, and networking, all virtualized as one appliance rather than as individual ones. VxRail is basically Dell EMC's implementation of this idea: you get one of their servers with vSAN and vSphere all set up and ready to go.
"Hyperscale infrastructure" describes an approach to designing servers in the first place. A lot of folks moved toward commodity hardware in the datacenter a decade or two ago, scaling out by adding more and more individual machines. The hyperscale approach is top-down as opposed to that bottom-up style: how would we design a data center, not just a server? Don't build one server and then stick thousands of them in a building; think about how to build a building full of servers. This is more of an adjective, like "RESTful," than a standard, like HTTP/1.1. That being said, the Open Compute Project does exist, but I still think it's closer to a way of thinking about things than a spec.
Okay, so all of that is still a bit fuzzy. But it's enough background to start to compare and contrast, so hopefully it makes a bit more sense.
The first difference is the physical construction of the hardware itself. If you buy VxRail, you're still buying 1U or 2U at a time. With Oxide, you're buying an entire rack. The rack isn't built in such a way that you can just pull out a sled and shove it into another rack; the whole thing is built in a cohesive way. This means that not every organization will want to own Oxide; if you don't have a full rack of servers yet, you don't need something like what we offer. But if you're big enough, there are advantages to designing for that scale from the start. This is also what I meant by there not being a place to buy these things: other vendors will sell you a rack, but it's made up of 1U or 2U servers, designed not as a cohesive whole but as a collection of individual parts. The organizations that are doing it our way are building for themselves and don't sell their hardware to other organizations. This is also one way in which, in a sense, Oxide and VxRail are similar: you're buying a full implementation of an idea from a vendor. The ideas are just at different scales.
The other side would be software, which of course is tied into the hardware. With VxRail, you're getting the full suite of software from VMware. You may love that, you may hate it, but it's what you're getting. With Oxide, you're getting our own software stack, which is what this article describes in detail. You may love that, you may hate it, but it's what you're getting :). That being said, I haven't actually used a full enterprise implementation of the VMware stack, so I don't know to what degree you can mess with things, but our management software is built on top of an API that we offer to customers too, so you can build your own whatever on top of that if you'd like. Another thing here is that, well... the VMware stack is not open source. All our software will be. That may or may not matter to you.
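To make "build your own whatever on top of that" concrete, here's a minimal sketch of what a custom provisioning tool talking to a rack's management API might look like. Everything here is illustrative: the endpoint path, field names, and base URL are assumptions for the sketch, not the actual Oxide API.

```python
import json

# Hypothetical base URL for the rack's management API (illustrative only).
API_BASE = "https://rack.example.internal/v1"

def build_create_vm_request(name: str, vcpus: int, memory_gib: int, image: str) -> dict:
    """Assemble the HTTP request a custom tool might send to provision a VM.

    Returns a plain dict describing the request, so the sketch stays
    self-contained; a real tool would hand this to an HTTP client and
    attach whatever authentication the API requires.
    """
    return {
        "method": "POST",
        "url": f"{API_BASE}/instances",  # hypothetical endpoint name
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({
            "name": name,
            "vcpus": vcpus,
            "memory_gib": memory_gib,
            "image": image,
        }),
    }

req = build_create_vm_request("build-runner-01", vcpus=4, memory_gib=16, image="debian-12")
print(req["method"], req["url"])
```

The point isn't the specific shape of the payload; it's that because the in-browser console and your own tooling drive the same API, anything the console can do is scriptable.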
The last bit about software, though, is I think a bit more interesting: even though you're buying a full solution from Dell EMC, you're also sort of not. That is, Dell and VMware are two different organizations. Yes, part of what you're getting is that they say they have pre-tested everything in the factory to make sure it all works together well, but at the end of the day, it's still the integration of two different organizations' (and probably more) software. With Oxide, because we're building the whole thing, we can not only make sure things work well together but really take responsibility for that. We can build deep integrations across the entire stack and make sure that it not only works well but is debuggable. Dell EMC isn't building the hypervisor, and VMware isn't writing the firmware. Oxide is writing all of it. We think this really matters for both reliability and efficiency reasons.
So... yeah. That's a summary, even though it's already pretty long. Does that all help contextualize the two?
You're welcome! Sorry you're being downvoted, no idea what's up with that, it's a reasonable question. Sometimes our stuff can seem opaque, but that's because we're mostly focused on shipping right now, rather than marketing. Always happy to talk about stuff, though.
It's not OpenStack. It's not VMware. It's not Kubernetes. It's not Proxmox. It's not Xen. It's not Anthos. It's not GCDE. It's not Outposts.
So who and what is it for? What's the use case where none of these other products fit the bill?
Especially for an on-premises use case.