
Not at all; they're building a cool tech stack, but the only thing they sell is super expensive hardware that no individual - and not even that many businesses! - is likely to be able to afford.


So, the only thing really inherent about our price point is that we're selling compute by the rack: as it turns out, a whole rack of server-class CPUs (and its accompanying DRAM, flash, NICs, and switching ASICs) is pretty expensive! But this doesn't mean that it's a luxury good: especially because customers won't need to buy software separately (as one does now for hypervisor, control plane, storage software, etc.), the Oxide rack will be very much cost competitive with extant enterprise solutions.

Cost competitive as it may be, that doesn't mean it hits the price point for a home lab, sadly. One of the (many) advantages of an open source stack is allowing people to do these kinds of experiments on their own; looking forward to getting our schematics out there too!


It also turns out that not many people have 3-phase power and can support a heat/power load of 15 kW in their homes ;)


I actually suspect it would be a lot easier to support 15 kW of power in my home than 15 kW of cooling.

I know several people with 2x 240V 32A 3-phase in their garage; that's 20+ kW at any reasonable power factor. But a 15 kW cooler that would work in summer would annoy the hell out of any neighbours living closer than a mile.
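To put numbers on that (a back-of-the-envelope sketch in Python; I'm assuming "2x 240V 32A 3-phase" means two three-phase feeds at 240 V line-to-line with 32 A per phase, and 0.8 is my guess at a "reasonable" power factor):

    import math

    # Back-of-the-envelope only; the feed interpretation and the power
    # factor are assumptions, not measurements.
    V_LINE_TO_LINE = 240.0  # volts
    AMPS_PER_PHASE = 32.0
    FEEDS = 2
    POWER_FACTOR = 0.8

    kva_per_feed = math.sqrt(3) * V_LINE_TO_LINE * AMPS_PER_PHASE / 1000  # ~13.3 kVA
    kw_total = FEEDS * kva_per_feed * POWER_FACTOR                        # ~21.3 kW
    print(f"{kw_total:.1f} kW")  # comfortably above the rack's 15 kW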


Simple solution: Turn those neighbours into shareholders and they can sleep to the sound of money all summer long :)


Where does this leave companies that would like to take advantage of fully integrated software and hardware (yes, intentionally referring to your old project at Sun), but don't need a full rack's worth of computing power (and maybe never will), and don't have the in-house skills to roll their own? Or do you think that what you're selling really only has significant benefits at a large scale?


I think the intention is that those people are better served by consolidated cloud providers -- or even by a single-digit number of colocated physical servers.

It would be nice to have a known price point from a cloud provider which, once exceeded, prompts the question: "Should we buy a rack and colo it?" Even if the answer is "no", it's still good to have that option.

---

The thing is: datacenter technology has moved on from 2011 (when I was getting into datacenters), but only for the big companies (Google, Facebook, Netflix). I think Oxide is bringing the benefits of a "hyperscale" deployment to "normal" (i.e., single/double-digit rack) customers.

Some of those things include much more efficient DC conversion, so that not every machine needs to do its own AC/DC conversion.


What's kind of messed up, at least for tiny companies like mine, is that renting an ugly PC-based dedicated server from a company like OVH is currently cheaper than paying for the equivalent computing power (edit: and outgoing data transfer) from a hyperscale cloud provider like AWS, even though the hyperscalers are probably using both space and power more efficiently than the likes of OVH. My cofounder will definitely not get on board with paying more to get the same (or less) computing power, just for the knowledge that we're (probably) using less energy. I don't know what the answer is; maybe we need some kind of regulation to make sure that the externalities of running a mostly idle box are properly factored into what we pay?


> renting an ugly PC-based dedicated server from a company like OVH is currently cheaper than renting the equivalent computing power from a hyperscale cloud provider like AWS

That's not surprising; you're basically paying for scalability. An idle box doesn't even necessarily "waste" all that much energy if it's truly idle, since "deep" power-saving states are used pretty much everywhere these days.
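(If you want to check how idle a Linux box really is, the standard cpuidle sysfs interface reports cumulative residency per idle state. A quick sketch; the state names, e.g. C1/C6, vary by hardware:)

    import pathlib

    # Print per-state idle residency for CPU 0. The "time" file holds the
    # cumulative number of microseconds spent in that state.
    base = pathlib.Path("/sys/devices/system/cpu/cpu0/cpuidle")
    for state in sorted(base.glob("state*")):
        name = (state / "name").read_text().strip()
        seconds = int((state / "time").read_text()) / 1e6
        print(f"{name}: {seconds:.1f} s")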


Sure, the CPU may enter a power-saving state, but presumably for each box, there's a minimum level of power consumption for things like the motherboard, BMC, RAM, and case fan(s). The reason why AWS bare-metal instances are absurdly expensive compared to OVH dedicated servers is that AWS packs more computing power into each box. So for each core and gigabyte of RAM, I would guess AWS is using less power (edit: especially when idle), because they don't have the overhead of lots of small boxes. Yet I can have one of those small boxes to myself for less than I'd have to pay for the equivalent computing power and bandwidth from AWS.


Interestingly, I believe that unused DIMMs could be powered down if the hardware bothered to support that. Linux has to support memory hotplug anyway, since it has long been in use on mainframe platforms, so the basic OS-level support is already there. Given that hardware makers aren't addressing this in any way, my guess is that RAM power use in idle states is low enough that it basically doesn't matter.
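The OS-side plumbing is even visible from userspace, for what it's worth. A minimal sketch (needs root and a kernel with memory hot-remove support; this only offlines memory blocks from the OS's point of view, and actually cutting power to a DIMM would additionally need firmware/hardware cooperation):

    import pathlib

    # Offline every removable, currently-online memory block via sysfs.
    # The kernel migrates pages off a block before taking it offline;
    # the write can fail with EBUSY if the block has unmovable pages.
    base = pathlib.Path("/sys/devices/system/memory")
    for block in sorted(base.glob("memory*")):
        if (block / "removable").read_text().strip() != "1":
            continue
        state_file = block / "state"
        if state_file.read_text().strip() == "online":
            state_file.write_text("offline")
            print(f"offlined {block.name}")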


RAM uses about the same amount of power under high load as under low load, because it is constantly refreshing its contents either way.

Each stick of DDR4 is going to consume on the order of 1.2 W (idle CPUs can theoretically go lower than this).

I’d rather shut a whole machine down than go to the effort of offlining individual DIMMs, since the consumption is so low and quite static.
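Putting rough numbers on it (the ~1.2 W/stick figure is from above; the 16-DIMM box is just a hypothetical):

    # Rough numbers only; 16 DIMMs is a made-up example configuration.
    WATTS_PER_DIMM = 1.2
    DIMMS = 16
    HOURS_PER_YEAR = 24 * 365

    idle_watts = WATTS_PER_DIMM * DIMMS                # ~19 W
    kwh_per_year = idle_watts * HOURS_PER_YEAR / 1000  # ~168 kWh
    print(f"~{idle_watts:.0f} W idle, ~{kwh_per_year:.0f} kWh/year")

Real power, but a rounding error next to the rest of even an idle server.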


You're amortising a lot of software developers and sysadmins with your AWS bill. It's also in-trend, so it carries a bit of a premium.

They're not reasonably equivalent. But I don't doubt that Amazon is still laughing all the way to the bank.



