Dan Stroot

Facebook Saves a BILLION Dollars via Open Compute Designs


Last week, IBM announced that it was selling its low-end server business to Chinese hardware manufacturer Lenovo. The deal has been widely viewed as the logical result of the commoditization of x86-based servers, in much the same way PCs were commoditized a decade ago.

This week Microsoft announced it is joining the Open Compute Project and will “open source” its server designs, sharing them with the world at large. Microsoft joins Facebook, Google, Amazon, Bloomberg, Intel, Box and many others. Microsoft will be contributing designs for the servers that power global cloud services like Windows Azure, Office 365, and Bing.

It's clear that the server business is never going to be the same. For Microsoft, open source goes against its DNA, and for IBM, the server business was considered core not so long ago.

Billion Dollar Idea

Facebook started this movement in 2011, when it open-sourced its first server designs and founded the Open Compute Project. The aim was to foster a community that would share hardware designs and bootstrap a more efficient way of getting those designs built.

Now, nearly three years later, the project lets Facebook crowdsource improvements to its infrastructure and allows multiple vendors to produce identical equipment, which gives Facebook supply-chain diversity. In addition, there are no vendor "support" costs, and you self-insure against failure via redundancy, so your true investment is only the hardware and the engineering talent necessary to operate it. This is a CIO's dream.

Today at the Open Compute Summit, Facebook CEO Mark Zuckerberg said, "In the last three years alone, Facebook has saved more than a billion dollars in building out our infrastructure using Open Compute designs."

A Million Dollars Isn't Cool. You Know What's Cool? A Billion Dollars.

— Sean Parker in "The Social Network"

Zuckerberg also noted, "In just the last year we’ve saved the equivalent amount of energy of 40,000 homes in a year, and saved the amount of emissions equivalent to taking 50,000 cars off the road for a year."

Next Up - Network Gear

In the recent past, switches were black boxes with integrated data planes, control planes, and feature sets from a single vendor. The cost of sufficient capacity and performance was prohibitive, and the difficulty of making changes made networks brittle and network engineers gun-shy.

Today, however, merchant silicon has surpassed custom Application-Specific Integrated Circuits (ASICs). This in turn enables enterprise-grade networking hardware from Original Design Manufacturers (ODMs): bare-metal network hardware.

What becomes critical is the software layer that manages and operates the network. Google was an early driver of software-defined networking on top of bare-metal commodity hardware, helping define OpenFlow and breathe life into the technology.
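To make that idea concrete, here is a minimal sketch of what "the control plane as software" looks like in practice. The article doesn't name a specific stack, so this example uses the open-source Ryu controller framework purely as an illustration; the framework choice, the class name, and the single table-miss rule are my assumptions, not anything Facebook or Google have published.

```python
# Illustrative sketch: a tiny OpenFlow 1.3 controller app built with the Ryu framework.
# When a switch connects, it installs a low-priority "table-miss" rule that sends
# unmatched packets to the controller -- the forwarding policy now lives in software,
# not in a vendor's closed firmware.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class TableMissController(app_manager.RyuApp):  # hypothetical example app
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Match everything; forward unmatched traffic to the controller for a decision.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]

        # Push the flow rule down into the switch's data plane.
        mod = parser.OFPFlowMod(datapath=datapath, priority=0,
                                match=match, instructions=inst)
        datapath.send_msg(mod)
```

You would launch something like this with `ryu-manager` and point an OpenFlow-capable switch at the controller; the point is simply that forwarding behavior becomes a few dozen lines of ordinary code you can version, test, and change, rather than a feature set baked into a single vendor's box.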

This area is rapidly maturing, and today we see examples like Dell hoping to sell hardware and services through a partnership with Silicon Valley startup Cumulus Networks.

So what is going to happen to Cisco? Is it any wonder J.P. Morgan downgraded them today?

Trickle Down Effect

All of this innovation benefits corporate clients, who will either have better, faster, less expensive cloud platforms to leverage, or see cost and complexity reduced directly in their own data centers. Saving energy and being "greener" is icing on the cake.
