Facebook says it is well along the open source hardware path and is already reaping the cost benefits.
The project is going so well that the company is preparing to expand its open source hardware approach with a 290,000-square-foot data center in Sweden stocked entirely with servers designed through the Open Compute Project (OCP), an open-spec initiative aimed at “scaling massive compute infrastructure in the most efficient and economical way possible.”
The initiative has produced designs that deliver a significant reduction in total cost of ownership (TCO). Facebook says that its Prineville, Oregon data center (which it used to test the OCP specs) uses 38 percent less energy to do the same work as Facebook’s previously existing facilities, while costing 24 percent less.
In a recent interview, Facebook VP of Hardware Design Frank Frankovsky spoke about the company’s move away from OEMs and toward its own open, but exacting, standards. One of the keys to achieving these TCO gains is eliminating what he calls “gratuitous differentiation.”
Frankovsky explains that OEM machines come with their own sets of user interfaces, APIs, and GUIs that become a hindrance for organizations like Facebook that deploy compute power at scale. “It’s different in a way that doesn’t matter to me,” says Frankovsky. “That extra instrumentation on the motherboard, not only does it cost money to purchase it from a materials perspective, but it also causes complexity in operations.”
This “gratuitous differentiation” includes things as simple as a branded plastic bezel on a server, which forces the fans to work harder. Frankovsky noted that in one study, the plastic bezel on a particular unit caused the fan to draw 28 watts of power, versus the three watts used on an equivalent Open Compute server.
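To put that 28-watt-versus-3-watt difference in perspective, here is a back-of-the-envelope sketch of the annual energy it represents. The electricity rate and fleet size are illustrative assumptions, not figures from Facebook or the article:

```python
# Hypothetical savings from the fan-draw difference cited above.
HOURS_PER_YEAR = 24 * 365      # 8,760 hours
WATTS_OEM_FAN = 28             # fan draw with the branded plastic bezel
WATTS_OCP_FAN = 3              # fan draw on the equivalent Open Compute server
RATE_USD_PER_KWH = 0.07        # assumed industrial electricity rate
SERVERS = 10_000               # assumed fleet size, for illustration only

delta_kwh_per_server = (WATTS_OEM_FAN - WATTS_OCP_FAN) * HOURS_PER_YEAR / 1000
fleet_savings_usd = delta_kwh_per_server * RATE_USD_PER_KWH * SERVERS

print(f"{delta_kwh_per_server:.0f} kWh saved per server per year")   # 219 kWh
print(f"${fleet_savings_usd:,.0f} saved per year across the fleet")  # $153,300
```

Even at these modest assumed rates, a cosmetic bezel adds up to six figures a year at scale, which is exactly the kind of waste the “gratuitous differentiation” argument targets.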
That said, Facebook isn’t completely shutting out the OEM vendors. HP and Dell are reportedly making designs that conform to the Open Compute specifications, and according to Brodkin, Facebook is testing them to see whether they come within its 5 percent performance margin for deployment. In the meantime, the attractiveness of the Open Compute model to the broader market is not lost on either HP or Dell: both companies have announced new, clean-sheet server and storage designs (code-named “Project Coyote” and “Zeus,” respectively) that will be compatible with OCP’s Open Rack specification.
So what do these open servers look like? Anyone can download specifications from the Open Compute Project website, which hosts designs covering storage, motherboards and servers, racks, virtual I/O, compliance and interoperability, hardware management, and data center design. System planners can download PDF spec sheets as well as CAD files. The OCP also keeps an archive of specs for projects that have been tabled.
Here’s an example of what the system assembly for their High Availability Server looks like:
Here is an example of what an Open Compute Project motherboard (Windmill) using two next-generation Intel® Xeon® processors from the E5-2600 product family looks like:
- 2 Intel® Xeon® E5-2600 (LGA2011) series processors, up to 115 W
- 2 full-width Intel QuickPath Interconnect (QPI) links, up to 8 GT/s per direction
- Up to 8 cores per CPU (up to 16 threads with Hyper-Threading Technology)
- Up to 20 MB last-level cache
- Single Processor Mode
- DDR3 direct-attached memory support on cpu0 and cpu1 with:
  - 4-channel DDR3 registered memory interface on processors 0 and 1
  - 2 DDR3 slots per channel per processor (total of 16 DIMMs on the motherboard)
  - RDIMM/LV-RDIMM (1.5 V/1.35 V), LRDIMM, and ECC UDIMM/LV-UDIMM (1.5 V/1.35 V)
  - Single-, dual-, and quad-rank DIMMs
  - DDR3 speeds of 800/1066/1333/1600 MHz
  - Up to 512 GB of memory with 32 GB RDIMMs
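The 512 GB maximum follows directly from the slot counts in the spec above; a quick sanity check:

```python
# Sanity check of the Windmill memory spec: 2 processors, 4 DDR3
# channels per processor, 2 DIMM slots per channel.
processors = 2
channels_per_cpu = 4
slots_per_channel = 2
dimm_gb = 32  # largest RDIMM capacity the spec lists

total_slots = processors * channels_per_cpu * slots_per_channel
max_memory_gb = total_slots * dimm_gb

print(total_slots)     # 16 DIMM slots on the motherboard
print(max_memory_gb)   # 512 GB maximum memory
```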