Open Compute Project brings open source to server design

The open source approach has been widely successful in software development, so the Open Compute Project, an independent consortium, is trying to apply the same concepts to data center hardware development. The group is developing specifications and mechanical designs for motherboards, power supplies, server chassis and cabinets, and battery enclosures. The work has the potential to reduce server costs and ease deployments for new, large data centers.

An industry behemoth is behind the Open Compute initiative. As Facebook was building one of the world's largest data centers, one that supports 1 billion users, it was having trouble mixing and matching different vendors' server components.

"Some users -- especially those on the cutting edge of new social media cloud services -- have been frustrated that the consumerization of enterprise IT is happening too slowly, and they want to take more control of their own futures," said John Abbott, founder and chief analyst at 451 Research LLC. "For some years now, the likes of Google, Amazon, Yahoo and Facebook have mostly bypassed the traditional server vendors when building out their giant Internet data centers and instead built their own," he said.

Facebook launched the Open Compute Project in April 2011. Frank Frankovsky, vice president of hardware design and supply chain at Facebook, took on the roles of president and chairman of the Open Compute board of directors. The company contributed some of its own proprietary hardware specifications, then asked for input from vendors and other large corporations. The consortium's goal is to drive down costs and improve the efficiency of data center hardware by open sourcing hardware design. The group thinks its work will help many enterprises; it claims its approach to building data centers is 38% more efficient and 24% less expensive to build and run than typical data centers.

The initiative is developing specifications in five areas. The Open Rack specification establishes a new standard for rack design in hyperscale data center environments. Open Vault, the storage specification for the Open Compute Project, has a modular I/O topology. For power supplies, the group has developed a 700 W AC-to-DC power converter, a single-voltage 12.5 V DC power supply with a closed frame and self-cooling. Its battery cabinet is a standalone cabinet that provides backup power at 48 V DC to a pair of triplet racks. The group's hardware management specifications include a small set of tools that allow technicians to manage machines remotely.
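
Those power figures translate directly into electrical requirements. As a quick illustration, the following Python sketch works out what a 700 W, single-voltage 12.5 V DC supply implies for rail current and AC draw. The wattage and voltage come from the specification summary above; the conversion-efficiency figure is a hypothetical assumption added for the example.

    # Illustrative arithmetic for the Open Compute AC-to-DC power supply.
    # The 700 W rating and 12.5 V rail come from the spec summary above;
    # the efficiency figure is a hypothetical assumption for illustration.
    RATED_OUTPUT_W = 700.0       # rated DC output of one supply
    RAIL_VOLTAGE_V = 12.5        # single-voltage DC rail
    ASSUMED_EFFICIENCY = 0.92    # hypothetical AC-to-DC conversion efficiency

    def rail_current_amps(output_w: float, rail_v: float) -> float:
        """Current the DC rail must carry at a given output (I = P / V)."""
        return output_w / rail_v

    def ac_input_watts(output_w: float, efficiency: float) -> float:
        """AC input needed to deliver a given DC output at a given efficiency."""
        return output_w / efficiency

    print(f"Rail current at full load: {rail_current_amps(RATED_OUTPUT_W, RAIL_VOLTAGE_V):.0f} A")  # 700 / 12.5 = 56 A
    print(f"AC input at full load: {ac_input_watts(RATED_OUTPUT_W, ASSUMED_EFFICIENCY):.0f} W")     # ~761 W at 92%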

Much of this work is in a fledgling state. In October 2011, the Open Compute Project released the Open Rack 1.0 specification, the group's most fully developed element. It lays out the basic design for power distribution and cooling in server racks. The spec provides a 21-inch-wide slot for servers, widening the 19-inch form factor that has long been the standard for data center hardware. The extra width is designed to create more room for improved thermal management and better connections for power and cabling. One key innovation centers on power distribution: the group created a power shelf to house power supplies rather than placing them in server trays. The Open Rack 1.0 spec is expected to evolve over time and take on additional features, such as rack-level power capping and I/O on the backplane.
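
Rack-level power capping is still on the roadmap rather than in the 1.0 spec, but the concept is simple enough to sketch. The Python snippet below is a conceptual illustration only, not anything drawn from the specification: the rack budget, per-tray readings and throttle step are all hypothetical. It shows one greedy strategy a power-shelf controller could use to keep a rack under its budget by trimming the highest-drawing trays first.

    # Conceptual sketch of rack-level power capping. Nothing here comes from
    # the Open Rack spec; the budget, readings and step size are hypothetical.
    RACK_BUDGET_W = 8000.0    # hypothetical rack-wide power budget
    THROTTLE_STEP_W = 100.0   # hypothetical reduction per throttle action

    def enforce_rack_cap(draw_w: dict, budget_w: float = RACK_BUDGET_W) -> dict:
        """Return per-tray power limits that keep the rack under budget_w.

        Greedy approach: while the rack is over budget, trim the
        highest-drawing tray by one throttle step.
        """
        limits = dict(draw_w)
        while sum(limits.values()) > budget_w:
            hungriest = max(limits, key=limits.get)
            limits[hungriest] = max(0.0, limits[hungriest] - THROTTLE_STEP_W)
        return limits

    readings = {"tray-01": 2900.0, "tray-02": 3100.0, "tray-03": 2600.0}  # 8600 W total
    print(enforce_rack_cap(readings))  # trims tray-01 and tray-02 until the total is within 8000 W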

The Open Compute initiative has shown other signs of progress since its founding. The consortium has held three summits where members have discussed the development of various standards. The group has garnered support from major vendors, including Advanced Micro Devices Inc. (AMD), ASUSTeK Computer Inc., Dell Inc., Hewlett-Packard Co., IBM, Intel Corp., Red Hat Inc., Salesforce.com Inc. and VMware Inc. The Open Compute Project still faces many challenges, however. Developing specs is only one step toward ensuring product interoperability: no group has yet taken on the job of devising conformance test suites, so it's unclear how easily businesses will be able to mix and match different vendors' devices.

Some individuals view the consortium as a self-serving Facebook initiative and are concerned about that company's central role. "Open Compute Project has been trying to evolve and turn it more into a communal than a company-driven process," said Bob Ogrey, cloud technical evangelist and a fellow in server platform architecture at AMD.

Open Compute's approach does not mesh with all data centers, however. "The Open Compute Project standards focus on new data centers built from the ground up to support items like cloud computing," Ogrey said. "While it is very helpful for companies with new data centers, it does not work as well for companies that already have invested in large data centers." In fact, the consortium stripped many of the management functions that enterprises rely on to oversee their data center equipment. As a result, these firms might need to put new management tools and processes in place to use Open Compute Project products.

In response, new voices are pushing the project in different directions: Fidelity Investments and Goldman Sachs have been shaping standards geared to the financial services industry. The result could be a series of inconsistent standards that target different types of companies rather than a cohesive set of specifications that can be used in all data centers.

"Looking at how the Apache Software Foundation developed, it might turn out that OCP[the Open Compute Project] eventually becomes a high-level framework for a series of related projects," Abbott said. If that happens, vendors won't experience the economies of scale that often come from open source initiatives. Product pricing may be high, and the Open Compute Project's work could gain niche rather than widespread acceptance.

Another potential limitation is that the group's work is designed to address the concerns of very large corporations but might be too complex for small and medium-sized companies to deploy. "Using custom hardware has traditionally required a lot of in-house skills and expertise that has been supplied by support services from the vendor or channel partners," Abbott said. Right now, neither vendors nor the channel can deliver this support.

In sum, the Open Compute Project's work seems to be gaining momentum. It likely will have an impact on the design of new large data centers, but it's unclear at the moment whether its influence will trickle down to other data center market segments.

ABOUT THE AUTHOR: Paul Korzeniowski is a freelance writer who specializes in cloud computing and data center-related topics. He is based in Sudbury, Mass., and can be reached at paulkorzen@aol.com.

This was first published in September 2012
