
The changing face of ASHRAE data center environmental standards

As IT evolves, data center environmental standards must also change. When mainframes ruled IT, the conventional wisdom was to keep them as cold as possible. Water cooling was the norm, and cryo-cooling through the use of pumped refrigerants was Hollywood’s preferred manner of showing supercomputers in use.

But as the use of distributed computing spread, the interdependencies between the data center facility and the computing equipment held within it became more complex. The main IT "engine" was no longer concentrated in one part of the facility; the parts were now spread around. Early tower systems still had to be kept cool, and many an IT manager has had servers fail through lack of cooling when the fans in those towers died and systems management software did not detect the failure.

As the need for more computing power grew, rack systems began to replace towers. Standard-sized racks drove the commoditization of computer equipment into multiples of height units (1U, 2U, 4U and so on) within a 19-inch frame. Such equipment density made cooling even harder: radial fans gave way to axial fans, which shift lower volumes of air.

The data center facility itself became more important. Computer room air conditioning (CRAC) units became the norm, chilling and treating air so that it could cool the equipment as required without either causing condensation, if the moisture content of the air was too high, or encouraging the growth of dendrites that could cause electrical shorts, if it was too dry.

However, for many organizations, getting this right was a bit hit-and-miss, as no official guidelines existed for the environmental conditions of cooling air within a data center. To this end, the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) produced a document in 2004 with a set of best-practice guidelines for the environmental parameters under which a data center should run.

The design parameters for IT equipment in the original ASHRAE document had to be quite prescriptive. ASHRAE also had to deal with the predicted growth in equipment densities and in the thermal output of that equipment. However, it could not depend on predictions of improvements in the thermal and environmental envelopes of future equipment, which led to advised parameters that sat well within the tolerances of equipment launched even soon after the guidelines were produced.

In 2008, ASHRAE updated its data center standards to reflect that the pace of change in IT equipment was different than expected. The rise of blade servers and multi-core CPUs in multi-CPU chassis meant that equipment densities had increased massively. Chip manufacturers had also done much to improve both the thermal performance and the resiliency of their chips, for example through the selective shutdown of parts not in use.

This second set of guidelines put the focus on maintaining high reliability of the equipment in a data center in the most energy-efficient manner, a change from the 2004 guidelines, which focused on reliability alone. The increasing focus on energy usage within data centers means that measures such as power usage effectiveness (PUE) have become more important; maintaining reliability without also ensuring low energy usage is no longer enough.
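As a reminder of what PUE measures, it is simply the total energy drawn by the facility divided by the energy delivered to the IT equipment, so a value approaching 1.0 means almost all power is going to useful computing. The short Python sketch below illustrates the calculation; the figures are invented purely for the example and do not come from ASHRAE or any real facility.

# Minimal PUE illustration: PUE = total facility energy / IT equipment energy.
# The figures below are invented for the example only.
total_facility_kwh = 1_800_000   # annual energy drawn by the whole facility
it_equipment_kwh = 1_200_000     # annual energy delivered to the IT load

pue = total_facility_kwh / it_equipment_kwh
print(f"PUE = {pue:.2f}")        # 1.50 -- a third of the facility's power is overhead (cooling, losses)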

ASHRAE data center standards in the future
To this end, ASHRAE has now expanded its data center class definitions from two to four, giving organizations a greater range of options for balancing reliability against energy efficiency while still following best practices. As well as high-end enterprise-class data centers, ASHRAE now covers server rooms and less mission-critical environments. Provided that an organization understands where the technical and business risks of its IT equipment reside, having four sets of guidelines creates greater flexibility in the environmental choices for different parts of a data center.

The 2008 document gave general guidelines of a dry-bulb temperature between 60 and 89°F (recommended 65 to 80°F), an allowable relative humidity range of 20 to 80%, and a maximum dew point of 62°F at a maximum elevation of 2 miles above sea level for a Class 1 data center. The allowable ranges for a Class 2 data center were marginally wider, but the recommended levels were much the same. These guidelines extended the upper recommended temperature limit by a few degrees and the upper recommended relative humidity limit by 5%. This could have had a major impact on data center cooling costs, except that the majority of data center owners still preferred the "carbon life-form guidelines": keeping the data center at a temperature more suited to employees, at around 70°F.

The 2011 guidelines retain the same overall recommended temperature, relative humidity and elevation ranges across the four data center classes, but the allowable operating ranges now run from 59 to 90°F for enterprise-class servers (an A1 data center) through 41 to 113°F for volume servers, storage, PCs and workstations (an A4 data center). For data centers designed to run at the upper end of these allowable ranges, this opens the door to far greater use of free air cooling or other lowest-cost approaches.
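To make the class differences concrete, the short Python sketch below checks a dry-bulb temperature reading against the allowable ranges quoted above for the A1 and A4 classes. The ranges are taken straight from the figures in this article; the function and variable names are illustrative only, not part of any ASHRAE tooling.

# Allowable dry-bulb ranges (degrees F) as quoted in this article.
ALLOWABLE_F = {
    "A1": (59, 90),    # enterprise-class servers
    "A4": (41, 113),   # volume servers, storage, PCs, workstations
}

def within_allowable(temp_f, dc_class="A1"):
    """Return True if a dry-bulb reading falls inside the class's allowable range."""
    low, high = ALLOWABLE_F[dc_class]
    return low <= temp_f <= high

# Example: a 100°F reading is outside the A1 envelope but still allowable for A4.
print(within_allowable(100, "A1"), within_allowable(100, "A4"))  # False True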

It is good to see that ASHRAE is keeping its data center standards dynamic and balanced between the needs of reliability and energy efficiency. Combined with facilities best practices in the building and use of low-cost cooling approaches, and equipment best practices such as hot aisle/cold aisle layouts and the use of computational fluid dynamics (CFD) to identify and eradicate hot spots, an ASHRAE-class data center should enable IT and facilities management to ensure that the technical platform and the physical data center meet the needs of the organization. However, data center managers will have to realize that the data center is built for silicon-based systems, not carbon-based ones, and that cooling to 70°F is just throwing away good money.

ABOUT THE AUTHOR: Clive Longbottom is the co-founder and service director at Quocirca and has been an ITC industry analyst for more than 15 years. Trained as a chemical engineer, he worked on anti-cancer drugs, car catalysts and fuel cells before moving into IT. He has worked on many office automation projects, as well as Control of Substances Hazardous to Health, document management and knowledge management projects.

This was first published in June 2012
