Data center design tips: What you should know about ASHRAE TC 9.9

IT staff concerned with data center cooling and support can address many of their challenges with information available from the American Society of Heating, Refrigerating and Air-Conditioning Engineers. ASHRAE Technical Committee 9.9, titled Mission Critical Facilities, Technology Spaces and Electronic Equipment, offers a wide range of information to the IT industry, yet many data center managers, facilities professionals and even consulting engineers remain unaware of both TC 9.9 and the vast amount of material it has developed. This tip outlines key information available from ASHRAE, and how it can be useful to you.

The foundational ASHRAE TC 9.9 publications
TC 9.9 has now published 10 books and three white papers covering a wide range of data center design and operational issues. These ASHRAE works are also referenced in the TIA-942 Data Center Standard.

The first TC 9.9 publication, Thermal Guidelines for Data Processing Environments, addresses the critical issues of operating temperature, humidity, airflow through equipment, and where to measure environmental conditions, all of which are longtime subjects of debate and disagreement among IT professionals. Originally published in 2004, it has since been modified twice. The second edition in 2009 expanded the recommended operating temperatures for Class 1 and 2 hardware to a range of 18 to 27 degrees Celsius (64.4 to 80.6 degrees Fahrenheit). This expansion was primarily intended to enable more use of free cooling, but it was also meant to reduce energy consumption when electrical refrigeration must be used.
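
As a quick illustration of the envelope discussed above, here is a minimal Python sketch that converts the recommended Class A1/A2 range to Fahrenheit and flags inlet readings that fall outside it. The rack names and sensor readings are invented for the example; this is not from the ASHRAE publications.

```python
# Minimal sketch (not from the ASHRAE publications): convert the recommended
# Class A1/A2 envelope of 18-27 C to Fahrenheit and flag hypothetical inlet
# readings that fall outside it.

RECOMMENDED_C = (18.0, 27.0)  # recommended inlet temperature range, Celsius

def c_to_f(celsius):
    """Convert a Celsius temperature to Fahrenheit."""
    return celsius * 9.0 / 5.0 + 32.0

def within_recommended(inlet_c):
    """Return True if an inlet temperature sits inside the recommended envelope."""
    low, high = RECOMMENDED_C
    return low <= inlet_c <= high

low_f, high_f = c_to_f(RECOMMENDED_C[0]), c_to_f(RECOMMENDED_C[1])
print(f"Recommended envelope: {RECOMMENDED_C[0]:.0f}-{RECOMMENDED_C[1]:.0f} C "
      f"({low_f:.1f}-{high_f:.1f} F)")

# Hypothetical readings from three rack inlet sensors
for rack, inlet_c in [("A01", 21.5), ("A07", 26.0), ("B03", 29.5)]:
    flag = "OK" if within_recommended(inlet_c) else "outside recommended range"
    print(f"Rack {rack}: {inlet_c:.1f} C -> {flag}")
```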

It is important to recognize that this thermal update was developed by a subcommittee of manufacturers who all agreed that the wider temperature range would apply to legacy hardware as well as to new equipment, with no effect on warranties or maintenance contracts. The subcommittee includes virtually every recognized maker of computer hardware, so if your vendor is on the list, there is no need to separately verify its support for the TC 9.9 operating envelope.

ASHRAE’s newest white paper, Thermal Guidelines for Data Processing Environments, was published in May 2011 and will be incorporated into the next edition of the Thermal Guidelines book. It adds two new equipment classifications with allowable inlet temperatures as high as 40 degrees Celsius (104 degrees Fahrenheit) and 45 degrees Celsius (113 degrees Fahrenheit), respectively. For the vast majority of data centers, only the unchanged Class A1 and A2 envelopes (formerly Classes 1 and 2) will be important for the foreseeable future.

The new classes with greatly expanded thermal ranges (dubbed Class A3 and A4) are intended for future systems; actual equipment in these classes doesn’t exist yet, but establishing them gives manufacturers a basis for developing hardware that can run year-round, even in warm climates, with no mechanical refrigeration at all. The marketplace will determine whether there is sufficient demand to justify developing and manufacturing technology that remains reliable at these elevated temperatures. Remember that these newer classes do not apply to legacy equipment.
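
For readers who want to see the class distinction in one place, the short sketch below encodes approximate allowable inlet-temperature ceilings commonly cited for the 2011 classes. The figures are included for illustration only and should be confirmed against the white paper itself before being relied upon.

```python
# Illustrative only: approximate allowable inlet-temperature ceilings for the
# 2011 equipment classes. Confirm these figures against the white paper before
# using them for anything real.

ALLOWABLE_MAX_C = {
    "A1": 32,  # formerly Class 1 (legacy and new hardware)
    "A2": 35,  # formerly Class 2 (legacy and new hardware)
    "A3": 40,  # new class, future equipment only
    "A4": 45,  # new class, future equipment only
}

def classes_tolerating(inlet_c):
    """Return the classes whose allowable ceiling meets or exceeds the given inlet temperature."""
    return [cls for cls, max_c in sorted(ALLOWABLE_MAX_C.items()) if inlet_c <= max_c]

print(classes_tolerating(38))  # only the new A3 and A4 classes tolerate 38 C
```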

This latest thermal white paper also includes charts that show relative power usage across a range of inlet temperatures, as well as the statistically expected change in failure rates at different operating temperatures. If you know your present power consumption, its cost, and your hardware failure history, you can use this data to predict energy cost savings and the tradeoff, if any, in hardware reliability if you dial up your air conditioners or make more use of free cooling. The same major hardware manufacturers who agreed on the temperature and humidity envelopes developed this data, sharing proprietary information on failure rates and power usage that was then “normalized” to create the charts. Remember, warranties are not voided at the higher Class A1 and A2 operating temperatures. If you experience a few additional failures, the equipment still has the same warranty coverage, and you now have the data with which to make informed decisions about operating cost, environmental responsibility and reliability.
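
To make the idea concrete, the following sketch shows the kind of back-of-the-envelope estimate those charts support. Every number in it, including the relative energy and failure-rate factors, is an invented placeholder rather than a value from the ASHRAE charts; substitute your own utility bills, failure history and the published curves.

```python
# Back-of-the-envelope sketch of the tradeoff calculation the charts enable.
# Every figure below is an invented placeholder, NOT a value from the ASHRAE
# charts: plug in your own consumption, tariff, failure history and the
# published relative factors.

baseline_cooling_kwh = 1_200_000   # annual cooling energy today (assumed)
electricity_cost = 0.11            # dollars per kWh (assumed)
annual_failures = 12               # hardware failures per year today (assumed)

relative_cooling_energy = 0.85     # cooling energy at the warmer setpoint vs. today (assumed)
relative_failure_rate = 1.10       # expected failure rate at the warmer setpoint vs. today (assumed)

savings = baseline_cooling_kwh * (1.0 - relative_cooling_energy) * electricity_cost
extra_failures = annual_failures * (relative_failure_rate - 1.0)

print(f"Estimated annual cooling savings: ${savings:,.0f}")
print(f"Expected additional failures per year: {extra_failures:.1f}")
```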

Digging into power and heating
The hardest thing for anyone in the IT industry to predict is a data center’s power and heat loads over the years ahead. Datacom Equipment Power Trends and Cooling Applications helps us do that.  The original book showed expected growth in power density for different types of computing equipment out to 2014.  An update to this book, expected to be out by early 2012, will extend the predictions to 2020.

The third publication in the series, Design Considerations for Datacom Equipment Centers, offers general guidelines for data center design, HVAC load development, a cooling overview, air distribution, liquid cooling, ancillary spaces, contamination, acoustical noise, structural and seismic design, fire detection and suppression, commissioning, availability and redundancy, and energy efficiency.

Collectively, these first three ASHRAE books were meant to provide fundamental information on modern data center design and operation that nearly everyone in the industry needs to know, and they may be all that most IT people really need to gain a working understanding of the systems that support their equipment.

“In-depth” publications for data center design
Additional books in the ASHRAE TC 9.9 series address critical topics in significant depth, along with illustrations and detail that will be of interest to IT professionals and the engineering community.

For example, dust, dirt and gaseous contaminants have detrimental effects on computing hardware, so data center professionals need to know how contamination affects computing system performance and reliability. Particulate and Gaseous Contamination in Datacom Environments, which has now been supplemented by the Contaminants white paper, has become an increasingly important publication as air-side free cooling grows in usage.

The contaminants information in this book will make you think about how frequently you need to change air conditioner filters, where the dirt is coming from, and what it is doing to your computing hardware and power consumption. Dust accumulation in filters and on heat sinks reduces cooling efficiency and increases fan speeds and power demands. If you’re in a dusty area, near construction, or in a high-pollen region, you could be drawing particulates in through your ventilating air, or even through doors or leaky windows. But more often you are bringing them in yourself, which is why equipment should always be unpacked outside the data center and why cartons should never be stored inside it.

And if you happen to be in an area with abnormal levels of sulfur dioxide or other harmful gases, or if there are chemical plants or businesses nearby that use toxic chemicals, you may need to consider the potential for gaseous contamination. Certain gases, when combined with the humidity levels common to data centers, can form acids that eat away solder joints and the copper lands on circuit boards. This has become of particular concern since the adoption of RoHS (Restriction of Hazardous Substances) in electronics manufacturing, which eliminated the use of lead solder; the silver-bearing, lead-free solder joints that replaced it can deteriorate rapidly when attacked by these contaminants. The subject is complex, but a simple test using silver and copper “coupons,” as described in the publications, can tell you if you’re at risk. The publications can also help you and your engineers decide what to do about it.

As we manage to pack more compute power into smaller processor chips, and more equipment into cabinets, cooling these extreme densities becomes increasingly challenging. Properly handling these loads requires data center design techniques that are very different from what was done in past years and, unfortunately, from what is still being done on many projects. Two books address these issues. High Density Data Centers: Case Studies and Best Practices provides real-life examples of how the challenges of cooling these loads can be addressed, and the results that have actually been obtained. For anyone in doubt about the value, importance and effectiveness of “state-of-the-art” design techniques, this book should show that the theory really does work in practice.

However, the heat generated by these increasingly high-density computers is rapidly exceeding what air alone can cool. The only logical solution is a return to direct liquid cooling, which is far more efficient than air cooling. The use of liquid cooling is admittedly frightening to many people in IT, but the topics covered in Liquid Cooling Guidelines for Datacom Equipment Centers and the new Liquid Cooling white paper should alleviate much of the concern by showing why it is becoming necessary, and how it can be done both safely and effectively.

Implementing energy-efficient tactics into data center design
The drain on our electrical grid imposed by the huge expansion in computing, and the resulting ecological effects, make it necessary to give serious consideration to energy efficiency in every data center design. Since cooling has historically been the least efficient part of the data center infrastructure, consuming 30% to 40% of total facility power, ASHRAE TC 9.9 published Best Practices for Datacom Facility Energy Efficiency. This book concentrates on designing for the highest energy efficiency and provides information not available elsewhere on environmental criteria, economizers, controls and energy management, efficient electrical distribution, datacom equipment efficiency, total cost of ownership, and emerging technologies. This material can help designers who are serious about maximizing energy efficiency, or who need to assess whether particular efficiency measures are practical for their designs or data center upgrades.

Since it’s also important to know whether the steps you are taking are improving or degrading your energy efficiency, the book Real-Time Energy Consumption Measurements in Data Centers, published in cooperation with The Green Grid, explains how metrics like power usage effectiveness (PUE) should really be derived. PUE has become the de facto standard for assessing data center efficiency, but PUE numbers can be easily skewed, so it is important to understand the different measurement methods and to be able to explain PUE numbers to management properly.
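
As a simple illustration of why the measurement method matters, the sketch below computes PUE from two hypothetical sets of annual meter readings that differ only in where the IT load is measured; all of the figures are invented for the example.

```python
# Minimal PUE sketch with invented annual meter readings. The arithmetic is
# trivial; the point is that the boundary where "IT load" is metered changes
# the result, which is why the measurement methodology in the book matters.

total_facility_kwh = 5_000_000    # energy at the utility meter (assumed)
it_at_ups_output_kwh = 3_200_000  # IT load metered at UPS output (assumed)
it_at_servers_kwh = 3_000_000     # IT load metered at the equipment itself (assumed)

pue_at_ups = total_facility_kwh / it_at_ups_output_kwh    # downstream distribution losses counted as IT load
pue_at_servers = total_facility_kwh / it_at_servers_kwh   # those losses counted as facility overhead

print(f"PUE with IT load metered at UPS output: {pue_at_ups:.2f}")
print(f"PUE with IT load metered at the servers: {pue_at_servers:.2f}")
```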

There has been great emphasis on making data centers “green,” and the corresponding methods and techniques have become more complex. ASHRAE TC 9.9 published Green Tips for Data Centers as a quick-reference consolidation of the fundamentals of energy-efficient design and practice, providing an understanding of the important “dos” and “don’ts.” It describes a number of proven ways in which data centers can be designed to be more environmentally responsible. For those who would simply like to find ways to do things better, but don’t have the luxury of building a highly sophisticated data center from scratch, this book should provide some good ideas.

High-density computing cabinets, and the equipment necessary to power and cool them, have added so much weight to data centers that more consideration must be given to the building structures that support them than ever before. It’s fine to run your data center efficiently, but if the equipment falls through the floor, it won’t matter much. There are numerous “rules of thumb” in the industry for specifying floor loading, but if you use those numbers you will probably be asked to justify them, because constructing high-strength floors is difficult and costly, and beefing up an existing floor to higher loading levels is an even greater problem. The book Structural and Vibration Guidelines for Datacom Equipment Centers is geared more toward architects and engineers, but simply referring your designers to it could save you a world of headaches in explaining what is needed and justifying it.
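
For a sense of why rule-of-thumb numbers invite scrutiny, here is a deliberately naive sketch that averages a hypothetical loaded cabinet’s weight over its footprint and compares it with an assumed floor rating. It ignores point loads through casters and leveling feet, which is exactly the kind of detail the book leaves to structural engineers.

```python
# Deliberately naive sketch: average a hypothetical loaded cabinet's weight over
# its footprint and compare it with an assumed floor rating. Real evaluations
# must account for point loads through casters and leveling feet and belong
# with a structural engineer.

cabinet_weight_lb = 2500            # loaded cabinet weight (assumed)
footprint_sqft = (24 * 42) / 144.0  # 24 in x 42 in footprint, in square feet
floor_rating_psf = 250              # assumed floor capacity, pounds per square foot

average_load_psf = cabinet_weight_lb / footprint_sqft
print(f"Average load over the footprint: {average_load_psf:.0f} lb/sq ft "
      f"(vs. an assumed {floor_rating_psf} lb/sq ft rating)")
```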

Ultimately, it’s essential to have a set of industry-standard “best practices” for modern data center design and operation available for reference, and it’s even more important for those best practices to be updated regularly. The ASHRAE TC 9.9 series has tried to fill that need and can be a good place to start gaining critical knowledge. New publications will soon provide information on economizers and on the ASHRAE 90.1 Standard. These will be critically important, since 90.1 is incorporated into many energy codes and is now removing previous exemptions for economizers in data centers.

About the expert: Robert McFarlane is a principal in charge of data center design for the international consulting firm Shen Milsom & Wilke LLC. McFarlane has spent more than 35 years in communications consulting, has experience in every segment of the data center industry and was a pioneer in developing the field of building cable design. McFarlane also teaches the data center facilities course in the Marist College Institute for Data Center Professionals program, is a data center power and cooling expert, is widely published, speaks at many industry seminars and is a corresponding member of ASHRAE TC 9.9, which publishes a wide range of industry guidelines.

This was first published in November 2011
