Manager's Guide

Green data center guide for managers

This green data center guide for managers is divided into the following parts for easy readability and reference:


    • What is a green data center?
    • Green data center design considerations
    • Challenges associated with green data center setup
    • Green data center certification
    • Further reading on Green Data Center


    • What is a green data center? 

      In recent times, there has been increasing awareness of the potentially detrimental effects of human activity on the environment. This has led governments around the world to actively define and enforce regulations to lessen inefficient use of natural resources and reduce hazardous waste disposal. Organizations too are expected to play their part, and corporate social responsibility policies today routinely include eco-friendliness as a key element. The “green data center” is a direct manifestation of such policies.
      In a green data center, efficient energy utilization is the prime goal, and all components, especially power and cooling systems, are designed to operate with a minimal carbon footprint. The efficiency of a green data center is measured in terms of PUE (power usage effectiveness), using the following formula:
      PUE = Total data center power / IT equipment power
      Thus, PUE is determined by dividing the total power entering a data center by the power used to run the IT equipment within it. Total power includes lighting, cooling and air movement equipment, and inefficiencies in electricity distribution within the data center. The IT equipment portion is the equipment that performs computational tasks, i.e., servers, networking equipment and storage systems. Globally, the average PUE at the time of writing was around 1.83; lower values indicate greater efficiency, with 1.0 being the theoretical ideal.
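      As a worked example (the figures below are illustrative, not drawn from this guide), a minimal PUE calculation in Python:

      def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
          """Power usage effectiveness: total facility power / IT equipment power."""
          if it_equipment_kw <= 0:
              raise ValueError("IT equipment power must be positive")
          return total_facility_kw / it_equipment_kw

      # Illustrative figures: 1,200 kW entering the facility, of which 750 kW
      # is consumed by servers, storage and networking gear.
      print(f"PUE = {pue(1200, 750):.2f}")  # PUE = 1.60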
      While the impetus behind setting up a green data center may be regulatory and environmental in origin, the inevitable positive impact on the bottom line is added incentive for the greening of data centers, whether the data center already exists or is being set up anew.
    • Back to Index
    • Green data center design considerations 

      If a new data center is being constructed from scratch, then all the steps explained herein are relevant to accomplishing the green design goals. However, even with existing data centers, several of the optimization techniques are applicable and can be utilized to transform the data center into a green data center.
      a) Data center (building) architecture:
      • Building orientation. Aligning a building in the north-south direction minimizes the intensity of heat radiation from the sun.
      • Low-emission building materials. For example, use aerated, autoclaved concrete blocks for walls, as they offer better thermal insulation than traditional brick walls.
      • Low-emission carpets and paints. For example, use reflective paint to minimize heat entry into the building.
      • Liquid recycling mechanism. Construct one if a liquid cooling mechanism is employed.
      • Fire suppression systems. Ensure that ozone-depleting materials are not used.
      • Landscaping and soil conditions. Eco-friendly materials must be used in all instances.
      • Power and cooling buffers. The building should be pre-provisioned with spare power and cooling capacity, without reduction in efficiency, to facilitate seamless scaling up of future hardware requirements.
      • Age of the building. The need for eco-friendly construction materials has been recognized only in the last decade or so. Older buildings may not adhere to such norms and hence may be unsuitable for housing green data centers.
    • Back to Index

    • b) Server and storage architecture:
      • Consider high-density deployment of server and storage systems in the data center.
      High density enables very high efficiency in a properly designed data center. To accomplish this, create a granular density specification document that includes at least the following:
      a) Room layout with rack locations.
      b) Row-wise specifications.
      c) Inventory of planned server and storage systems.
      d) Placement of planned server and storage systems.
      e) Design tools to match the specifications to the size of racks, power and cooling elements.
      In the case of an existing data center, take an inventory of your server and storage systems covering the following (a minimal sketch of such an inventory appears after this list):
      a) Application-to-server mapping.
      b) Actual usage of each application.
      c) Actual utilization of storage systems.
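      The sketch below shows one way such an inventory might be represented in Python; the field names and figures are hypothetical, not prescribed by this guide:

      from dataclasses import dataclass, field

      @dataclass
      class ServerRecord:
          """One row of a hypothetical density/inventory specification."""
          hostname: str
          rack: str                                               # rack location from the room layout
          applications: list[str] = field(default_factory=list)  # application-to-server mapping
          avg_cpu_util: float = 0.0                               # measured utilization, 0.0-1.0
          power_rating_w: int = 0                                 # nameplate power, for density planning

      inventory = [
          ServerRecord("web-01", "row2-rack5", ["intranet portal"], 0.12, 450),
          ServerRecord("db-01", "row2-rack6", ["ERP database"], 0.58, 700),
      ]

      # Per-rack power density: a basic input for matching racks to power and cooling.
      density: dict[str, int] = {}
      for s in inventory:
          density[s.rack] = density.get(s.rack, 0) + s.power_rating_w
      print(density)  # {'row2-rack5': 450, 'row2-rack6': 700}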

    • Virtualization. Use technologies that enable virtualization with blade servers, multiple processors, storage networks, etc. Virtualization helps eliminate hot spots: local areas of excess heat caused by heavily utilized servers concentrated in a small footprint, often during peak usage periods. By redistributing workloads to areas with excess cooling capacity, virtualization evens out the thermal load and improves cooling efficiency within the green data center.
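      To illustrate the consolidation side of virtualization (this is a toy first-fit placement, not a recommendation of any particular algorithm or product), the sketch below packs the CPU demands of many lightly loaded servers onto far fewer virtualization hosts:

      def consolidate(demands: list[float], host_capacity: float = 1.0) -> list[list[float]]:
          """First-fit-decreasing placement of VM CPU demands onto hosts (illustrative only)."""
          hosts: list[list[float]] = []
          for d in sorted(demands, reverse=True):
              for h in hosts:
                  if sum(h) + d <= host_capacity:
                      h.append(d)
                      break
              else:
                  hosts.append([d])  # no existing host fits; power on another
          return hosts

      # Ten lightly loaded physical servers' worth of work...
      vm_demands = [0.12, 0.08, 0.25, 0.10, 0.18, 0.05, 0.22, 0.15, 0.09, 0.11]
      hosts = consolidate(vm_demands, host_capacity=0.80)  # leave headroom for peaks
      print(f"{len(vm_demands)} workloads fit on {len(hosts)} hosts")  # -> 2 hosts here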
    • Back to Index

    • c) Power infrastructure:

      Create a real-time power measurement mechanism. Power consumption in the data center can be separated into two general categories. The first, equipment power, is the power consumed by servers, storage devices and networking equipment. This power is converted to heat given off by data center equipment. As a result, the second category, cooling power, is needed to remove this heat from the data center environment.
      For a new green data center, both types of power need to be estimated in detail. For an existing data center, both need to be measured effectively on an ongoing basis; without these measurements, it is difficult to establish how far greening of the data center has reduced power consumption. (A minimal monitoring sketch follows this list.)
      Maintain the temperature of the room and racks (top, middle and bottom) as per industry guidelines, such as those of ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers). Cooling below the recommended thresholds wastes power and overworks the cooling equipment.
      Use energy-efficient UPS systems. Electrical losses can be significant at the UPS level.
      Air-conditioning. Automatic regulation of fan speeds in proportion to the amount of heat generated reduces overall power consumption. Use a refrigerant system with zero ozone depletion potential.
      Backup systems. To save energy, keep backup systems in standby mode when idle.
      Lighting system. Choose an energy-efficient option.
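      The sketch below suggests the shape such a measurement mechanism might take; the meter names and readings are hypothetical, and in practice the values would come from PDU- or branch-circuit-level metering rather than hard-coded figures:

      # Hypothetical meter readings in kW, grouped into the two categories above.
      def read_meters() -> dict[str, float]:
          return {
              "servers": 310.0, "storage": 95.0, "network": 25.0,        # equipment power
              "crac_units": 180.0, "ups_losses": 22.0, "lighting": 8.0,  # cooling and overhead
          }

      EQUIPMENT = {"servers", "storage", "network"}

      def snapshot() -> tuple[float, float, float]:
          """Return (equipment kW, overhead kW, running PUE) for one sample."""
          readings = read_meters()
          equip = sum(v for k, v in readings.items() if k in EQUIPMENT)
          overhead = sum(v for k, v in readings.items() if k not in EQUIPMENT)
          return equip, overhead, (equip + overhead) / equip

      equip, overhead, pue = snapshot()
      print(f"IT load {equip:.0f} kW, overhead {overhead:.0f} kW, PUE {pue:.2f}")  # PUE 1.49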
    • Back to Index

    • d) Cooling infrastructure:

      Review current cooling capacity. The cooling improvement of an existing data center should commence with a review of current capacity. To accomplish this, determine the data center's maximum cooling capacity and whether it has been exceeded. To simplify this calculation, assume a one-to-one mapping between watts of power consumption and watts of cooling as the maximum ratio for cooling capacity, and use the results to determine whether changes to existing capacity are needed. (A simple capacity check is sketched after this list.)
      Install blanking panels. Racks within a data center are not always completely filled. The resulting open spaces allow hot air to recirculate around equipment, increasing the risk of overheating. This can be avoided by using blanking panels, made of either metal or plastic, to seal off these rack openings.
      Isolate hot aisles and cold aisles. Equipment fronts receive cool air and face each other to create "cold aisles"; the rears of adjacent rack rows face each other, creating "hot aisles". Cooling units need to be aligned with the hot aisles, and airflow is directed front-to-back through the equipment racks. Gaps between and within racks need to be sealed to create a pressure differential between the two aisles, which prevents mixing of hot and cold air.
      Install adaptable, perforated floors. Perforated floor tiles promote proper airflow. However, the floor must be properly sealed to ensure that there are no leaks that would reduce the uniform flow of air beneath it. The arrangement of tiles should also be adaptable enough to accommodate the addition or movement of equipment within the green data center.
      Remove obstructions to cool airflow. Create an unobstructed pathway from the cool air source to the intake of the servers. A raised floor configuration allows cool air to flow beneath it and rise through perforated tiles to cool the racks, but movement of components often leads to re-cabling and other modifications below the raised floor, and these obstructions to airflow need to be removed.
      Match cooling requirements to utility areas. A data center has multiple utility areas, such as seating space, distribution area, UPS area, battery area and fire suppression room. Each has different cooling requirements, and it makes financial sense to customize the level of cooling to each area's requirement in the green data center.
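      Using the one-to-one watts-of-cooling-per-watt-of-equipment assumption from the review step above, the capacity check reduces to a simple comparison (the figures are illustrative):

      def cooling_gap_kw(equipment_power_kw: float, max_cooling_kw: float) -> float:
          """Positive result: cooling shortfall; negative result: headroom.
          Assumes 1 W of cooling needed per 1 W of equipment power."""
          return equipment_power_kw - max_cooling_kw

      # Illustrative: 430 kW of equipment load against 400 kW of installed cooling.
      gap = cooling_gap_kw(430, 400)
      print("shortfall" if gap > 0 else "headroom", f"of {abs(gap):.0f} kW")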
    • Back to Index

    • e) Networking
      Networking equipment is not a primary consumer of data center power. However, its importance lies in the extent to which it can enable greater utilization rates for servers, which account for the bulk of equipment power consumed in data centers. Factors to consider are:
      Use cables with a smaller diameter. This improves airflow and therefore reduces cooling power consumption.
      Select networking products geared towards virtualization. Install networking products that simplify or improve the ability to implement a virtualized environment. For network switches, prefer those that can power off unused switch ports, thereby saving energy (a rough estimate of the savings is sketched after this list).
      Prefer end-to-end cable management solutions. Without end-to-end cable management, typical problems include cables stepped on and piled up in raceways, difficult connector access, and hours wasted tracing cables. These issues not only increase the time required for changes but also block airflow, thereby increasing total cooling requirements.
      Place cables in overhead channels. This frees up floor space and reduces the pressure required to push cool air, thus improving airflow. In addition, horizontal cable trays installed above the racks create a protected pathway for the cables.
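      The saving from powering off unused switch ports can be roughly estimated as follows; the per-port wattage is an assumption, so check vendor datasheets for real figures:

      def annual_port_savings_kwh(unused_ports: int, watts_per_port: float = 2.0) -> float:
          """Rough annual energy saved by powering down unused switch ports.
          watts_per_port is an assumed figure, not a vendor specification."""
          hours_per_year = 24 * 365
          return unused_ports * watts_per_port * hours_per_year / 1000.0

      print(f"{annual_port_savings_kwh(120):.0f} kWh/year")  # 120 idle ports -> about 2102 kWh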

    • f) Plug-and-play components:

      • If new data center construction is being planned, consider the use of plug-and-play technology that offers out-of-the-box green benefits. One example is Microsoft's IT Pre-Assembled Components (ITPAC). This technology uses outside air for primary cooling, obviating the need for mechanical cooling devices. Such products can reduce the typical data center's carbon footprint and consumption of construction materials, making for a significant reduction in PUE.
    • Back to Index
    • Challenges associated with green data center setup 

      There are several challenges to be faced while setting up a green data center. These include the following:
      • The green data center must be designed in a scalable manner, with a view to meeting the organization's requirements both now and in the future.
      • Identifying a single vendor with all the necessary solutions for a green data center may prove difficult.
      • There may be no energy efficiency management document, i.e., a set of policies defining the energy efficiency levels to be maintained for optimal power usage effectiveness (PUE) ratios.
      • There may be no real-time measurement system that monitors power consumption across the various sections of the green data center's infrastructure.
      • A high-density data center comes with its own cost implications. Modern servers have more terminations, leading to increased cable congestion, which restricts airflow to the cabinet. Floor space remains at a premium, necessitating denser cabinets whose cost grows with density.
    • Back to Index
    • Green data center certification 

      Leadership in Energy and Environmental Design (LEED). This certification, awarded by the United States Green Building Council (USGBC), is a benchmark for the design, construction and operation of green buildings. Some of the factors it focuses on are:
      a) Location
      b) Use of local materials
      c) Landscaping and soil conditions
      d) Proximity to fire stations and hospitals
      e) Rainwater harvesting facilities
      f) Solar panels
      There is a caveat, though. LEED recognizes only buildings, that is, the premises. Being LEED certified implies an efficient building but not necessarily an efficient data center, because a data center involves much more than the building: hardware, software, virtualization technologies and so on all play a big role in making it efficient.

    • Certified Energy Efficient Datacentre Award (CEEDA). Launched in December 2010 by the UK-based Chartered Institute for IT (BCS), this certification follows in the footsteps of the LEED standard. It independently verifies the effectiveness of the green measures organizations have taken in their data centers.
      Ratings are awarded based on assessment results by third-party auditors, on the following areas:
      a) Data center building
      b) Data center utilization
      c) Power infrastructure
      d) Cooling infrastructure
      Each certification tier imposes progressively stricter PUE criteria, assessed over a period of up to a year before the audit.
    • Back to Index
  • Further reading on Green Data Center 

    Definition from WhatIs.com: What is Green Data center?

    Tutorial: Green Data Center

    Tip: Green Data Center best practices

    Tip: Indian data center and Green IT

    News: IDFC data center’s efficiency improvement

    News: CEEDA certification

    News: TUV Rheinland

  • Back to Index

 

This was first published in April 2011