Although 2010 budgets are still tight, many data center managers have found that the purse strings are starting to loosen. However, there is a catch -- data center managers must prove that a technology offers a measurable bang for the buck before the bean counters release any of those bucks for a technology purchase. Luckily, data center managers have found a host of technologies that enable them to modernize the data center while still meeting those cost savings objectives -- with power and cooling enhancements leading the list.
However, determining the benefits of power-saving technologies and the related reduction in cooling needs is not an exact science. What's more, the associated cost savings depend upon a number of factors, including the history of the existing data center as well as anticipated future needs. Regardless of the driving factors, data center managers need to approach upgrade concepts with a firm foundation of knowledge. Therefore, the path to an upgrade starts with an audit.
The audit process will uncover critical nuggets of information that will determine the feasibility of any data center redesign as well as the background information needed to make decisions on product selection. There are some critical details the audit should include -- namely, current loads (storage and CPU utilization), equipment in place, maintenance costs, minimum and maximum activity loads, and rack density.
Utilization proves to be one of the most telling metrics an audit can uncover.
In a low-utilization scenario, deploying virtualization accomplishes many of the same goals -- servers can be consolidated, the number of racks cut and the overall power and cooling footprint reduced. Either situation proves that auditing can lead to increased ROI and reduced total cost of ownership (TCO) just by solving a common issue. Although that may be a simplified example, the logic of assessing, addressing and improving data center capabilities rings true.
The real secret behind power reduction and cooling efficiency ROI comes from a single concept: density. Simply put, increasing data center density using newer, more efficient technologies accomplishes two primary goals: reduced square footage requirements and reduced power consumption. That combination has a direct correlation with cooling needs. However, there are a few catches to be aware of behind the concept of increased density, including specific rack power and cooling needs. For example, if multiple racks are consolidated into single-rack solutions by using server or storage blades, the power consumption of that single rack may increase beyond the original operational envelope, and the cooling needs for that individual rack may also increase. Although rack consolidation decreases the overall power and cooling needs of the data center, a single rack may have increased needs. With that in mind, it becomes critically important to baseline the original power consumption of the rack (as well as cooling demand) and then calculate the demand for the replacement equipment. That information will be used to size the reconfigured rack, making sure the rack does not place more demand on the external infrastructure (rack power and environmental demands) than it was originally designed for.
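The sizing check described above can be sketched in a few lines of Python. This is a minimal illustration, not a sizing tool, and every figure in it (the per-chassis loads, the rack's design rating, the function name `check_rack`) is hypothetical:

```python
# Hypothetical sizing check: before consolidating several racks into one
# blade-dense rack, compare the projected IT load of the replacement
# equipment against the power envelope the rack was designed for.

def check_rack(new_loads_kw, design_power_kw):
    """Sum the projected per-device loads (kW) and report whether they
    fit within the rack's original design envelope."""
    total_kw = sum(new_loads_kw)
    return total_kw, total_kw <= design_power_kw

# Three blade chassis drawing 4.0, 3.5 and 2.5 kW in a rack rated for 12 kW
total_kw, fits = check_rack([4.0, 3.5, 2.5], 12.0)
```

A real assessment would also model the matching cooling demand, since the heat a denser rack rejects rises in step with its power draw.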
All of those elements contribute to ROI calculations, which, in the simplest form, amount to the costs of the upgrades versus the savings offered by the upgrades, both of which are measurable elements. However, it is critical to make sure that all of the representative data is collected to validate lowered TCO. Managers assembling the ROI argument will need to include equipment costs, downtime costs, personnel costs and other ancillary costs associated with an upgrade.
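In its simplest form, that cost-versus-savings comparison reduces to a payback period. The sketch below uses invented figures purely for illustration:

```python
# Simplest-form ROI arithmetic: total upgrade costs divided by annual
# savings gives a payback period in years. All figures are hypothetical.

def simple_payback_years(upgrade_cost, annual_savings):
    """Years needed for the projected savings to cover the upgrade cost."""
    return upgrade_cost / annual_savings

# Equipment, downtime and personnel costs (illustrative only)
total_cost = 120_000 + 15_000 + 10_000
payback = simple_payback_years(total_cost, 58_000)  # 2.5 years
```

The point of itemizing equipment, downtime, personnel and ancillary costs is that leaving any of them out of `total_cost` understates the payback period and overstates the ROI.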
Calculating the anticipated savings proves to be a little more complex. Here, managers will need to measure the current electric loads of the equipment to be replaced and then assign a cost to those loads. To calculate the anticipated loads, managers will have to rely on information provided by the vendors. However, a simple way to judge the savings based upon a percentile is to look at vendor specifications for the old equipment's power demands and compare that to vendor specifications of the new equipment's power demands. Calculations based on those elements should result in a reasonably accurate savings percentage that can be applied to current costs to determine future savings.
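The vendor-spec comparison described above amounts to computing a percentage reduction and applying it to current costs. A minimal sketch, with all numbers invented for illustration:

```python
# Derive a savings percentage from old vs. new vendor nameplate power
# figures, then apply it to the current annual power spend.

def savings_fraction(old_spec_watts, new_spec_watts):
    """Fractional power reduction implied by the vendor specifications."""
    return (old_spec_watts - new_spec_watts) / old_spec_watts

frac = savings_fraction(8_000, 5_600)       # 0.30, i.e. a 30% reduction
current_annual_power_cost = 40_000          # hypothetical current spend
projected_savings = frac * current_annual_power_cost
```

As the article notes, nameplate figures are vendor maximums rather than measured loads, so the result is a reasonable estimate rather than a guarantee.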
Data center managers seeking to re-engineer for improved ROI on power and cooling will need to become familiar with the concept of power usage effectiveness (PUE). PUE is a ratio that measures the total power required for the facility divided by the power required for the IT equipment. A hypothetical value of 1.0 is perfect and unattainable, simply because that would mean that only the IT equipment, and nothing else in the facility (no lights, no environmental systems, etc.), would consume energy. Most data centers see a PUE ranging from 2 to 3, meaning that the total power demand for a data center is 2 to 3 times what is needed solely for the IT equipment.
The reason that PUE has become so important is that it can indicate how little or how much a specific power-saving technology affects the bottom line. Ideally, data center managers will want to push the PUE ratio below 2. However, if a PUE exceeds 3, power-saving IT equipment may only have a marginal effect on cost savings, since equipment other than IT is consuming most of the power. In that case, it may be more appropriate to focus on reducing the operational costs of non-IT equipment before approaching any IT technology engineering. When PUE ratios are lower, IT equipment savings have a larger impact on operational budget costs. Most data center managers will find that determining real-world IT power consumption and cooling demand has to be balanced against PUE ratios to determine if re-engineering can deliver the savings needed in a reasonable amount of time to justify the initial costs.
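The PUE calculation and the rough triage it supports can be sketched as follows, using made-up meter readings:

```python
# Power usage effectiveness: total facility power over IT power.
# The readings below are illustrative, not from any real facility.

def pue(total_facility_kw, it_kw):
    """PUE ratio; 1.0 would mean only IT gear consumed power."""
    return total_facility_kw / it_kw

ratio = pue(600.0, 250.0)   # 2.4, inside the typical 2-to-3 range

# Rough triage based on the guidance above
if ratio > 3:
    focus = "non-IT infrastructure first"   # IT savings would be marginal
elif ratio >= 2:
    focus = "IT and facility efficiency together"
else:
    focus = "IT equipment"
```

Because PUE is a ratio, it rewards cutting facility overhead (cooling, lighting, power distribution losses) just as much as cutting IT load, which is why a facility with a PUE above 3 should look at its non-IT equipment first.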
This was first published in June 2011