Today’s energy costs are a significant line item in the operating expense ledger. Data center operators need to pay close attention to reducing the amount of energy needed in daily operations and consider emerging technologies that can economically supplement energy demands when utility power is strained. These two quandaries can fray the nerves of even the most hardened IT administrator. This month, we’ve reached out to the experts for their advice on both fronts.
Bill Kleyman, virtualization architect, MTM Technologies Inc.
In terms of reducing energy use, server virtualization technology has taken data center consolidation to a new level. Another strategy that plays a big role in energy reduction is smart hardware purchasing–buying technologies that help your data center use less power, cooling and space can reduce costs both immediately and over the long term. The combination of a good integrated infrastructure and a virtualization solution can quickly consolidate several server racks into just a few blade chassis.
Data center resource consumption depends on the amount of equipment located at the site. As processor technologies allow machines to run cooler and faster, administrators can deploy newer hardware to help reduce energy use. Older servers draw more power and generate more heat, while newer machines are better able to handle large workloads and a virtualized environment. As the hardware footprint shrinks, so does data center power consumption.
Virtualization reduces the hardware presence inside a data center and helps it run more efficiently. Working with virtual machines (VMs) or virtual workloads allows for greater flexibility in management, recovery and disaster recovery planning. Some advancements to virtualization include support for greater VM density and better workflow automation. Even further improvements are being made with the actual hardware platforms.
Consider that hardware manufacturers are designing full systems for easy virtualization deployment and management. For example, Cisco’s Unified Computing System isn't only creating a blade environment. The underlying technology allows for hardware profile virtualization. Administrators can take full profiles of the blade and simply copy them to other blades in the environment. From there, engineers can quickly deploy VMs without having to configure the hardware as much.
This enables greater workload automation. If a company has locations in different time zones, it can use hardware and the virtual environment to accommodate users all over the globe. When one set of users finishes their shift, the servers can be quickly re-profiled and made available to remote workers in different parts of the world. The result is less hardware purchased at remote sites and therefore lower data center power demands.
When it comes to easing data center power demands, many technology manufacturers are taking power considerations into their own hands, and IT administrators can focus on integrated server technologies for help. For example, Intel's Power Management technologies go a long way in helping a server control its power consumption. Using the technology with Intel's Xeon processors, administrators can see up to a 20% reduction in server power use with no effect on system runtime. Data center power management can also cap power consumption on lightly used servers. In a test environment, when Power Management was put through its paces, servers saw a 200 W power reduction and an 18% increase in runtime.
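To put that 200 W per-server reduction in perspective, here is a minimal back-of-the-envelope sketch. The server count and utility rate are illustrative assumptions, not figures from the tip:

```python
# Rough annual savings from a 200 W per-server power reduction.
# Assumed values: 100 servers and $0.10/kWh -- both illustrative.
WATTS_SAVED_PER_SERVER = 200
SERVERS = 100                 # assumed fleet size
RATE_PER_KWH = 0.10           # dollars; varies widely by region
HOURS_PER_YEAR = 24 * 365

kwh_saved = WATTS_SAVED_PER_SERVER * SERVERS * HOURS_PER_YEAR / 1000
annual_savings = kwh_saved * RATE_PER_KWH
print(f"{kwh_saved:,.0f} kWh saved, ~${annual_savings:,.0f} per year")
```

Even at a modest utility rate, a fleet-wide cap adds up to tens of thousands of dollars a year, which is why power capping is worth evaluating on anything beyond a handful of racks.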
When we talk about power cogeneration, it's no surprise that Bloom Energy is worth $3 billion, according to Fortune (as of September 2011). Not only is this technology making a lot of headway in the industry, it costs eight to 10 cents per kilowatt-hour to run, making it cheaper than retail electricity in many parts of the country. Customers can buy the system to supplement their current infrastructure (eBay Inc. uses Bloom for 15% of its power demands). The initial capital expenditure can range from $700,000 to $800,000, but for data centers looking to lower their costs and supplement their power, Bloom might be a good option to consider.
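Whether that capital expenditure pencils out depends on the spread between the fuel-cell rate and local grid power. A simple payback sketch follows; the grid rate and served load are assumptions, since the tip gives only the eight-to-10-cent rate and the $700,000-to-$800,000 capex range:

```python
# Rough payback period for an on-site fuel-cell installation.
CAPEX = 750_000            # midpoint of the $700k-$800k range above
FUEL_CELL_RATE = 0.09      # $/kWh, midpoint of the 8-10 cent range
GRID_RATE = 0.13           # $/kWh grid price -- assumed, region-dependent
LOAD_KW = 400              # continuous load served -- assumed
HOURS_PER_YEAR = 24 * 365

annual_kwh = LOAD_KW * HOURS_PER_YEAR
annual_savings = annual_kwh * (GRID_RATE - FUEL_CELL_RATE)
payback_years = CAPEX / annual_savings
print(f"~${annual_savings:,.0f}/yr saved, payback in {payback_years:.1f} years")
```

The takeaway from the arithmetic: in regions with cheap grid power the spread shrinks and the payback period stretches well past the equipment's expected life, which is why siting matters as much as the technology itself.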
Chris Steffen, principal technical architect, Kroll Factual Data
I focus on the data center footprint and infrastructure optimization. Even though server virtualization is now well accepted, very few infrastructure architects are downsizing their data center footprint. So they are still chilling an entire 5,000 sq. ft. data center that now uses only one row of cabinets. There may be return on investment (ROI) in turning off chillers and putting up drywall, but I also think most data center managers are simply unwilling to give up the space.
Temperature is another source of data center power waste. Most server rooms are way too cold, or at least far colder than the recommended operating temperature. There is no reason that anyone walking into the room should need to wear a parka. If that’s the case, you are wasting enormous amounts of energy and freezing your IT staff.
We are working on a dynamic data center project now (and have been for a while). The tools to do this are finally coming around. Microsoft’s System Center Virtual Machine Manager 2012, along with F5 Networks Inc.’s new BIG-IP appliances, will allow us to do some of the required migrations and background operations, and we are actively moving toward reducing the number of VMs running at non-peak times.
I don't see many cost-effective ways to supplement energy yet–the bang for the buck is not there right now, which isn't to say that it will not be eventually. You can make a corporate decision to run on completely wind-generated power, like we did, but that decision comes with a price tag. Using wind-generated power has been 30% more expensive compared to traditional utility power.
We all try to pay attention to the "new" thing, but in a data center environment where uptime is critical, power is one of those things that data center managers just depend on. There isn’t a ton that we can do to ease the problem without spending ridiculous amounts of money. Adding more variables or complexity to the process is not something that I would think most data center managers are interested in, especially given all of the operating variables that already exist in the environment.
Bill Bradford, senior systems administrator, SUNHELP.org
For energy reduction, I'm really looking forward to some of the upcoming server platforms that move away from a few big, beefy processors (like Intel Xeon) and toward many smaller processors, such as those based on the ARM architecture. Apart from virtualization, most server workloads, such as Web serving or email, are at a point where the main processor architecture is irrelevant.
So far, we have not implemented excess capacity to the extent that there are "spare" machines that can power down to save energy during the evening. Larger cloud computing farms may do this, but I've not run into it yet.
In supplementing data center power, I've seen movements towards warmer data centers, where the set point temperatures are in the mid-70s [degrees Fahrenheit], instead of the high 50s to low 60s. This saves energy and doesn't work the cooling equipment as hard, and is still well within the operating temperature envelope of most modern computing equipment.
When you need more power, fuel cells and flywheel storage are options–at least for companies like Google that have the capital to invest in and experiment with unproven technology. However, I don't see such technologies becoming common or mainstream for at least another 10 years.
Robert Rosen, CIO, mainframe user group leader
There are easy guidelines to reduce data center power use. Turn the lights off when no one is in the server room–you'd be amazed at how many people ignore this–or use motion detectors to automate the lights.
Also, increase the server room’s set point temperature. If you carefully control the inlet temperature at all levels of the rack (by using hot/cold aisles), you can actually go up to about 78 degrees Fahrenheit. Organizations can also easily turn off unused equipment, use massive array of idle disks (MAID) storage, and consider ways to use waste heat elsewhere in the building.
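Raising the set point compounds: each degree trims chiller energy. A rough sketch using the oft-cited rule of thumb of about 4% chiller savings per degree Fahrenheit raised; the rule-of-thumb figure, starting set point and chiller consumption are all assumptions, not figures from this tip:

```python
# Rough cooling-energy savings from raising the room set point.
SAVINGS_PER_DEG_F = 0.04         # rule-of-thumb fraction -- assumed
current_setpoint = 68            # degrees F -- assumed starting point
target_setpoint = 78             # upper bound suggested above
cooling_kwh_per_year = 500_000   # assumed annual chiller consumption

degrees_raised = target_setpoint - current_setpoint
# Savings compound per degree rather than adding linearly.
fraction_saved = 1 - (1 - SAVINGS_PER_DEG_F) ** degrees_raised
print(f"~{fraction_saved:.0%} of cooling energy saved "
      f"(~{cooling_kwh_per_year * fraction_saved:,.0f} kWh/yr)")
```

Under those assumptions a 10-degree raise cuts roughly a third of the cooling bill, which is why set point temperature comes up in nearly every answer here.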
A lot of organizations decide to build data centers where power is cheap, but the ROI on other supplemental cogeneration technologies isn't too good at this point. I still think solar will continue to expand and get more efficient, but I’m not sure where the tipping point is.
Chuck Goolsbee, data center manager and SearchDataCenter.com blogger
I see four principal ways to reduce data center power demands:
1. Use aisle containment. I'm always shocked at how many data centers have yet to install containment.
2. Raise your server room temperature set points.
3. Use as much fresh air (free cooling) as you can.
4. Aggressively replace inefficient machines/systems that are identified through a comprehensive power monitoring and management program.
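Step 4 implies comparing machines on work done per watt. A minimal sketch of how power-monitoring data might be screened for replacement candidates; the server names, readings and the half-of-average threshold are invented for illustration:

```python
# Flag servers whose efficiency (work per watt) falls well below the
# fleet average. All readings here are invented sample data.
servers = {
    "web-01":     {"requests_per_sec": 900, "watts": 350},
    "web-02":     {"requests_per_sec": 850, "watts": 340},
    "web-legacy": {"requests_per_sec": 300, "watts": 480},
}

efficiency = {name: s["requests_per_sec"] / s["watts"]
              for name, s in servers.items()}
fleet_avg = sum(efficiency.values()) / len(efficiency)

# Anything under half the fleet average is a replacement candidate.
candidates = [n for n, e in efficiency.items() if e < 0.5 * fleet_avg]
print("Replacement candidates:", candidates)
```

In practice the power readings would come from rack PDUs or baseboard management controllers and the workload metric from the monitoring stack, but the screening logic stays this simple.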
It’s harder to talk about supplementing energy, because the grid itself is homogenized from so many sources that are hard to pin down and are nowhere as efficient or clean as they could be. As a result, the data center industry gets tarred with a broad brush as "dirty" and "inefficient"–despite being one of the only industries in the world aggressively pursuing efficiency and cleanliness! Cogeneration is going to be criticized if it is perceived as anything but squeaky clean, and the "clean" sources (solar and wind) lack both the capacity and consistency required in critical environments. Biomass is likely the best near-term compromise for on-site or near-site cogeneration.
Here in the rural Intermountain West region of the U.S., we have a great untapped power source in small- to medium-scale hydroelectric power from irrigation canals. There are literally hundreds of thousands of locations to place generation stations that can utilize moving/falling water in man-made canals. I'm starting to see people take an interest in this, as it is a clean energy source with no effect on native fish stocks. The only downside is that most canals are dry in the winter months, and the scale is quite small compared to the big river dams here in the West.
Short-term and long-term solutions
Today, it’s much easier to reduce data center power demands than it is to create more power. Using virtualization to consolidate onto energy-efficient servers in a warmer server room will go a long way toward lowering the power bill. Cogeneration is gaining ground, but it will likely take years before wind, solar and biomass systems really make a dent in most facilities–with more exotic cogeneration technologies (like tidal) even further off.
This was first published in October 2011