Moore's Law predicts that computing power will double every 18 to 24 months, and it continues to hold. With the advent of virtualization, we've been harnessing that power by putting once-idle clock cycles to work.
New CPUs Enabling Jumps in Performance
Today's data center is built on a virtualization engine running atop dual- and quad-core CPUs from Intel and AMD, whose hardware virtualization hooks let the hypervisor integrate with and control the hardware. Over the last couple of years, Intel has extended its Virtualization Technology (VT) family, the hardware hooks that server hypervisors have been built on, with new capabilities: VT-x, the core processor extensions, has been enhanced to reduce round-trip virtualization latencies; VT-c gives virtual machines more direct access to network connectivity and I/O devices; and VT-d provides directed I/O, allowing I/O devices to be assigned more directly to virtual machines.
Combine these virtualization performance enhancements with the latest Intel E7-series CPUs, which pack up to 10 cores per socket, and a four-socket server easily scales to 80 logical processors with Hyper-Threading. Some comparisons put E7 performance around 35% above the previous generation. AMD isn't left out, with its 16-core Opteron 6200 server CPU and its own AMD-V virtualization technology. The potential of these enterprise CPUs far surpasses what we had even a couple of years ago.
The number of virtual machines you can pack into each U of rack space continues to grow at a staggering rate. With close to one hundred logical processors available in a single machine, you can easily run hundreds of servers on a single host. Unfortunately, Hyper-V 2.0 caps the number of logical processors at 64 and the number of virtual machines per host at 384. That is a huge amount, especially through the lens of only a couple of years ago, but Moore's Law keeps turning yesterday's ceiling into today's middle ground. And when you consider moving large workloads, such as business processing and decision support systems crunching large databases, into your virtual infrastructure, the current limit of four virtual CPUs per virtual machine becomes the real issue. With today's data warehousing and other big data processing, quad-core isn't what it used to be.
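As a back-of-the-envelope illustration, a few lines of Python show how the four-socket E7 server described above runs into the Hyper-V 2.0 host ceiling (the configuration and limits are the figures cited in this article; the variable names are just for illustration):

```python
# Four-socket Intel E7 server: 10 cores per socket, 2 threads per core
# with Hyper-Threading, as described above.
sockets, cores_per_socket, threads_per_core = 4, 10, 2
logical_cpus = sockets * cores_per_socket * threads_per_core
print(logical_cpus)  # 80

# Hyper-V 2.0 host limits cited above.
HV2_MAX_LOGICAL_CPUS = 64
HV2_MAX_VMS_PER_HOST = 384

# Logical processors Hyper-V 2.0 can actually schedule on this host.
usable = min(logical_cpus, HV2_MAX_LOGICAL_CPUS)
print(usable)                 # 64
print(logical_cpus - usable)  # 16 logical processors sit idle
```

In other words, a fifth of the machine's logical processors are beyond the hypervisor's reach before a single virtual machine is placed.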
The New Hyper-V Ceiling
To really take advantage of this year's newest hardware, you'll want to start planning for Hyper-V 3.0. The latest release raises the maximum number of logical processors from 64 to 160, letting you take full advantage of the newest high-end chips, and doubles the maximum host memory to 2 TB. These specifications open the door to consolidating your existing virtual infrastructure onto fewer hosts, with the attendant data center savings in power consumption, physical space and complexity.
Look at Hyper-V 3.0's per-virtual-machine hardware increases and the possibility of virtualizing larger workloads becomes very real. Those opportunities have been held back by two concerns: virtualization overhead and resource ceilings. The overhead concern has largely been addressed by the latest hardware virtualization optimizations and properly managed host environments. The resource ceilings, however, still worry people like database administrators whose datasets are growing at exponential rates. For guaranteed scale-up at the virtual machine level, Hyper-V 3.0 goes from four virtual CPUs to a maximum of 32 virtual CPUs per virtual machine, and maximum memory per virtual machine goes from 64 GB to 1 TB.
Pair these significant gains with the expanded virtual hard disk format, which raises the old 2 TB VHD limit to 64 TB, and you have true scale-up capabilities appropriate for many of the biggest workloads running on standard servers today.
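To put the jump in perspective, here is a minimal Python sketch comparing the per-virtual-machine ceilings: Hyper-V 2.0's 4 vCPUs, 64 GB of RAM and 2 TB VHD against Hyper-V 3.0's 32 vCPUs, 1 TB of RAM and 64 TB VHDX.

```python
# Per-VM scale-up limits: Hyper-V 2.0 vs. Hyper-V 3.0.
hyperv2 = {"virtual CPUs": 4,  "memory (GB)": 64,   "virtual disk (TB)": 2}
hyperv3 = {"virtual CPUs": 32, "memory (GB)": 1024, "virtual disk (TB)": 64}

for resource in hyperv2:
    factor = hyperv3[resource] // hyperv2[resource]
    print(f"{resource}: {hyperv2[resource]} -> {hyperv3[resource]} ({factor}x)")
```

Every axis grows by at least 8x, and the storage ceiling by 32x, which is why the "scale-up" label finally fits.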
Scaling out versus scaling up has been an ongoing difficulty for some large workloads that simply work better within a single server. Now you can use standard servers and still consolidate systems that require scale-up in CPU, storage and memory. Taking full advantage of the new generation of hardware will require looking into the next generation of virtualization. With every iteration of hypervisor technology, the ability to squeeze better performance out of virtual machines improves, turning virtualization naysayers into supporters once they realize the benefits of virtualizing their systems far outweigh the costs. This year, be prepared to move, because, once again, it's time to let Moore's Law take its course.
Eric Beehler has been working in the IT industry since the mid-'90s, and has been playing with computer technology since well before that. He currently provides consulting and training through his co-ownership in Consortio Services, LLC.
This was first published in April 2012