Four network latency gotchas of private cloud


If you believe the hype of virtualization platform vendors, you’d think the cloud is a perfect host for every virtual machine. Whether you’re connecting local and remote assets using VMware vCloud Connector or clicking the “Create Cloud” button in Microsoft System Center Virtual Machine Manager 2012, moving VMs to a cloud has never been easier.

But the easy option isn’t always the best option. Before pushing any VMs to the cloud, IT admins need to determine whether it even makes sense. And such decisions mirror those of server virtualization -- determining what to move from physical to virtual (P2V). With cloud, P2V has become V2C (virtual to cloud).

Network latency: The efficiency killer
When determining whether a VM is a good match for the cloud, network latency becomes a major concern and stands to be the biggest cloud efficiency killer. Here are the top four network latency “gotchas” to keep in mind when you’re making your next V2C decision.

Gotcha #1: Your Internet connection. Offloading the processing of VM activities to a cloud provider can free up in-house resources. However, your network connection can create a bottleneck when trying to relay activity results back to the data center.

Keep in mind the amount of throughput each VM needs when building network capacity between your data center and the Internet. Network measurement tools are a must to ensure efficiency.
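A back-of-the-envelope check can make the "throughput per VM" advice concrete. The sketch below sums each cloud-bound VM's expected demand against the Internet link; the link speed, headroom target, and VM figures are assumptions for illustration, not values from the article.

```python
# Hypothetical capacity check: compare aggregate VM throughput demand
# against the data center's Internet link. All numbers are illustrative.

LINK_CAPACITY_MBPS = 1000   # assumed 1 Gbps Internet connection
HEADROOM = 0.70             # plan to use at most 70% of the link

vm_throughput_mbps = {      # per-VM demand estimates (made up)
    "web-frontend": 120,
    "reporting-db": 300,
    "file-sync": 80,
}

total = sum(vm_throughput_mbps.values())
budget = LINK_CAPACITY_MBPS * HEADROOM

print(f"Aggregate VM demand: {total} Mbps (budget {budget:.0f} Mbps)")
if total > budget:
    print("Link is a likely bottleneck -- keep these VMs local or upgrade.")
else:
    print("Demand fits within the link budget.")
```

Swapping in measured per-VM numbers from a monitoring tool turns this from a guess into a planning aid.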

Gotcha #2: Your traffic patterns. In addition to measuring the aggregate network requirements of each cloud candidate, you must identify the locations and patterns of network traffic. A slow Internet connection becomes less critical when most traffic flows between colocated VMs.

Network flow monitoring tools and protocols, such as SolarWinds’ NetFlow Traffic Analyzer and the sFlow, J-Flow and IPFIX export formats, make it easier to obtain traffic pattern details. Such tools can help isolate internal cloud traffic from external Internet traffic.

Until recently, tools for measuring network flow were available only for large enterprise customers with the budget for expensive equipment. Affordable flow monitoring tools from Ipswitch, SolarWinds and other vendors now are accessible to even very small IT shops. Open source monitoring tools are also available for those with limited budgets.
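As a rough sketch of what this analysis yields, the following classifies flow records as internal (VM-to-VM) or external (Internet-bound) by private address range. The flow tuples are invented stand-ins for what a flow collector would export, not real NetFlow output.

```python
# Minimal sketch: split flow records into internal and external traffic
# using private (RFC 1918) address ranges. Sample data is made up.
import ipaddress

def is_internal(ip: str) -> bool:
    return ipaddress.ip_address(ip).is_private

# (src, dst, bytes) -- stand-ins for records a flow collector would export
flows = [
    ("10.0.1.5", "10.0.2.9", 400_000),      # VM to colocated VM
    ("10.0.1.5", "93.184.216.34", 50_000),  # VM to the Internet
    ("10.0.3.2", "10.0.1.5", 120_000),
]

internal = sum(b for s, d, b in flows if is_internal(s) and is_internal(d))
external = sum(b for s, d, b in flows if not (is_internal(s) and is_internal(d)))

print(f"internal: {internal} bytes, external: {external} bytes")
```

A cloud candidate whose traffic is mostly internal is far less sensitive to a slow Internet link than one that constantly talks to outside endpoints.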

Gotcha #3: Your usage patterns. While it may seem obvious, business users’ usage patterns can also affect a cloud-connected network. For example, hosted file services and cloud-based apps are becoming more prominent with the rise of Microsoft Office 365 and Google Apps for Business.

While office applications in the cloud offload the administration of complex services, they do so by relocating storage into the cloud. Highly distributed businesses that aren’t structured around a brick-and-mortar office infrastructure are particularly suited for moving these services to a public cloud.

On the other hand, businesses with well-established data centers and a central location may want to think twice. The cost and time needed to upload and download documents from a cloud service can easily outweigh the benefits.
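The upload/download trade-off comes down to simple arithmetic. Here is a minimal sketch of the transfer-time math, with the file-set size and uplink speed assumed purely for illustration.

```python
# Back-of-the-envelope transfer time: how long does it take to push a
# document set to a cloud service? Figures below are assumptions.

def transfer_seconds(size_gb: float, link_mbps: float) -> float:
    bits = size_gb * 8 * 1000**3      # decimal GB to bits
    return bits / (link_mbps * 1_000_000)

# e.g., 50 GB of office documents over a 100 Mbps uplink
secs = transfer_seconds(50, 100)
print(f"{secs / 3600:.1f} hours")
```

Running the same math against your actual data volumes and measured uplink speed quickly shows whether a cloud file service is practical for a centralized office.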

Gotcha #4: Your provider-to-provider networking. Companies hoping to eliminate the risk that a single cloud provider outage disrupts IT operations look to end-to-end high availability, often by spreading workloads across multiple providers. That architecture makes the network links between providers part of the latency equation.

This cloud-to-cloud network latency can be the most challenging to characterize prior to implementation. There are few effective tools for measuring provider-to-provider throughput short of placing a few servers in each location and monitoring the traffic between them. Even so, IT shops with extreme high-availability requirements shouldn’t neglect monitoring the connections among providers.
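The "throw a few servers in each location" approach can be as simple as timing TCP connects to a probe endpoint. The sketch below uses a local listener so the example is self-contained; in a real deployment, the host and port would point at a probe VM in each provider.

```python
# Rough sketch: measure average TCP connect round-trip time to a probe
# endpoint. The endpoint here is a local listener for self-containment;
# in practice it would be a VM running in another cloud provider.
import socket
import threading
import time

def measure_connect_rtt(host: str, port: int, samples: int = 5) -> float:
    """Average TCP connect time in milliseconds."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        times.append((time.perf_counter() - start) * 1000)
    return sum(times) / len(times)

# Local stand-in for a probe server in a second provider
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(16)
port = listener.getsockname()[1]

def accept_loop():
    while True:
        try:
            conn, _ = listener.accept()
            conn.close()
        except OSError:
            break  # listener closed

threading.Thread(target=accept_loop, daemon=True).start()

rtt_ms = measure_connect_rtt("127.0.0.1", port)
print(f"average connect RTT: {rtt_ms:.2f} ms")
listener.close()
```

Scheduling a probe like this between provider regions and graphing the results over time is a crude but workable substitute for the purpose-built tooling that doesn't yet exist.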

Resource-bound becomes network-bound
IT’s glacial shift from server virtualization to a cloud-friendly architecture has changed where bottlenecks exist. Early virtualization environments were largely resource bound, suffering from shortfalls in processor, memory and storage capacity, but were generally well connected via the network.

While the cloud effectively removes resource boundaries, it shifts the bottleneck to the network connecting local equipment to cloud resources. As a result, an investment in network monitoring technology is a good bet for future cloud builds.


Greg Shields, Microsoft MVP, is a partner at Concentrated Technology. Get more of Greg's Jack-of-all-trades tips and tricks at

This was first published in April 2012
