LAS VEGAS – If enterprises want the agility of public cloud providers such as Amazon, Facebook and Google, they must completely rethink their IT infrastructure and operations.
In a keynote presentation at the Gartner Data Center Conference here, research vice president Ray Paquet outlined the key ways that cloud providers differ from enterprise IT – and by extension, what enterprises can do to be more cloud-like.
"By 2017, the major public cloud architectures will be common architecture in enterprises," Paquet said. But cloud architectures are "fundamentally different than those found in enterprises today."
Public clouds, unlike enterprise IT, are designed to scale horizontally. They adopt a shared-nothing architecture and use asynchronous communications. Applications are hardware fault tolerant – that is, a software layer provides protection against individual hardware failures – and absolutely everything is monitored. When it comes to cost, cloud providers' modus operandi is "to look for the cheapest possible hardware, not the best, and certainly not the most expensive."
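The fault-tolerance idea above can be sketched in a few lines: instead of buying redundant hardware, a client library absorbs the failure of any single node by retrying against replicas. This is a hypothetical illustration – the host names and the `fetch_with_failover` helper are invented for the example, not taken from any provider's actual stack.

```python
REPLICAS = ["node-a", "node-b", "node-c"]  # assumed replica set

def fetch_with_failover(key, read_fn, replicas=REPLICAS):
    """Try each replica in turn; a dead node is skipped, not a failure."""
    last_error = None
    for host in replicas:
        try:
            return read_fn(host, key)
        except ConnectionError as err:
            last_error = err  # node down - fail over to the next replica
    raise last_error

# Simulated backend: node-a is "down", node-b answers.
def fake_read(host, key):
    if host == "node-a":
        raise ConnectionError(f"{host} unreachable")
    return f"value-of-{key}@{host}"

print(fetch_with_failover("user:42", fake_read))
```

Because the software layer hides the dead node, the hardware underneath can be the cheapest available rather than the most reliable.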
"This focus on scale, intelligent software, and driving out cost, results in phenomenal levels of agility," Paquet said.
He also compared and contrasted public cloud and enterprise approaches to various data center components, including servers, storage, networks, virtualization, data center facilities, and applications.
"They're effectively buying last year's model because it already has all the cost driven out of it. Good enough and cheap – that's what drives their thinking."
Likewise, when it comes to storage architectures, public cloud providers also shun enterprise mainstays. The predominant storage architecture is direct attached storage (DAS) plus a distributed or object-based file system that provides a global namespace.
Public cloud providers tend not to use SANs or arrays the way enterprises do – and, perhaps most shocking to enterprise storage administrators, they do not use RAID, Paquet said, which shaves 25% right off the top of storage costs. "But they replicate the data from one site to two locations," which takes care of availability.
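The replication approach Paquet describes – one site plus two more copies, rather than RAID within a box – can be sketched as follows. The site names and the `put`/`get` helpers are hypothetical, used only to show the pattern: losing any one disk or site still leaves full copies elsewhere.

```python
SITES = {"us-east": {}, "eu-west": {}, "ap-south": {}}  # assumed site map

def put(key, data, sites=SITES, copies=3):
    """Store `copies` replicas of an object across distinct sites."""
    for site_name in list(sites)[:copies]:
        sites[site_name][key] = data

def get(key, sites=SITES):
    """Read from the first site that still holds the object."""
    for store in sites.values():
        if key in store:
            return store[key]
    raise KeyError(key)

put("img-001", b"photo bytes")
del SITES["us-east"]["img-001"]   # simulate losing one site's copy
print(get("img-001"))             # still served from another site
```

Whole-object replication across sites trades extra raw capacity for simplicity: plain direct-attached disks, no RAID controllers, and availability even when an entire site is lost.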
In terms of vendors, Paquet said you will see no Tier 1 storage vendors in the public cloud.
"The major vendors have not cracked the hyperscale market at all," Paquet said. "Their preference is to throw x86 server hardware at everything they can."
Expanding on the theme, "public cloud providers do not use virtualization nearly as much as enterprises do," Paquet said. "If an application is scale-out, shared-nothing, hardware fault-tolerant, and it scales beyond a single server, why do I need a hypervisor? You don't."
However, public cloud providers do resell virtualization to enterprises as part of their Infrastructure as a Service (IaaS) offerings. But there again, they do things differently from enterprises, steering clear of commercial virtualization products from VMware and opting for the open source KVM and Xen hypervisors.
Data center facilities are another area where public cloud providers and enterprises differ, with cloud providers again focused on driving down costs.
To do so, they place data centers where power is cheapest – often right next to the utility plants – and make heavy use of free cooling, Paquet said. Architected for massive scale, many use 48-volt direct current to the servers and forgo uninterruptible power supplies (UPS) in favor of server-specific battery backups, although Paquet acknowledged that these are probably impractical for most enterprise users.
All the aforementioned innovations flow out of public cloud providers' profoundly different approach to architecting applications.
"The truly transformative layer of the cloud comes at the application layer. That's what will get us to the cloud, and will make us the most agile," said Paquet.
Cloud applications are fundamentally different from classic enterprise workloads. They are architected to scale out, share nothing, remain stateless and communicate asynchronously, with consistency arriving "eventually." For better performance, cloud designers use application tiering and caching, and will throw hardware such as SSDs and flash at the problem to cut latency.
But again, the idea is to use resources judiciously, so as not to over-engineer the environment.
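The caching and tiering pattern mentioned above is commonly implemented as "cache-aside": check a fast in-memory tier first, and only fall through to the slower backing store on a miss. This is a minimal hypothetical sketch – `slow_db_read` stands in for whatever slower storage tier sits underneath.

```python
import time

cache = {}  # fast in-memory tier

def slow_db_read(key):
    time.sleep(0.01)  # stand-in for a slower storage tier
    return f"row-{key}"

def read(key):
    if key in cache:           # fast path: served from the cache tier
        return cache[key]
    value = slow_db_read(key)  # slow path: fetch from the backing store
    cache[key] = value         # populate the cache for next time
    return value

print(read("42"))  # first read misses and goes to the store
print(read("42"))  # second read is served from the cache
```

The judicious-use point applies here too: the cache tier is sized for the hot working set, not the whole data set, so expensive fast media is only bought where it pays off.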
Finally, enterprises and public cloud providers differ in the speed with which they deliver and update applications.
For developers, the goal is to go "faster, faster, faster," Paquet said. "But in [traditional] operations, we want things slow, because we know that if no one makes another change, things might stay running," Paquet said.
Public cloud providers have adopted DevOps as a way to fix that fundamental "Hatfields and McCoys" conflict between development and operations. DevOps practices such as linking toolchains and making extensive use of automation, monitoring and configuration management make it possible to implement changes faster – "and to back out of those changes when they don't work out."
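The "back out of changes" idea can be sketched as an automated deploy step that verifies a change with a health check and rolls back on failure. The `deploy` function and the config shape are invented for illustration; real pipelines wire this into their monitoring and configuration-management tooling.

```python
def deploy(new_config, current_config, health_check):
    """Return the live config: the new one if healthy, else the old one."""
    if health_check(new_config):
        return new_config       # change verified by monitoring - keep it
    return current_config       # check failed - automated rollback

old = {"workers": 4}
bad = {"workers": 0}            # a bad change: zero workers
healthy = lambda cfg: cfg["workers"] > 0

print(deploy(bad, old, healthy))  # the bad change is rolled back
```

Because the rollback is automated rather than a manual firefight, teams can push changes frequently without the "slow is safe" mindset Paquet attributes to traditional operations.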
In part two of this article, learn more about how public cloud providers approach applications, "big data," equipment and IT staffing.