
Patni shares server virtualization basics

At Patni, we looked at consolidating our infrastructure using server virtualization in 2008. One of our primary requirements was to accommodate legacy applications as well as host diverse customer environments on the virtualized server setup. After analyzing our need to implement server virtualization for applications and for development and test environments, there were a few server virtualization basics that needed to be taken into account. These three server virtualization basics were:

 

Server virtualization basic 1: Understanding the current applications and processing environments

The foremost server virtualization basic involved conducting a thorough audit of the application stack and processing environments to identify dependencies on the physical world. For instance, some legacy applications built eight to 10 years earlier had physical dependencies, with assumptions about data and application locations, data processing and transaction locations hardcoded into them. Access to these applications was tied to their physical location, and hence they had to be reworked to suit the virtualized server environment.
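
By way of illustration, an audit script along the following lines can flag such hardcoded dependencies in application configuration files. This is a minimal sketch, not the tooling used at Patni; the directory root, file extensions, mount points and hostname pattern are hypothetical and would have to be adapted to the real environment.

```python
#!/usr/bin/env python3
"""Sketch: flag hardcoded physical dependencies in application config files."""
import re
from pathlib import Path

# Patterns that typically indicate a dependency on a specific physical box.
PATTERNS = {
    "ip_address": re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"),
    "fixed_path": re.compile(r"[A-Za-z]:\\|\\\\|/mnt/|/data/"),      # drive letters, UNC shares, mount points
    "hostname": re.compile(r"\b[a-z0-9-]+\.corp\.example\.com\b"),   # hypothetical corporate domain
}

def audit(root="/opt/apps", extensions=(".cfg", ".ini", ".properties", ".xml")):
    """Walk the (hypothetical) application root and report suspicious lines."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix.lower() not in extensions:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for name, pattern in PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, name, line.strip()))
    return findings

if __name__ == "__main__":
    for path, lineno, kind, text in audit():
        print(f"{path}:{lineno} [{kind}] {text}")
```

Each finding is then a candidate for rework before the application is moved onto virtual servers.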

The applications had to be tested before going ahead with the server virtualization deployment. Operating environments and scripts written to automate job processing had to be tested in the virtual environment and validated.

Ignoring this server virtualization basic would have meant that applications dependent on the physical environment either failed to run in the virtual infrastructure or suffered degraded response times. Users would have been unable to access such applications in the virtual environment because of their dependencies on the physical infrastructure.

 

Server virtualization basic 2: Upgrade or changes to legacy hardware, firmware and applications

An important server virtualization basic stems from the consolidation itself: users who accessed applications over a local area network would now probably access them over a wide area network. We had to determine whether changes were required at the network level. Although minimal, these changes had to be understood and benchmarked to ensure that the user experience was not affected.
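
A simple benchmark along these lines can capture that difference. The sketch below times a basic HTTP request against the application and compares the median against the pre-consolidation figure; the URL, sample count, LAN baseline and 20% threshold are illustrative placeholders, not Patni's numbers.

```python
#!/usr/bin/env python3
"""Sketch: compare application response time over the WAN against a LAN baseline."""
import statistics
import time
import urllib.request

def sample_response_times(url, samples=20):
    """Return round-trip times (seconds) for a simple GET against the application."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as response:
            response.read()
        timings.append(time.perf_counter() - start)
    return timings

if __name__ == "__main__":
    LAN_BASELINE = 0.150  # seconds, measured before consolidation (placeholder figure)
    timings = sample_response_times("http://app.example.com/login")  # hypothetical endpoint
    median = statistics.median(timings)
    print(f"median response time: {median:.3f}s (baseline {LAN_BASELINE:.3f}s)")
    if median > LAN_BASELINE * 1.2:  # allow roughly 20% degradation over the LAN figure
        print("WARNING: WAN response time exceeds the acceptable degradation threshold")
```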

The network switches, storage boxes, operating systems and server hardware were very old and not supported by the virtualization technologies we were planning to implement. Anticipating that future versions would not support these components either, we upgraded them to match the server virtualization technologies.

Not adhering to this server virtualization basic would have affected application response times, or prevented some groups of users from accessing the applications altogether, post-implementation. We would also have been unable to fully exploit the virtual infrastructure, as some parts would have remained physical, negating the very idea of virtualization.

 

Server virtualization basic 3: Sizing of resource capacities

This was perhaps the most critical server virtualization basic. While we had a fairly good idea of the computing capacity each server would need when running virtualized, the difficult part was estimating the capacity of the larger physical servers that would host the virtual infrastructure. Also, since we would simultaneously consolidate the server sprawl, if individual physical server capacities were not provisioned intelligently, the aggregate capacity available to host the virtual infrastructure would be inadequate. If the server capacities are too low, the next virtual server cannot be provisioned because the infrastructure runs out of physical resources.
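
A back-of-the-envelope check of this kind can be expressed in a few lines of code. The figures below are purely illustrative; the point is simply to compare the aggregate peak demand of the workloads against the usable capacity of the consolidated hosts, with some headroom reserved.

```python
#!/usr/bin/env python3
"""Sketch: aggregate capacity check for a server consolidation plan."""

# Peak demand of each workload to be virtualized: (vCPUs, RAM in GB). Illustrative only.
workloads = {
    "erp-app":      (8, 32),
    "erp-db":       (16, 64),
    "build-server": (4, 16),
    "test-env-1":   (2, 8),
    "test-env-2":   (2, 8),
}

# Candidate physical hosts: (physical cores, RAM in GB). Illustrative only.
hosts = {
    "host-a": (24, 128),
    "host-b": (24, 128),
}

HEADROOM = 0.8  # plan to commit at most 80% of raw capacity, keeping room for spikes

need_cpu = sum(cpu for cpu, _ in workloads.values())
need_ram = sum(ram for _, ram in workloads.values())
usable_cpu = sum(cpu for cpu, _ in hosts.values()) * HEADROOM
usable_ram = sum(ram for _, ram in hosts.values()) * HEADROOM

print(f"CPU: need {need_cpu} cores, usable {usable_cpu:.0f}")
print(f"RAM: need {need_ram} GB, usable {usable_ram:.0f} GB")
if need_cpu > usable_cpu or need_ram > usable_ram:
    print("Undersized: the next virtual server could not be provisioned.")
```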

In a virtual server environment, all machines do not run on a single physical host. Groups of virtual machines are provisioned across the physical servers to provide some level of redundancy and failover capability. This also gives the ability to migrate virtual server loads from one physical server to another when the former reaches its peak capacity, and thereby perform effective load balancing.
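
The same idea can be extended into a rough N+1 check: if any one physical server fails, can the remaining servers absorb its virtual machines? The placement and capacities below are hypothetical placeholders, shown only to make the arithmetic concrete.

```python
#!/usr/bin/env python3
"""Sketch: N+1 failover check -- can surviving hosts absorb the VMs of any one failed host?"""

# RAM capacity (GB) of each physical host, and the RAM demand (GB) of the VMs placed on it.
# All figures are hypothetical placeholders.
host_capacity_gb = {"host-a": 128, "host-b": 128, "host-c": 128}
placement = {
    "host-a": [32, 16, 16],
    "host-b": [64, 8],
    "host-c": [32, 16, 8],
}

def survives_single_failure():
    """Simulate the failure of each host in turn and check the spare capacity elsewhere."""
    for failed in placement:
        displaced = sum(placement[failed])
        spare = sum(
            host_capacity_gb[host] - sum(vms)
            for host, vms in placement.items()
            if host != failed
        )
        if displaced > spare:
            print(f"Losing {failed} displaces {displaced} GB but only {spare} GB is spare")
            return False
    return True

print("N+1 safe" if survives_single_failure() else "Not N+1 safe: resize hosts or add one")
```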

Had we ignored this server virtualization basic, we would have been unable to migrate load from a physical server that had reached its peak capacity to the next one, which would have been undersized. This would have led to performance bottlenecks, potentially degrading the performance of the entire virtualized infrastructure and the user experience.

 

About the author: Satish Joshi is the Executive Vice President and Global Head – Technology & Innovation at Patni, responsible for rendering specialized enterprise-wide technology services, besides developing, nurturing and managing innovation. He is also responsible for developing and driving R&D initiatives to provide cutting-edge business solutions using new and emerging technologies. Joshi has been with the company since 1983. Prior to Patni, he worked with the Tata Institute of Fundamental Research.

(As told to Harshal Kallyanpur)

This was first published in January 2011
