Storage virtualization technology and its associated groundwork



As storage virtualization technology becomes popular, enterprises should thoroughly complete the groundwork required for migration. Any compromises on this front can cause serious issues down the line. Today, storage virtualization technology has two aspects: the first is the virtualization of products from different vendors (or the same vendor) into a common storage pool; the second is a feature that most vendors offer (such as thin pools) that enables enterprises to optimally utilize their storage resources.


Analyze: Before opting for storage virtualization technology, analyze your existing setup. Know the applications being serviced, storage growth over the last few years, peak workload ratings, and the benefits you derive from the current storage infrastructure.
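One way to act on the growth figures from this analysis is a simple trend projection. The sketch below computes a compound annual growth rate (CAGR) from historical year-end consumed capacity and projects future needs; the capacity figures and the assumption that growth stays constant are illustrative, not from the article.

```python
# Hypothetical capacity history: consumed TB at the end of each year.
# Substitute your own measurements from the analysis step.
history_tb = [40, 52, 68, 90]

years_observed = len(history_tb) - 1
# Compound annual growth rate over the observed period.
cagr = (history_tb[-1] / history_tb[0]) ** (1 / years_observed) - 1

def project(current_tb, growth, years_ahead):
    """Project capacity assuming growth continues at the same annual rate."""
    return current_tb * (1 + growth) ** years_ahead

print(f"Observed CAGR: {cagr:.1%}")
print(f"Projected need in 3 years: {project(history_tb[-1], cagr, 3):.0f} TB")
```

A projection like this is only a planning baseline; feed it peak-workload headroom before sizing the virtualized pool.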

Evaluate: It’s essential to evaluate the various solutions in order to match their offerings with your performance and storage requirements. This ensures that the storage virtualization technology does not become a performance bottleneck in your infrastructure. At the same time, it should be scalable enough to meet your future requirements. As you bring new equipment into your storage virtualization infrastructure, virtualization may limit some of the exclusive features offered by individual vendors, so ensure that you opt for a solution that offers flexibility on this front.

An important point here is the storage virtualization technology’s fault tolerance, since any downtime in this layer brings down the whole infrastructure. Companies should therefore ideally opt for redundant storage virtualization solutions that deliver availability of 99.99% or higher.
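To put the 99.99% figure in perspective, a quick back-of-the-envelope calculation shows what an availability level means in downtime per year, and how a redundant pair improves on a single node. The node availability figure (99.9%) and the independent-failure assumption are illustrative, not vendor specifications.

```python
MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes(availability):
    """Expected unavailable minutes per year at a given availability level."""
    return (1 - availability) * MINUTES_PER_YEAR

# A single virtualization node at 99.9% vs. an active/active pair;
# the pair is down only if both nodes fail independently.
single = 0.999
redundant_pair = 1 - (1 - single) ** 2

print(f"99.99% target:  {downtime_minutes(0.9999):.1f} min/yr of downtime")
print(f"Single node:    {downtime_minutes(single):.0f} min/yr")
print(f"Redundant pair: {downtime_minutes(redundant_pair):.2f} min/yr")
```

Four nines allows roughly 53 minutes of downtime a year, which is why a single non-redundant virtualization head is rarely acceptable in front of an entire storage estate.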

Deployment: After opting for storage virtualization technology, companies should decide on a migration strategy: what to migrate first and in what sequence thereafter. Choosing this order is a real challenge, so start the migration with lightly utilized boxes that host less critical applications. This will help you judge the storage virtualization solution’s performance impact. Based on these results, you can decide which storage box to move next into the storage virtualization chain.
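The sequencing rule above can be expressed as a simple sort: least critical arrays first, and among equally critical arrays, the least utilized first. The array names, criticality scores, and utilization figures below are invented for illustration.

```python
# Hypothetical inventory; criticality: 1 = least critical, 3 = most critical.
arrays = [
    {"name": "array-A", "criticality": 3, "utilization": 0.85},
    {"name": "array-B", "criticality": 1, "utilization": 0.40},
    {"name": "array-C", "criticality": 2, "utilization": 0.30},
    {"name": "array-D", "criticality": 1, "utilization": 0.65},
]

# Migrate lowest criticality first; within a criticality level,
# migrate the least-utilized box first.
migration_order = sorted(arrays, key=lambda a: (a["criticality"], a["utilization"]))

for step, a in enumerate(migration_order, 1):
    print(f"step {step}: {a['name']}")
```

After each step, measure the performance impact before promoting the next, more critical box into the virtualized pool.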

Future requirements: Gather inputs from the various application owners regarding future storage requirements, and categorize the applications in order of their required input/output operations per second (IOPS).
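That categorization can be sketched as a simple tiering pass over the owners' IOPS estimates. The application names, IOPS numbers, and tier boundaries below are assumptions for the example, not standard values.

```python
# Hypothetical projected IOPS per application, from the owners' inputs.
apps = {"oltp-db": 25_000, "mail": 4_000, "file-share": 800, "dwh": 9_000}

def tier(iops):
    """Assumed tier boundaries; tune to your own storage classes."""
    if iops >= 10_000:
        return "tier-1"  # highest-performance storage
    if iops >= 2_000:
        return "tier-2"
    return "tier-3"

by_tier = {}
for name, iops in sorted(apps.items(), key=lambda kv: kv[1], reverse=True):
    by_tier.setdefault(tier(iops), []).append(name)

print(by_tier)
```

Ranking applications this way makes it clear which workloads must land on the fastest pools in the virtualized layer.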

The other type of virtualization referred to earlier is thin provisioning, which most vendors offer under their respective brand names. With this type of storage virtualization technology, application owners are under the impression that they have been allocated a specific amount of storage, while in reality the storage administrator has allocated only part of it. This gives storage administrators the flexibility to utilize storage optimally, since application owners at times request more storage than they need. For example, an Oracle database administrator may ask the storage administrator for a 1 TB LUN but never fully use it; from the storage administrator’s point of view, that capacity is nonetheless consumed. Thin provisioning gives the Oracle DBA the impression that 1 TB has been allocated, whereas in reality the storage administrator has backed it with only 500 GB.

Thin provisioning brings several benefits as well as shortfalls. With this form of storage virtualization, the storage administrator must define proper threshold limits, so that even a sudden burst of writes can be serviced from the allocated capacity without affecting application performance, transparently to the application owner.
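The 1 TB LUN / 500 GB backing scenario and the threshold rule can be modeled in a few lines. This is a minimal sketch of the concept only; the class, sizes, and 80% threshold are assumptions, not any vendor's implementation.

```python
class ThinPool:
    """Toy model of a thin pool: advertised (virtual) vs. physical capacity."""

    def __init__(self, physical_gb, threshold=0.8):
        self.physical_gb = physical_gb
        self.threshold = threshold   # alert when written/physical crosses this
        self.virtual_gb = 0          # sum of advertised LUN sizes
        self.written_gb = 0          # capacity actually consumed by writes

    def create_lun(self, size_gb):
        self.virtual_gb += size_gb   # advertised to the host, not yet backed

    def write(self, gb):
        if self.written_gb + gb > self.physical_gb:
            raise RuntimeError("pool exhausted: add physical capacity")
        self.written_gb += gb
        if self.written_gb / self.physical_gb >= self.threshold:
            print("ALERT: pool past threshold, plan an expansion")

pool = ThinPool(physical_gb=500)   # admin backs the pool with 500 GB
pool.create_lun(1024)              # DBA sees a 1 TB LUN
pool.write(300)                    # only 300 GB actually consumed
print(pool.virtual_gb, pool.written_gb)
```

The danger the threshold guards against is visible in the model: if writes ever approach the physical backing, the pool must be expanded before the application hits the `RuntimeError` equivalent of a write failure.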

About the author: Anuj Sharma is an EMC Certified and NetApp accredited professional. Sharma has experience in handling implementation projects related to SAN, NAS and BURA. He also has several research papers on SAN and BURA technologies, published globally, to his credit.

This was first published in August 2010
