What you'll learn: Unified storage reduces hardware and management requirements by enabling both block and file data to be stored in a single system and accessed through standards-based protocols such as CIFS and NFS (for file data) and Fibre Channel or iSCSI (for block data). This tip examines key considerations for implementing, deploying and managing a unified storage system.
Unified storage benefits
There are numerous benefits associated with deploying unified storage systems in a data storage infrastructure, including:
- The ability to plan overall storage capacity consumption: Deploying a unified storage system takes away the guesswork associated with planning for file and block storage capacity separately.
- Increased utilization, with no stranded capacity. Unified storage eliminates the capacity utilization penalty associated with planning for block and file storage support separately -- users don't need to worry about overbuying to support one protocol and under-buying to support another.
- Pooled storage flexibility. Users can allocate storage to support application requirements regardless of whether the application requires block or file data access.
- Increased support for server virtualization initiatives. Oftentimes, users deploy server virtualization environments that require block-based raw device mapping (RDM) for performance reasons. Unified storage gives users a choice regarding how they store virtual machine (VM) data without having to separately buy SAN and NAS capacity.
While it's likely that specialty storage systems will be used for tier one applications with specific performance needs, adoption trends indicate that over time unified storage will displace specialty SAN and NAS systems for many second-tier applications. ESG research conducted in late 2008 indicates users are adopting unified storage in droves, with nearly 70% of those surveyed either in the process of implementing or planning to implement unified storage solutions. The primary driver appears to be greater storage efficiency.
Implementing unified storage
You can implement unified storage in a number of ways. Users can buy systems that support both block and file data, or they can deploy a gateway approach. A gateway allows users to add a specialty file server to a storage system they already own, possibly extending the useful life of the system and better balancing asset usage. Gateways are available from well-known vendors such as EMC Corp., Hewlett-Packard (HP) Co. and NetApp.
While gateway deployments were pretty scarce a few years ago, the technology is now field-proven and mature. Forty-one percent of respondents (who have already deployed or plan to deploy unified storage) in ESG's last storage survey indicated they plan to use a gateway, and 40% plan to use a combination of a gateway for legacy storage and dedicated unified storage systems. Only 18% plan to deploy only unified systems.
Of course, there are environments where unified storage may not be a fit. While server virtualization environments are ideal for unified storage from a flexibility standpoint, capacity and performance requirements must be given careful consideration. Users often plan for capacity, but typically get quite far down the server virtualization road before they realize that the aggregate performance requirements of all the virtual machines (VMs) they plan to deploy outstrip the performance of the storage systems. Applications then start contending for storage processing cycles and everything slows.
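The performance-planning point above can be sketched with some back-of-the-envelope arithmetic. Every number here -- VM counts, per-VM IOPS, per-drive IOPS, RAID write penalty, write mix -- is an illustrative assumption, not a vendor figure:

```python
# Hypothetical capacity-planning sketch: sum per-VM peak IOPS demand and
# compare it against a rough estimate of what the array can deliver.
# All figures below are illustrative assumptions.

def aggregate_vm_iops(vm_profiles):
    """Sum peak IOPS demand across all planned VMs.

    vm_profiles: list of (vm_count, peak_iops_per_vm) tuples.
    """
    return sum(count * iops for count, iops in vm_profiles)

def array_iops_capacity(drive_count, iops_per_drive,
                        raid_write_penalty=2, write_fraction=0.3):
    """Rough effective IOPS for an array, discounting writes for RAID overhead."""
    raw = drive_count * iops_per_drive
    # Effective IOPS shrink as the write fraction and RAID penalty grow.
    return raw / (write_fraction * raid_write_penalty + (1 - write_fraction))

# Assumed workload mix: 20 light app servers + 10 busier database VMs.
vms = [(20, 150), (10, 400)]
demand = aggregate_vm_iops(vms)        # 7,000 IOPS at peak
supply = array_iops_capacity(48, 180)  # 48 x 15k rpm drives @ ~180 IOPS each

print(f"demand={demand} IOPS, supply={supply:.0f} IOPS, "
      f"headroom={supply - demand:.0f}")
```

With these assumed numbers the array comes up short at peak -- exactly the situation described above, where capacity was planned but aggregate VM performance wasn't.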
Unified storage challenges
One challenge when implementing unified storage can be organizational inertia. Oftentimes, NAS and SAN capacity are managed by different groups, with SAN under the purview of a data storage administrator and NAS under a system or network administrator. Capacity planning is often performed separately because there's a fear of sharing resources and causing performance issues, as well as a lack of understanding about the nuances of managing file shares vs. disk capacity. These are all legitimate concerns. And consolidating on a shared platform does nothing to reduce the number of file shares that need to be managed, which in itself can be challenging. But it's simple math -- shared resources mean fewer systems used (lower OpEx) and flexible pooled capacity (responsiveness to the business).
Make sure you do the homework up front to ensure there's sufficient performance to serve both block and file data. Serving even one large file can bring resources to their knees, so be aware of the characteristics of the data to be stored and the impact it will have on the performance of every application that could be using the unified storage system.
Systems have many varied configurations and what users choose to deploy depends on the performance characteristics they need and the price they're willing to pay. There are a number of systems available that allow users to tier data to disk drives with varying price and performance characteristics. For example, dense 1 TB or 2 TB 7,200 rpm SATA drives are available for cost optimization, or there are 450 GB 15,000 rpm FC drives available for faster performance.
It's also important to note that when solid-state drive (SSD) technology is on the table, cost per I/O must be weighed. On a price-per-GB basis, SSDs are much more expensive than hard disk drives (HDDs). But users buy SSDs for performance, so the cost/benefit equation should be based on a performance metric, such as cost per I/O, to see the true impact SSDs have.
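The tiering and cost-per-I/O points above reduce to simple division. The prices and IOPS figures below are assumptions chosen for illustration, not quoted vendor pricing:

```python
# Illustrative $/GB vs. $/IOPS comparison across drive tiers.
# Capacities, prices and IOPS figures are assumed, not vendor quotes.

drives = {
    # name: (capacity_gb, price_usd, iops)
    "SATA 7.2k 2TB": (2000, 300, 80),
    "FC 15k 450GB":  (450, 600, 180),
    "SSD 200GB":     (200, 2000, 20000),
}

for name, (gb, price, iops) in drives.items():
    # Same drive, two very different rankings depending on the metric.
    print(f"{name}: ${price / gb:.2f}/GB, ${price / iops:.3f}/IOPS")
```

Under these assumptions the SSD is the most expensive tier per GB by a wide margin, yet the cheapest per I/O -- which is why a cost-per-I/O comparison, not cost per GB, shows where SSDs actually pay off.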
BIO: Terri McClure is a senior analyst at the Enterprise Strategy Group.
This was first published in July 2011