Tip

Storage design mistakes you should avoid: Part II

Storage design mistakes can severely affect the performance of your IT system. The first installment of this two-part tip explained how to weigh capacity against performance during storage sizing.


Designing storage solutions shouldn't be solely an IT affair; the overall business requirements should also be considered. The second part of this tip highlights the relevance of log, binary, and operating system files to storage design, and common errors to avoid when designing storage infrastructure.


8. Avoid improper assessment

Storage designers frequently conduct poor assessments. Organizations generally use archiving for unstructured data, and a good storage design places this archive data on tier 2 storage, freeing high-performance storage for more active data sets. Placing infrequently accessed data in the path of active file systems wastes resources and increases operating costs.

Any storage media with response times slower than solid state drive (SSD) or serial attached SCSI (SAS) disk storage can be categorized as tier 2 storage. Typical examples are near-line SAS drives or SATA drives with lower revolutions per minute (RPM). During the storage design stage, these media can be placed either within the primary storage infrastructure or outside it.

9. Don’t overlook OS and binary file storage requirements

The placement of operating system (OS) files is an important storage design consideration. OS files can reside either on local disks or on a storage area network (SAN).

Solution providers tend to overlook the operating system and binary files that need to be loaded on the SAN when designing storage. In my view, the most critical applications should reside on the SAN.

10. Avoid neglecting log file requirements

Log files can be roughly classified into three groups:

  • Log files retained for future analysis, which need long-term preservation but do not require high-performance storage.
  • Static log files.
  • Log files from applications with high performance requirements.

It is critical to identify the kinds of log files used in the system, and to keep their performance needs in mind when designing the storage backend.
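The classification above can be expressed as a simple planning aid: map each log-file group to the tier it belongs on. The tier names and category keys below are illustrative assumptions for this sketch, not part of any real product's vocabulary.

```python
# Map the three log-file groups from the text onto storage tiers
# during capacity planning. Names are hypothetical placeholders.
LOG_TIER = {
    "future_analysis": "tier2_archive",  # long-term preservation, low performance is fine
    "static": "tier2",                   # rarely read or written
    "high_performance": "tier1",         # latency-sensitive application logs
}

def tier_for_log(kind: str) -> str:
    """Return the planned storage tier for a log-file category."""
    return LOG_TIER[kind]
```

In practice the same idea is usually captured in a sizing spreadsheet rather than code; the point is that each log category gets an explicit tier decision before the backend is designed.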

11. Don’t sideline the importance of block level storage

According to our analysis, 80% of the total workload, or I/O operations per second (IOPS), generated in an application environment comes from 20% of the data. It is advisable to opt for storage managed at the block level, and to consider technologies such as dynamic tiering and page-based tiering when designing storage solutions. You can then realign data placement onto high-performance storage based on the access patterns of the data.

Data on rarely accessed blocks should be moved to SATA storage rather than kept on high-performance storage. In the storage design, high-performance media should be reserved for the frequently accessed blocks of application data. Some organizations in the banking and financial sectors have already adopted this technology.
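The 80/20 placement rule above can be sketched as a small planning routine: count accesses per block, then mark the hottest 20% for high-performance media and the rest as candidates for SATA. This is a toy model of what a dynamic- or page-based-tiering engine does continuously inside the array; the function name and block IDs are hypothetical.

```python
from collections import Counter

def plan_placement(access_log, hot_fraction=0.2):
    """Given a sequence of accessed block IDs, return (hot, cold) sets.

    hot_fraction reflects the rough 80/20 rule: the most frequently
    accessed ~20% of blocks go on high-performance storage, while the
    remainder are candidates for tier 2 / SATA media.
    """
    counts = Counter(access_log)
    # Blocks ordered from most to least frequently accessed.
    blocks = [block for block, _ in counts.most_common()]
    cutoff = max(1, int(len(blocks) * hot_fraction))
    return set(blocks[:cutoff]), set(blocks[cutoff:])
```

A real tiering engine works on fixed-size pages and re-evaluates placement on a schedule, but the decision logic follows the same shape: rank by access frequency, then promote the top slice.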

12. Avoid single points of failure with redundancy

Whether a design must avoid any single point of failure (in terms of accessing the application from the host environment) depends on the kind of allocations. Tier 1 applications typically demand no single point of failure, while tier 2 or tier 3 applications may not require it.

In a storage solution designed with a single path, any interruption will cause application downtime and impact operations. Hence the storage design needs a dual path to the storage environment. If one path goes down for any reason, the other can take over automatically, avoiding application downtime. Having dual channels for storage presented to servers ensures there is no single point of failure for the paths, as well as for the storage and servers.
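The failover behavior described above can be sketched as follows: try the primary path first and fall back to the standby on failure. The path objects and their `read()` method are hypothetical stand-ins for what a real multipath driver (such as Linux DM-Multipath) handles transparently below the file system.

```python
def read_block(block_id, paths):
    """Attempt a read over each configured path in order.

    The first path is treated as primary and the rest as standby.
    Raises RuntimeError only if every path to the storage fails.
    """
    last_error = None
    for path in paths:
        try:
            return path.read(block_id)
        except IOError as exc:
            last_error = exc  # path down: fail over to the next one
    raise RuntimeError("all paths to storage failed") from last_error
```

In production this logic lives in the multipath layer of the operating system or HBA driver, not in application code; the sketch only illustrates why a second path eliminates the single point of failure.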

Strategic considerations for storage design

While designing storage, a purely IT-centric approach can be detrimental. The organization's future needs, in terms of both performance and capacity, must be taken into account. An optimal storage design also factors in business continuity and overall business objectives.

  • Overall business plan - While most organizations can state their current requirements, business plans for the future must also be considered when designing storage infrastructure. A storage solution sized only for current needs may fill up within six months and then hinder application performance; as the business grows, the backend may be unable to scale up in capacity and performance.
  • Business continuity plan - An organization's business continuity plans (BCP) for its tier 1 and tier 2 applications need to be assessed carefully. This component is often neglected when designing storage. It is important to understand the business continuity plan and incorporate those sizing parameters into the storage design.

 

About the author: Srinivas Rao is director, pre-sales and solutions, at Hitachi Data Systems, providing and managing professional pre-sales and services resources across India. With 17 years of technical experience, he holds a degree in electronics engineering from the University of Mysore.

(As told to Mitchelle R Jansen)

This was first published in September 2011
