The data we deal with today differs markedly from the data of a couple of decades ago, in both volume and type. International Data Corporation (IDC) estimates that the amount of digital data created and replicated will surpass 1.8 zettabytes (1.8 trillion GB) by the end of this year. Almost every organization has to contend with enormous data growth in its data centers, which makes appropriate design and sizing of the storage area network (SAN) infrastructure key. This tip offers practical guidelines for optimum SAN sizing and provides a downloadable SAN sizing template.
SAN design and implementation guidelines
While designing the SAN, it is essential to address the following considerations:
High availability
- For the SAN fabric, consider core-edge topology.
- During SAN sizing, lay out host and storage connectivity so that the failure of a single switch does not make all of a given host's storage inaccessible. While planning the SAN sizing, keep track of the number of host/storage pairs that will use the ISLs between domains. As a general best practice, if two switches are connected by ISLs, provide a minimum of two ISLs between them and no more than six initiator/target pairs per ISL.
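As a quick sanity check, the ISL rule of thumb above can be sketched in a few lines of Python. The function name is illustrative; the figures (minimum of two ISLs, at most six initiator/target pairs per ISL) come straight from the guideline above:

```python
import math

def isls_required(initiator_target_pairs: int) -> int:
    """Minimum ISLs between a pair of switches, per the rule of thumb:
    at least 2 ISLs, and no more than 6 initiator/target pairs per ISL."""
    return max(2, math.ceil(initiator_target_pairs / 6))

# Example: 20 host/storage pairs traversing the switch pair
print(isls_required(20))  # 4 ISLs (20 / 6, rounded up)
```

Even with very few pairs, the function never returns fewer than two ISLs, preserving the redundancy requirement.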
- For optimum design and SAN sizing, stay with a single FC switch vendor to avoid compatibility, interoperability and performance issues. Also, single initiator zoning should be used, so that changes in the fabric least affect the other nodes.
- Connect the host and storage ports in such a way as to prevent a single point of failure from affecting redundant paths.
- Use the latest supported firmware version consistently throughout the fabric. In homogeneous switch vendor environments, all switch firmware versions inside each fabric should be equivalent, except during the firmware upgrade process.
- A zoneset can be managed and activated from any switch in the fabric, but it is recommended that it be managed from a single entry switch within a fabric to avoid complications with multiple users accessing different switches to make concurrent zone changes.
- While it is possible to access both tape and disk devices over the same HBA in a Fibre Channel fabric, doing so is not desirable: tape devices send out many SCSI reset commands on rewind, which can wreak havoc on disk data streams. Also, since tape traffic is usually one long continuous data stream, it will hog the bandwidth; if you attempt backups while production is running, performance will suffer.
- For optimum design and SAN sizing, use a dedicated storage network for iSCSI traffic. If this is not possible, isolate iSCSI traffic on its own physical LAN, LAN segment, or virtual LAN (VLAN). VLANs let you create multiple virtual LANs instead of multiple physical LANs in your Ethernet infrastructure, allowing more than one network to share the same physical network while maintaining a logical separation of information.
- Install the latest drivers for the HBA and update the firmware during SAN sizing.
Implement these SAN design guidelines using the attached SAN sizing template.
SAN sizing guidelines
- Gather data from all business units regarding the applications, data growth trends, IOPS requirements, application roadmap, data types and performance requirements. Use this data for the accompanying SAN sizing tool.
- Gather details of all servers (current and expected in the near future). This will help determine the switch model and number of switches required. Be sure to factor in the ISL ports while deciding the switch port requirements.
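To illustrate the port-count exercise, here is a minimal Python sketch. The 20 percent growth headroom is an assumed figure, not a rule from this tip; substitute the roadmap data you gathered from the business units:

```python
import math

def switch_ports_needed(host_ports: int, storage_ports: int,
                        isl_ports: int, growth_factor: float = 1.2) -> int:
    """Total fabric ports required, with headroom for near-term growth.

    growth_factor is an assumed 20% buffer; replace it with figures
    from your own server and application roadmap."""
    return math.ceil((host_ports + storage_ports + isl_ports) * growth_factor)

# Example: 40 host ports, 8 storage ports, 4 ISL ports
print(switch_ports_needed(40, 8, 4))  # 63 ports
```

Note that ISL ports consume fabric ports on both switches, so count them on each side when mapping the total onto specific switch models.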
- Now decide upon the RAID types to match the application requirements gathered earlier, and complete the SAN sizing template accordingly, using the following table:
|RAID type|Recommended use|
|---|---|
|RAID 3|For workloads characterized by large-block sequential reads, RAID 3 delivers several MB/s more bandwidth than the alternatives.|
|RAID 5|RAID 5 is favored for messaging, data mining, medium-performance media serving, and RDBMS implementations in which the DBA is effectively using read-ahead and write-behind.|
|RAID 6|RAID 6 offers increased protection against media failures and simultaneous double drive failures in a parity RAID group. It performs similarly to RAID 5 but requires additional storage for the extra parity.|
|RAID 1/0|RAID 1/0 provides the best performance on workloads with small, random, write-intensive I/O (a workload is considered write-intensive when more than 30 percent of its operations are random writes). Suitable for high transaction rate OLTP, large messaging installations and real-time data/brokerage records.|
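Each RAID choice also carries a capacity cost, which matters when filling in the sizing template. The following Python helper is a simplified sketch that ignores vendor-specific formatting and hot-spare overhead:

```python
def usable_capacity(disks: int, disk_size_gb: float, raid: str) -> float:
    """Usable capacity of one RAID group, ignoring vendor formatting overhead.

    RAID 3 and RAID 5 lose one disk's worth of capacity to parity,
    RAID 6 loses two, and RAID 1/0 mirrors, losing half the disks."""
    if raid in ("3", "5"):
        return (disks - 1) * disk_size_gb
    if raid == "6":
        return (disks - 2) * disk_size_gb
    if raid == "1/0":
        return disks / 2 * disk_size_gb
    raise ValueError("unsupported RAID level")

# Example: 8 x 600 GB disks
print(usable_capacity(8, 600, "5"))    # 4200.0 GB
print(usable_capacity(8, 600, "1/0"))  # 2400.0 GB
```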
- Since most storage vendors today support thin provisioning, take advantage of thin pools for applications with predictable workloads. Create two thin pools, one with FC disks (for high-performance applications) and the second with SATA disks (for other applications).
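When planning thin pools, it helps to track the oversubscription ratio, i.e., how much logical capacity has been provisioned to hosts versus the physical capacity in the pool. A minimal Python sketch with assumed example figures:

```python
def oversubscription_ratio(provisioned_gb: float, physical_gb: float) -> float:
    """Ratio of logically provisioned capacity to physical pool capacity."""
    return provisioned_gb / physical_gb

# Example (assumed figures): 30 TB provisioned from a 20 TB FC thin pool
print(round(oversubscription_ratio(30000, 20000), 2))  # 1.5
```

The higher the ratio, the more closely pool utilization must be monitored, which is why thin pools suit applications with predictable workloads.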
- For deciding on disk types and IOPS requirements, use the table below:

|Disk type|IOPS per disk (approx.)|
|---|---|
|Fibre Channel 15k rpm|180|
|SAS 15k rpm|180|
|Fibre Channel 10k rpm|140|
|SATA 7.2k rpm|80|
|SATA 5.4k rpm|40|
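The per-disk IOPS figures above can be turned into a first-pass spindle-count estimate with a short Python sketch. Note that this deliberately ignores RAID write penalty and array cache effects, so treat the result as a starting point, not a final design:

```python
import math

# Approximate IOPS per spindle, from the table above
DISK_IOPS = {
    "FC 15k": 180,
    "SAS 15k": 180,
    "FC 10k": 140,
    "SATA 7.2k": 80,
    "SATA 5.4k": 40,
}

def disks_for_iops(required_iops: int, disk_type: str) -> int:
    """Minimum spindle count to meet an application's IOPS target.

    Ignores RAID write penalty and cache hits -- refine the figure
    with your storage vendor's sizing tools."""
    return math.ceil(required_iops / DISK_IOPS[disk_type])

# Example: an application requiring 2,500 IOPS
print(disks_for_iops(2500, "FC 15k"))  # 14 spindles
```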
About the author: Anuj Sharma is an EMC Certified and NetApp accredited professional. Sharma has experience in handling implementation projects related to SAN, NAS and BURA. He also has to his credit several research papers published globally on SAN and BURA technologies.
This was first published in November 2011