Five questions on solid-state drive technology

Dennis Martin, Contributor

Everyone wants better storage performance, and solid-state drives can deliver data at phenomenal speeds while also saving energy. But before making such a costly investment, can your data center’s network handle the data equivalent of switching from a water bubbler to a fire hose?

In this Q&A, storage expert Dennis Martin, founder and president of Demartek, a computer industry analyst organization, shares his insights on SSD technology and addresses the concerns every potential SSD adopter should consider.

Q. What are the strongest justifications/drivers for bringing SSDs into data center storage arrays?

Dennis Martin: Any data center application that needs improved performance or lower storage latency is a good candidate for SSD technology. For example, many database operations are really a sequence of smaller requests (such as queries and table scans) that execute sequentially: the output of one request becomes the input of the next, and so on. The database won’t return a reply to the application until every request in the transaction has been satisfied. In these cases, the significantly lower latency of solid-state storage can make a huge difference in the overall performance of the application and the end-user experience.
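
To make that point concrete, here is a minimal sketch of how per-request latency compounds across a chained transaction. The latency figures are assumed, round numbers for illustration, not measurements of any particular drive.

```python
# Illustrative only: how per-request latency compounds across a chained
# database transaction. The latency figures below are assumed, round
# numbers, not measurements of any particular device.

HDD_LATENCY_MS = 5.0    # assumed average seek + rotational latency
SSD_LATENCY_MS = 0.1    # assumed flash read latency

def transaction_time(requests_in_chain: int, latency_ms: float) -> float:
    """Each request waits on the previous one, so latencies add up serially."""
    return requests_in_chain * latency_ms

for n in (10, 100, 1000):
    hdd = transaction_time(n, HDD_LATENCY_MS)
    ssd = transaction_time(n, SSD_LATENCY_MS)
    print(f"{n:>5} chained requests: HDD ~{hdd:8.1f} ms, SSD ~{ssd:8.1f} ms")
```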

Q. What about traffic saturation on iSCSI and FCoE? Would too many SSDs flood I/O on the SAN? How do I monitor and prevent this?

Dennis Martin: It takes a lot of sustained I/O to saturate a high-speed block storage interface such as iSCSI, FC, FCoE, SAS or SATA. Saturation is not too difficult to reach with iSCSI on a 1 Gigabit Ethernet (GbE) network, but it is hard to achieve with traditional hard disk drives on 10 Gb iSCSI, 8 Gb FC, 10 Gb FCoE, 6 Gb SAS or 6 Gb SATA.
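
As a rough illustration of what “a lot of sustained I/O” means, the back-of-envelope sketch below estimates how many drives of either type it takes to fill various interfaces. All of the link and drive throughput figures are assumptions; substitute measured numbers for your own hardware.

```python
# Back-of-envelope saturation check. All throughput figures are assumptions
# for illustration; substitute measured numbers for your own drives and links.

LINK_MBPS = {            # rough usable bandwidth in MB/s after protocol overhead
    "1 GbE iSCSI": 100,
    "8 Gb FC": 800,
    "10 GbE iSCSI/FCoE": 1000,
    "6 Gb SAS/SATA": 550,
}

DRIVE_MBPS = {"HDD (sequential)": 120, "SSD (sequential)": 450}  # assumed

for link, link_bw in LINK_MBPS.items():
    for drive, drive_bw in DRIVE_MBPS.items():
        drives_to_saturate = -(-link_bw // drive_bw)  # ceiling division
        print(f"{link:<20} {drive:<18} ~{drives_to_saturate} drive(s) to saturate")
```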

However, with SSDs, saturating a storage interface can be a concern, especially when there is an array of SSDs attached to a single interface. First, monitor your existing interfaces to determine how busy those interfaces are. Most operating systems provide tools for monitoring the performance of various devices and interfaces. There are also good third-party tools available, including products from Akorri, Tek-Tools and Virtual Instruments Corp. For example, Windows environments can use the native Performance Monitor (PerfMon) and observe the physical disk statistics to see if interfaces are reaching saturation.
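
The snippet below sketches the same monitoring idea in script form, using the cross-platform psutil library as a stand-in for PerfMon or the third-party tools mentioned above. The 800 MB/s ceiling and the 80% alert threshold are assumptions, not recommendations.

```python
# Minimal sketch of the monitoring idea using psutil (a stand-in for
# PerfMon or a vendor tool). The link ceiling and alert threshold are
# assumptions; use the rated bandwidth of your own interface.
import time
import psutil

LINK_CEILING_MBPS = 800      # assumed usable bandwidth of the storage interface
ALERT_FRACTION = 0.8         # flag anything above 80% of the ceiling
INTERVAL_S = 5

before = psutil.disk_io_counters(perdisk=True)
time.sleep(INTERVAL_S)
after = psutil.disk_io_counters(perdisk=True)

for disk, stats in after.items():
    prev = before.get(disk)
    if prev is None:
        continue
    mb_moved = (stats.read_bytes - prev.read_bytes +
                stats.write_bytes - prev.write_bytes) / 1e6
    throughput = mb_moved / INTERVAL_S
    if throughput > ALERT_FRACTION * LINK_CEILING_MBPS:
        print(f"{disk}: {throughput:.0f} MB/s -- approaching interface saturation")
    else:
        print(f"{disk}: {throughput:.0f} MB/s")
```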

Let’s say that we have a Fibre Channel or iSCSI disk array and have configured several physical disks (SSDs or HDDs) into one logical volume that gets presented to the host. From the host’s perspective, we can monitor the activity on the one “physical disk” we’ve created and determine whether we are saturating the interface. Before putting such a logical volume into production, we would build it with a small number of disks, observe the performance, then add disks until performance no longer increases or we have reached the maximum the interface can handle. In the case of iSCSI, we can also observe the network interface statistics within PerfMon. This same technique of observing physical disk activity also works for internal SAS and SATA arrays. For most of these storage interfaces, statistics can also be collected from the NIC, HBA or CNA (whichever is appropriate), as well as from the switch.
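
That incremental test can be captured in a few lines of Python. Here, measure_throughput() is a hypothetical placeholder for whatever benchmark you run against the rebuilt logical volume, and the 5% plateau threshold is an assumption.

```python
# Sketch of the "add disks until throughput stops climbing" test described
# above. measure_throughput() is a hypothetical placeholder for whatever
# benchmark you run against the logical volume after each rebuild.
PLATEAU_THRESHOLD = 0.05   # assumed: <5% gain means the interface is the limit

def find_disk_count(max_disks: int, measure_throughput) -> int:
    best = 0.0
    for disks in range(1, max_disks + 1):
        current = measure_throughput(disks)   # rebuild volume, run benchmark
        if best and (current - best) / best < PLATEAU_THRESHOLD:
            return disks - 1                  # last count that still helped
        best = max(best, current)
    return max_disks

# Example with canned MB/s numbers showing a plateau at four drives:
samples = {1: 420, 2: 780, 3: 1050, 4: 1150, 5: 1165, 6: 1170}
print(find_disk_count(6, lambda n: samples[n]))   # -> 4
```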

Q. What architectural changes within the data center (e.g., the network) can help facilitate SSDs?

Dennis Martin: SSDs will be one of the drivers (along with server virtualization) pushing the need for higher-speed networks, whether Fibre Channel or Ethernet. The pace of 10 GbE adoption has increased in the last couple of years, as has the pace of 8 Gb Fibre Channel adoption. When these technologies need to connect devices separated by more than in-rack distances, multi-mode fiber-optic cabling should be upgraded to OM3 or OM4, preferably OM4. Fiber-optic cabling tends to remain in place for 10 to 15 years, so planning is important as we look to 16 Gb FC, 40 GbE and beyond. We go into more detail on this topic in our “Storage Networking Interface Comparison” reference page on the Demartek website.

Q. How is SSD reliability advancing, and what do adopters need to consider?

Dennis Martin: SSDs have low-level flash controllers that manage the NAND flash memory on the device. The newer flash controllers bring many enterprise-grade features to lower-cost flash media, features that previously were available only on expensive enterprise-class NAND flash. These newer controllers extend the reliability of the lower-cost flash media, making it both dependable and less expensive.

Q. What about wear-leveling and limited SSD write cycles? How do I monitor and handle this?

Dennis Martin: It’s difficult to directly monitor wear-leveling and the finite number of write cycles available to flash media. Most SSDs on the market today come with utility programs, often supplied by the SSD vendor, that report the estimated remaining life of a flash device. There is work going on in the industry to expose this data in a standard way. Some solid-state storage devices let the user choose how much of the flash on the device is provisioned for user data, which adjusts the amount of spare capacity available for wear-leveling. A user willing to sacrifice some usable capacity gets more of the flash media allocated to wear-leveling, and sometimes better performance; conversely, allocating more of the flash to usable capacity leaves less for wear-leveling.
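
For a generic, non-vendor way to peek at wear data, the sketch below calls smartctl (from the smartmontools package) and filters for wear-related SMART attributes. Attribute names vary by vendor, so the ones listed are common examples rather than a standard set, and the device path is an assumption.

```python
# Rough sketch of pulling wear-related SMART attributes with smartctl
# (from smartmontools), as one generic alternative to vendor utilities.
# Attribute names vary by vendor; the ones below are common examples,
# not a standard list, and the device path is an assumption.
import subprocess

WEAR_ATTRIBUTES = ("Media_Wearout_Indicator", "Wear_Leveling_Count",
                   "Percent_Lifetime_Remain")

def report_wear(device: str = "/dev/sda") -> None:
    output = subprocess.run(["smartctl", "-A", device],
                            capture_output=True, text=True, check=True).stdout
    for line in output.splitlines():
        if any(attr in line for attr in WEAR_ATTRIBUTES):
            print(line.strip())

report_wear()
```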