ComputerWeekly.com

Server SSD guide to form factors and implementations

By Chris Evans

The use of solid-state storage in enterprise-class storage arrays is now well established. Flash memory, in the form of SSDs or bespoke memory cards, is used in many vendor offerings, providing improved performance and high IOPS.

The use of flash, however, is not restricted to the array. A range of hardware and software offerings takes solid-state storage into the server itself. In this article, we discuss implementations of server SSD, what to look out for and what pitfalls to avoid.

Server SSD form factors

Server SSD comes in two form factors: actual solid-state drives and PCIe SSD adaptor cards. Both can use single-level cell (SLC) or multi-level cell (MLC) technology; SLC is more expensive but endures far more write cycles, while MLC is cheaper and wears out sooner.
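To put that endurance gap in perspective, the back-of-the-envelope sketch below (in Python) estimates drive lifetime from capacity, programme/erase (P/E) cycle count and daily write volume. The cycle counts and write-amplification factor are illustrative assumptions, not figures from any particular vendor.

    # Rough SSD endurance estimate; all figures are assumptions for
    # illustration, not vendor specifications.

    def years_of_life(capacity_gb, pe_cycles, host_writes_gb_per_day,
                      write_amplification=1.5):
        """Estimate drive lifetime from its total write budget."""
        write_budget_gb = capacity_gb * pe_cycles                  # total NAND writes available
        daily_nand_writes = host_writes_gb_per_day * write_amplification
        return write_budget_gb / daily_nand_writes / 365

    # A 400 GB drive absorbing 500 GB of host writes per day:
    print(f"SLC (~100,000 P/E cycles): {years_of_life(400, 100_000, 500):,.0f} years")
    print(f"MLC (~5,000 P/E cycles):   {years_of_life(400, 5_000, 500):,.0f} years")

On these assumed figures the SLC card effectively never wears out within its service life, while the MLC card lasts a handful of years, which is why MLC parts rely on heavy over-provisioning and wear-levelling.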

The SSD form factor looks and acts like a traditional hard drive in 2.5-inch or 3.5-inch sizes with SAS or SATA interconnects, although the drives are significantly lighter and consume much less power. The difference, of course, comes in performance. SSD performance is measured in hundreds of megabytes per second (MBps), with SSDs capable of high, sustained write throughput.

PCIe-based SSD devices consist of solid-state memory with custom ASIC control processors packaged onto a half- or full-height PCIe adaptor card. These devices are subtly different to SSDs as their connectivity is straight onto the server PCIe bus. This bypasses the need for storage protocol interfaces such as SAS and SATA, which are required for SSD connectivity into the server and so potentially gives higher performance.

Both hardware form factors require software components. Solid-state drives can be used directly, but higher performance comes from pairing them with caching software, as we will see later. PCIe SSD devices need bespoke drivers so the server operating system can communicate with the device.

Server SSD implementations

Server SSD can be split into two types of implementation. In the first, server SSD merely acts as a local cache for I/O, providing faster servicing of read requests for data retained in the cache. Writes may be cached through the device but aren’t confirmed to the host until written to external disk. This scenario is similar to using large amounts of DRAM as cache, except the contents aren’t lost when the device is switched off and the capacities involved are in the hundreds rather than tens of gigabytes.

The second scenario is using the SSD or PCIe SSD as a persistent write device. Data resides permanently on the device and is not necessarily written to external storage. Whilst this accelerates write requests, it has implications for data availability.
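The distinction comes down to where a write is acknowledged. The sketch below (Python, using a plain dictionary for flash and a hypothetical BackingArray stand-in rather than any vendor's API) is a minimal illustration of the two modes.

    # Minimal sketch of the two server-side flash modes described above.
    # BackingArray is a hypothetical stand-in for an external storage array.

    class BackingArray:
        """External array reached across the SAN."""
        def __init__(self):
            self.blocks = {}

        def read(self, block):
            return self.blocks.get(block)

        def write(self, block, data):
            self.blocks[block] = data

    class WriteThroughCache:
        """Reads are served from flash where possible; a write is only
        acknowledged once it reaches the external array, so a server
        failure cannot lose acknowledged data."""
        def __init__(self, array):
            self.flash = {}
            self.array = array

        def read(self, block):
            return self.flash.get(block) or self.array.read(block)

        def write(self, block, data):
            self.flash[block] = data        # warm the cache
            self.array.write(block, data)   # acknowledge only after this returns

    class WriteBackDevice:
        """Writes are acknowledged as soon as they land on local flash;
        faster, but the only copy of the data lives inside this server."""
        def __init__(self):
            self.flash = {}

        def read(self, block):
            return self.flash.get(block)

        def write(self, block, data):
            self.flash[block] = data        # acknowledged immediately; no external copy

In the write-through case a failed card or server costs only performance; in the write-back case it can cost data, which is the availability trade-off discussed below.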

Server SSD advantages

The most obvious benefit of using server SSD is the reduction in latency. Both solid-state drives and PCIe SSD sit closer to the CPU in terms of I/O response time; there’s no SAN or IP network across which the I/O must travel before reaching the storage array. Cutting out the SAN lowers latency and reduces the risk of contention across the SAN and storage array that can occur in shared environments. However, flash storage in the server is dedicated to that server alone.

A significant reduction in latency translates directly into faster application response times. In time-critical applications, such as financial trading or online gambling websites, where every millisecond counts, that reduction can deliver real financial gains. Flash can also be used within the storage array, as a tier or a cache, but because flash is capable of such high performance, placing it as close as possible to the compute infrastructure yields greater benefit than even an all-flash storage array can.
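A rough worked example (Python) shows how per-I/O latency compounds when an application issues dependent, serial I/Os; both latency figures are assumptions chosen for illustration rather than measurements of any specific array or card.

    # Effect of per-I/O latency on a transaction issuing serial, dependent reads.
    # Both latency figures are assumptions for illustration only.

    IOS_PER_TRANSACTION = 20

    def transaction_time_ms(io_latency_us):
        return IOS_PER_TRANSACTION * io_latency_us / 1000

    print(f"SAN-attached array (~500 us per I/O): {transaction_time_ms(500):.1f} ms")
    print(f"Server-side flash  (~50 us per I/O):  {transaction_time_ms(50):.1f} ms")

Cutting a 10 ms transaction to 1 ms in this way is exactly the kind of gain that matters where every millisecond counts.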

Server SSD disadvantages

Server SSD flash usage does come with a number of disadvantages. Firstly, as mentioned above, the storage on the card is dedicated only to the server itself. As an expensive resource, any failure to fully utilise it represents money wasted. This isolation of storage from other servers also creates availability issues. Should the server fail, the data on the card is trapped within it and irretrievable unless the card is moved to another server or the server repaired. This means clustered server environments can’t benefit from server-side flash if shared storage is required. Worse still, should the server-side flash fail, all data contained on it could be lost. PCIe SSD provides no RAID protection for data between servers, although standard SSD devices usually contain some on-board RAID and cards can be mirrored within the server. Remember also that all solid-state devices have a finite lifetime and will fail at some point.

Server SSD vendor roundup

Fusion-io. The market leader for PCIe SSD today is Fusion-io. Now into its second generation, the ioDrive2 scales up to 2.4 TB of MLC or 1.2 TB of SLC capacity on a single card. Read and write throughput is 3 gigabytes per second (GBps) and 2.5 GBps, respectively, with each card delivering more than 930,000 sequential write IOPS on either platform. Write latency is around 15 microseconds. Fusion-io also offers the high-capacity ioDrive Octal model, which scales to 10.24 TB with 45-microsecond access latency. To complement the hardware, Fusion-io offers a number of software solutions that integrate with host operating systems and hypervisors.

LSI. LSI recently relaunched its SSD product offerings under the Nytro brand name. The second-generation LSI Nytro WarpDrive provides up to 3.2 TB of SLC or MLC flash, replacing the 300 GB first-generation cards. The WarpDrive can be used in conjunction with the company’s XD caching software as a method of accelerating I/O to and from traditional storage arrays.

EMC. EMC has recently entered the market with its VFCache offering. This provides up to 300 GB of PCIe storage on a half-height adaptor card, with typical latency of around 41 microseconds. VFCache provides caching only for read I/O, acting as a write-through cache for write operations; writes are committed to external storage before being confirmed to the host. EMC has also announced Project Thunder, which promises to provide a shared server cache built from multiple VFCache devices.

STEC. STEC has been in the solid-state market for some time and has a range of drives that support SATA and SAS interfaces. It also offers the PCIe Solid State Accelerator range of products, which scale to 480 GB (SLC) or 960 GB (MLC) and up to 163,000 IOPS.

VeloBit. VeloBit has taken the software-only route to I/O acceleration. Its HyperCache software plug-in uses SSD devices in the server to improve I/O performance by caching and compressing data, whilst providing additional SSD management functionality.

14 May 2012
