UK storage pros share solid-state drive implementation tips

Despite solid-state drives being touted as a silver bullet for data latency problems, UK users have been slow to implement them. We talk with three IT pros in various stages of SSD adoption.

Solid-state drives (SSDs) are touted as a silver bullet for data latency problems. But while flash drives deliver an order-of-magnitude improvement in data access performance over hard drives, they cost three or more times as much per GB stored. To get some implementation tips and a better understanding of what SSD looks like in real-world environments, we spoke with three UK storage pros in various stages of SSD adoption.

SSD replaces disk for telephony provider

Acision is a messaging company that provides voicemail and other communications-related products to telephony providers. It has teamed with Hewlett-Packard (HP) to produce Bundle Manager, a real-time payment system enabling the launch of new price plans covering call charging elements, such as voice minutes and text messages.

When a subscriber makes a call, Bundle Manager determines whether a bundle applies and the call is then allowed to pass through the service provider's system. Response time is critical. The hardware used is an HP ProLiant DL385 G2 server using a 64-bit AMD Opteron processor with Texas Memory Systems RamSan solid-state drives.

The SSD media supports the persistent storage of the real-time database used within the Acision solution. The database is run in memory for read/write performance reasons, but needs to be continuously updated on solid-state drives to ensure an up-to-date and accurate record is kept in case of failure. The SSD system is then mirrored to provide disaster recovery capability within seconds.
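
Acision's implementation is proprietary, so the following is only a rough sketch of the pattern described here: reads served from RAM, with every write appended to a log on solid-state media (and a mirror) before the in-memory copy is updated. The class, method and file-path names are hypothetical.

```python
import json
import os

class InMemoryStore:
    """Illustrative in-memory record store with a write-ahead log.

    Reads are served entirely from RAM; every write is persisted to a
    log on the SSD and to a mirror before the in-memory copy changes,
    so an up-to-date record survives a failure of the primary.
    """

    def __init__(self, log_path, mirror_path):
        self.records = {}
        # Append-only logs; each entry is flushed and fsynced below.
        self.logs = [open(log_path, "a"), open(mirror_path, "a")]

    def write(self, key, value):
        entry = json.dumps({"key": key, "value": value})
        for log in self.logs:          # primary SSD, then the DR mirror
            log.write(entry + "\n")
            log.flush()
            os.fsync(log.fileno())     # force the write down to the device
        self.records[key] = value      # RAM copy updated last

    def read(self, key):
        return self.records[key]       # no disk access on the read path

    @staticmethod
    def recover(log_path):
        """Rebuild the in-memory state from the log after a failure."""
        records = {}
        with open(log_path) as log:
            for line in log:
                entry = json.loads(line)
                records[entry["key"]] = entry["value"]
        return records
```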

"It's the only solution that provides us with the storage read and write that can support tens millions of subscribers and transactions in real-time," said Mike Beech, Acision's vice president of product management. "Without SSD, we would need to use a huge disk system with parallel access, which to ensure performance would only use a fraction -- less than 1% -- of its storage capacity."

He added, "The main disadvantage of SSD systems is the much higher cost and the lack of general support in the industry for such large-scale, real-time storage solutions. We had to develop our solution ourselves to make the best use of SSD."

Local authority trials SSD

The London Borough of Hillingdon, one of the capital's largest local authorities, has implemented two Compellent storage-area networks (SANs) and a VMware virtual server environment. They were both designed and deployed by Fordway Solutions to deal with 100% year-on-year data growth.

One SAN has 30 TB capacity, with 10% Fibre Channel (FC) and 90% Serial ATA (SATA) drives. The other has 24 TB with the same proportions of FC and SATA. The larger SAN is set to have three 146 GB solid-state drives added. The Compellent SAN technology makes this addition easy: to the controller, there's nothing special about SSDs, as flash drives are just another automatically managed tier of storage with high-access rate data automatically migrated to flash.
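
Compellent's tiering algorithm is proprietary, but the general idea behind frequency-based tiering can be sketched in a few lines: count accesses per block and periodically migrate the hottest blocks onto the flash tier. The names and the capacity figure below are hypothetical.

```python
from collections import Counter

FLASH_CAPACITY_BLOCKS = 1000   # hypothetical size of the SSD tier

access_counts = Counter()      # block id -> accesses this period
flash_tier = set()             # block ids currently resident on SSD

def record_access(block_id):
    access_counts[block_id] += 1

def rebalance():
    """Move the most frequently accessed blocks onto the SSD tier."""
    hottest = {block for block, _ in
               access_counts.most_common(FLASH_CAPACITY_BLOCKS)}
    demote = flash_tier - hottest    # cooled off: back to FC or SATA
    promote = hottest - flash_tier   # heated up: migrate to flash
    flash_tier.difference_update(demote)
    flash_tier.update(promote)
    access_counts.clear()            # start a fresh measurement period
```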

But the Borough of Hillingdon doesn't have a data latency problem, and it isn't proposing to retire the Fibre Channel drives in the SAN to gain capacity and energy savings. So why is it adopting solid-state drives?

"We're looking at SSDs for the future, not to solve immediate problems," explained Roger Bearpark, assistant head of ICT for the borough. "We're in the middle of a transformation programme with service improvements based on IT infrastructure. We're at a point where we're in control of our environment and want to know the capabilities of the technology."

Testing SSD will enable Bearpark to understand what happens to performance and write endurance over time. There's also the promise of not needing tens of Fibre Channel disks to scale performance in the future, when the borough can simply add another SSD as needed. Bearpark thinks he could eventually be in a position to offer a better level of IT service, at a premium cost of course, to users who need a faster response for business applications. "This is us stepping away from the age-old thing with IT where the response to a problem is to throw money at it," he said. "This is us being proactive."

Lots of cheap disk trumps SSD for high-performance research lab

For now, the University of Cambridge's High Performance Computing Service (HPCS), directed by Paul Calleja, is just considering SSDs.

The lab provides supercomputing facilities built on massively parallel systems that use industry-standard x86 processors. Its projects process data sets measured in terabytes, and at that scale fast disk is expensive. Instead, cheap and slow SATA drives are used, with striping and parallel access through a parallel file system such as Lustre, which pours data into the supercomputer's memory at high speed and keeps the cores busy.

Calleja's responsibility is to provide reliable and cost-effective supercomputing core hours to his users. This means he has to keep processor utilisation high; it's approximately 87% at present vs. 50% in 2006.

That's helped by getting 2 GBps bandwidth from a Dell PowerVault MD3000 array, which is a startling figure. How is it done? "We stripe the data across 270 spindles of standard 1 TB SATA drives and use Lustre software to get the amazing back-end performance," Calleja said.
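
A quick back-of-the-envelope check shows why striping works: spreading the quoted 2 GBps across 270 spindles demands very little of each individual drive.

```python
aggregate_bandwidth_mbps = 2000    # 2 GBps across the whole array
spindles = 270                     # standard 1 TB SATA drives

per_drive = aggregate_bandwidth_mbps / spindles
print(f"{per_drive:.1f} MBps per drive")
# ~7.4 MBps per drive -- comfortably within what a single SATA
# drive can sustain on sequential reads.
```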

He could get even higher utilisation by keeping his cores fed with data and not waiting for I/O. The bottleneck arises because finding data on the disks means going through the Lustre file system's metadata. If that could be speeded up, then myriad microscopic delays could be reduced to allow his core utilisation to edge closer to the 100% mark.
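
A toy model makes the reasoning concrete: treat utilisation as compute time divided by compute time plus I/O wait, so halving the stall time lifts utilisation accordingly. The figures below are illustrative, not measurements from the HPCS.

```python
def utilisation(compute_s, io_wait_s):
    """Fraction of wall-clock time the cores spend computing."""
    return compute_s / (compute_s + io_wait_s)

compute = 87.0    # seconds of useful work per 100 s of wall clock
io_wait = 13.0    # seconds stalled on metadata and data I/O

print(f"baseline:       {utilisation(compute, io_wait):.0%}")      # 87%
# Halving I/O stalls -- say, by serving Lustre metadata from SSD --
# edges utilisation towards the 100% mark.
print(f"faster lookups: {utilisation(compute, io_wait / 2):.0%}")  # ~93%
```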

"We're thinking of tiering storage with pools of disk and an SSD pool," he said. "Lustre metadata would be stored in the SSD pool and provide a faster way of getting access to the bulk data stored on SATA drives," Calleja explained. That's a potential plan for the future; metadata latency currently isn't a problem for the university.

SSD sweet spot is tackling I/O latency

Solid-state drives are expensive. However, they're affordable if you have a severe data latency problem and are using lots of costly Fibre Channel drives to achieve acceptable disk I/O latency. If you don't have that problem or can't afford the price, then SSDs aren't for you -- unless you can toss a couple into an array, with minimal management and operational disruption, to see how they behave in real life. Prices will come down and the technology will get better. So far, the evidence supports the view that SSDs will take over the job of storing fast-access data and lead to a decrease in Fibre Channel drive use.

Chris Mellor is storage editor with The Register.
