Enterprise flash: Three implementation options for virtualisation


Enterprise flash storage is being bought in ever greater volumes, and new enterprise flash products are emerging from vendors old and new to meet that demand.

The key drivers in the enterprise flash market are a combination of the falling cost of flash memory, which is making solid-state disk an increasingly economic proposition, and lagging spinning disk performance, which has become the bottleneck in many data centres.

That bottleneck is largely a result of the demands of server and desktop virtualisation, which can generate large volumes of random I/O from a single host. Some database tasks pose similar challenges, demanding very high throughput.

Spinning disk struggles to cope with such demands, and the traditional answer has been to add more disks. That does increase throughput, especially if you short-stroke the drives, but it also raises heat and power consumption and means buying far more capacity than you actually need just to satisfy throughput requirements.
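
To see why that matters, here is a rough back-of-the-envelope sizing sketch in Python. The per-drive IOPS figure, drive capacity and short-stroke fraction are illustrative planning assumptions, not numbers from this article.

```python
import math

# Rough sizing sketch: how many 15,000 rpm drives, and how much raw capacity,
# it takes to reach a random-I/O target when short-stroking.
# All figures below are illustrative assumptions, not measurements.
TARGET_IOPS = 20_000          # the sort of load a virtualised estate can generate
IOPS_PER_DRIVE = 180          # typical planning figure for a 15k rpm drive
DRIVE_CAPACITY_GB = 600
SHORT_STROKE_FRACTION = 0.25  # only the fast outer portion of each platter is used

drives = math.ceil(TARGET_IOPS / IOPS_PER_DRIVE)
raw_capacity_gb = drives * DRIVE_CAPACITY_GB
usable_capacity_gb = raw_capacity_gb * SHORT_STROKE_FRACTION

print(f"{drives} drives, {raw_capacity_gb:,} GB bought, "
      f"{usable_capacity_gb:,.0f} GB actually usable when short-stroked")
# -> 112 drives and roughly 67 TB of raw disk just to hit the IOPS target
```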

Enterprise flash technology ameliorates the problem by delivering orders of magnitude more random I/O than mechanical devices -- the exact boost depends on the technologies involved -- and by cutting latency for time-sensitive workloads. Flash drives' lower capacity versus mechanical disks is a minor issue compared with the performance advantages they offer. Their higher prices are their main drawback but, in a growing range of circumstances, they are the only viable option.

Three ways to implement enterprise flash

Broadly speaking, you can implement enterprise flash technology in three ways, with your selection depending on factors such as cost and application.

Server-based flash sits closest to the processor and so offers the lowest latency, with no network or storage-fabric overhead. It is usually implemented as a PCIe card; vendors include Fusion-io, OCZ, Micron and LSI. While it delivers the best performance, some argue that this approach has all the disadvantages of direct-attached storage without the flexibility of shared, networked storage: capacity is captive to a single server and cannot easily be shared or managed centrally.

Next on the performance chart is an all-flash appliance or array, connected conventionally via NFS, CIFS, Fibre Channel or iSCSI. Products from Texas Memory Systems, WhipTail and Violin Memory fall into this category. While still providing considerably better performance than spinning disk, they use familiar interfaces and can offer more enterprise-level features, such as redundancy and hot-swapping.

Finally, there is the addition of flash as a tier in “traditional” arrays. This move was pioneered by EMC in 2008, and now pretty much all SAN and NAS vendors allow for the addition of solid-state drives to their products.

There are also vendors that have designed arrays especially to mix the two, such as Nimble, which aims to exploit the best cost/performance characteristics of spinning and solid-state disks, using flash for high-performance data while high-capacity, low-cost SATA drives provide the main storage and backup capacity.
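
As a rough illustration of the hybrid idea, the following Python sketch keeps a small set of hot blocks on a flash tier and falls back to bulk disk for everything else. It is a generic, deliberately simplified caching model, not a description of how Nimble or any other vendor's array actually works.

```python
from collections import OrderedDict

class HybridStore:
    """Toy hybrid tier: a small LRU flash cache in front of bulk spinning disk."""

    def __init__(self, backing_store, flash_blocks=1024):
        self.backing = backing_store   # stands in for the SATA tier (block_id -> data)
        self.flash = OrderedDict()     # LRU cache standing in for the SSD tier
        self.flash_blocks = flash_blocks

    def read(self, block_id):
        if block_id in self.flash:             # flash hit: the low-latency path
            self.flash.move_to_end(block_id)
            return self.flash[block_id]
        data = self.backing[block_id]          # flash miss: read from spinning disk
        self._promote(block_id, data)
        return data

    def _promote(self, block_id, data):
        self.flash[block_id] = data
        if len(self.flash) > self.flash_blocks:
            self.flash.popitem(last=False)     # evict the least-recently-used block
```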

Let’s take a look at a few UK IT organisations that have implemented enterprise flash storage.

In-server flash makes Kontera's Web analysis fly

Kontera delivers real-time, Web-based, content-relevant advertising from offices in the US, UK and Israel. Its 300 physical servers contain about 1,000 virtual servers, and it uses a 360 GB MongoDB database to analyse the 100 million daily page views generated by its 15,000 publisher clients, and then insert ads relevant to page content. The aim is zero delay in the delivery of Web pages.

"We can analyse what the page is all about and serve ads according to what people who access this type of content typically read," said Ammiel Kamon, executive vice president of marketing.

The task is a mammoth one. "We have around 300 servers providing input and getting results from the core analysis cluster in real time. This translates to hundreds of gigs of data being written and read per hour as small discrete transactions. This then translates to 20,000 IOPS with each transaction requiring a sub-10-millisecond response time," Greg Pendler, production operations manager, said.
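
Those figures hang together: at an assumed average transaction size of 4 KiB (the article does not state one), 20,000 IOPS works out at a few hundred gigabytes per hour, as the quick Python check below shows.

```python
# Back-of-the-envelope check of the figures quoted above. The 4 KiB average
# transaction size is an assumption; the article gives only the IOPS figure.
iops = 20_000                      # sustained small, discrete transactions per second
avg_transaction_bytes = 4 * 1024   # assumed average size of each transaction
gib_per_hour = iops * avg_transaction_bytes * 3600 / 2**30
print(f"{gib_per_hour:.0f} GiB per hour")   # ~275 GiB/hour -- "hundreds of gigs"
```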

The company designed and implemented a model that it believed would deliver the volumes of data required, consisting of four main servers containing the database and a range of distributed servers. The main servers run MongoDB on CentOS and consist of 2.2 GHz 2U machines, each with 72 GB of RAM plus a 640 GB Fusion-io ioDrive. These central servers are linked to the remaining, distributed servers, whose task is to analyse the data.

"There's one main server that receives data and three others that it's replicated to," Pendler said. "The storage was always going to be direct-attached, as the old model of NAS and SAN didn't work for us because we couldn't scale it horizontally. When we tried NAS, it killed the network."

Kontera tried spinning disks but was unable to attain the throughput required. "We tried installing six [15,000 rpm] drives per server, but writes were taking a couple of seconds," Pendler said. "Sixteen might have done the job, but the size and price of the server were a problem."

The company then tried implementing in-server, PCIe-attached flash memory products from OCZ but found the hardware to be unreliable, with a 50 percent failure rate in the first 90 days. "We tried a cheap solution with OCZ, but it didn't work," Pendler said.

So Kontera replaced the OCZ products with Fusion-io ioDrives and has since experienced no reliability issues.

"Compared with physical drives, we obtained 50 to 100 times better throughput," Pendler said. "Flash-based storage will also work out cheaper than spinning disks in the long term, and this was a factor in our decision to go with Fusion-io. Having multiple spinning disks eating power and generating heat is not a good idea."

Implementation of the ioDrives went smoothly, according to Pendler, with the servers upgraded one at a time and the data replicated across once each was back up and running.

And would Pendler do the same thing again? "We know where to go first next time."

The Pensions Trust opts for an all-flash appliance

The Pensions Trust (TPT) provides pension fund services to not-for-profit organisations from three offices in the UK. Using an all-VMware infrastructure based on VMware View, the IT department manages about 3 TB of storage in its SAN to support about 200 desktops, roughly 90 percent of which run Windows XP, while the rest run Windows 7. It recently bought an all-flash WhipTail Technologies Virtual Desktop XLR8r appliance to support its VDI operations.

Darren Bull, business support manager, said TPT decided to implement virtual desktops because of the wide disparity of configurations on the organisation's physical desktop PCs and the resulting high support load, which stretched IT resources. "The PCs were all in a different state, and we had problems supporting that, so it made sense to centralise and make it easier to troubleshoot," he said.

This led to performance problems, with users complaining that pilot virtual desktops were slow to respond. Bull said spinning disks were struggling to deliver the I/O loads that virtual desktop infrastructure (VDI) places on storage, and that he could foresee even bigger problems ahead when TPT migrates to Windows 7, planned for later in 2012, because Windows 7's I/O demands are higher than XP's.

"We calculated how many more spinning disks we would need to meet VDI's I/O requirements and could see that it would cost a lot of money," Bull said. "So we looked at Atlantis Computing's ILIO [virtual appliance], which would have meant we could carry on using traditional disks at the back end. However, the cost of added Atlantis and VMware licences and the risks created by the complexity of plugging something else in the middle that could go wrong made this solution less attractive.

"We also considered NetApp Flash Cache, but that means buying new storage controllers, which we found would cost too much and result in too much disruption of our existing infrastructure," he said.

"We found WhipTail's website by chance using Google, we got an evaluation unit and straight away we knew it was the right decision. It was fast, simple, and it worked. I wasn't aware of any other unit available for virtual desktops, and while people said we were brave, it has gone well. It has been very reliable and has has reduced issues and support for desktops."

The key benefit is clearly performance, according to Bull, and since the company does not run a large server room, any savings from reduced power and cooling are not significant. Bull also found that users appreciated the new infrastructure. "We surveyed users who said their virtual desktops were faster than their old physical PCs, and that's what the WhipTail delivered, so I consider it money well spent. It does exactly what it said it would do."

Bull said that the XLR8r could be improved upon but appreciates that WhipTail is a small, evolving company. "The management interface is a bit clunky and limited, but they do have plug-ins for vSphere coming up, and this box fits our environment from a size point of view. Also, we would like to enable deduplication, but this reduces throughput massively. However, the company says that it is developing a new deduping engine that will run at near-line speed."

Bull expressed concern that the XLR8r represented a single point of failure but said that WhipTail provides next-day replacement. "A redundant unit would be quite a big investment for us, so we will try and use our primary SAN as a lifeboat for the desktops. It's a cleverer way of doing it, even if it won't perform as well for the time it takes for a replacement to arrive."

In summary, though, Bull remains positive about the all-SSD storage: "It was a lot of money for a 1U box, but we're getting the kind of I/O that we would not get from a rack of disks. What's more, once the deduping is upgraded we might be able to fit some servers onto it."

Xicon selects Nimble hybrid HDD/SSD array

Cheshire-based Xicon specialises in providing cloud services for UK-based SMEs, including hosted desktops, applications such as Microsoft Exchange and database servers. The company has just incorporated into its virtualised cloud infrastructure a replicated pair of Nimble Storage CS220 hybrid HDD/SSD arrays, each with a usable capacity of 16 TB and 640 GB of flash.

Xicon's projections showed that with growing demand for desktop virtualisation and database server access, both heavily dependent on I/O performance, it would run out of IOPS over the following six months. "Our challenge was to find an infrastructure that could satisfy large customers and small, with five or 1,500 users," Managing Director Simon Heyes said.

"We currently use about 30 TB of storage from EMC and Dell EqualLogic," he said. "We have never had a problem with them, but for the next step in performance, we would either have to buy a lot of very fast spinning disks or SSDs.

"The load varies in terms of database transactions, and desktop applications are very time-sensitive with close to zero latency required. For example, if you're typing into a virtual desktop, delays make it very annoying and unusable."

Spinning disk technology alone would not be able to deliver the performance that Xicon's mixed workload demands, especially virtual desktops.

"When you read IOPS ratings, you can calculate that with hosted desktops you need between 10 IOPS to 30 IOPS per desktop, so 100 users need 3,000 IOPS, which is the performance of a fast array with fast drives. With 500 or 1,500 users, you need tons of those arrays. Effectively, you're up against the limitations of spinning disks and the laws of physics, and disks also cost a lot of money to run, generate heat and use power. It gets to the point where you think there must a better way," Heyes said.

So when Heyes looked for an alternative, he found the Nimble appliances, which combine SSDs with mechanical storage.

"Their claims of a combined appliance for backup and primary storage intrigued me because I thought it was a bad idea. However, it got my interest, as did Nimble's performance claims," Heyes said. "The way they used flash sounded feasible and proved itself to be so."

"Lots of storage vendors have used flash as an accelerator, but Nimble said they were doing something different: using high-capacity SATA drives to deliver high performance and capacity. They said they could deliver high performance with a fraction of the number of disks, which is important for a cloud provider because it occupies less floor space and uses less power."

Heyes obtained an evaluation unit and tested it. "I don't impose a new technology on my engineers just because it's cheap; it has to measure up. Our techies were well impressed."

Heyes also liked the Nimble units' storage and bandwidth efficiency. "We like the ability to create volumes with a block size that matches the application. This makes replication from one SAN to another very efficient because you don't waste a lot of bandwidth replicating empty space."
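
A crude way to see the point: if the volume's block size is much larger than the application's typical write, every replicated block carries mostly empty space. The figures below are illustrative assumptions, not Nimble measurements, and real replication engines are more sophisticated than this.

```python
import math

# Illustrative only: bytes shipped to the replica when each application write
# dirties at least one volume-sized block. Real arrays coalesce writes and
# compress replication traffic, so treat this as a worst-case intuition pump.
def replicated_bytes(write_size, writes, volume_block):
    blocks_touched = math.ceil(write_size / volume_block)
    return writes * blocks_touched * volume_block

writes, write_size = 100_000, 4 * 1024            # 100,000 random 4 KiB writes
for block in (4 * 1024, 32 * 1024):
    sent = replicated_bytes(write_size, writes, block)
    print(f"{block // 1024:>2} KiB volume blocks: {sent / 2**20:,.0f} MiB replicated")
# 4 KiB blocks: ~391 MiB;  32 KiB blocks: ~3,125 MiB, mostly empty space
```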

Overall, for Heyes the Nimble appliances' key benefits were lower hardware costs, efficient space usage, and power and bandwidth savings. He also liked the design and was impressed by Nimble's founders' heritage: "They worked at Data Domain, and we were impressed with that gear, so we thought Nimble would know what they're doing, what works and what doesn't."

The units, once installed, were up and running inside 30 minutes with no technical challenges or even a need to consult the manual.



This was first published in March 2012