October 2012 Archives

X-IO and why clever drive technologies could be a good bet

Antony Adshead | No Comments
| More

If I were a betting man when it came to the prospects of storage businesses, I might be tempted to put some money on the mid- to long-term prospects of X-IO.

X-IO - which revealed an addition to its hybrid flash array line this week at SNW Europe - makes storage arrays, some pure HDD, some hybrid flash SSD-HDD, that target performance applications such as VDI, OLTP and business intelligence/data warehousing.

It doesn't offer the highest performance available, such as you might get from an all-flash array, and the company pooh-poohs the idea that you need expensive integrated database-specific compute/storage products to run what others might call 'big data' use cases. Instead it touts its ISE and Hyper ISE arrays, paired with commodity servers and Microsoft SQL Server 2012 databases, as adequate for most.

Nothing that unusual so far; it's a vendor selling wares that perform adequately for the job they aim at. Where X-IO is different, however, is that its arrays don't contain commodity hard drives, unlike just about every other storage array vendor.

Instead, X-IO products come with IP inherited from its purchase of Seagate's Advanced Storage Architecture (ASA) group in 2007, namely sealed-unit, 20-drive DataPacs with a five-year guarantee that are engineered to be more reliable and longer-lasting than standard hard drives. X-IO says its drives last 2.4x longer than a standard Seagate HDD, with an MTBF of 850,000 hours for an individual drive in a DataPac.
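A quick back-of-envelope check of that claim, reading the 850,000-hour figure as the MTBF of a DataPac drive and 2.4x as the multiplier over a standard drive (my reading of the numbers, not X-IO's exact framing):

```python
# Back-of-envelope check of X-IO's reliability claim. Illustrative only:
# assumes 850,000 hours is the DataPac drive's MTBF and 2.4x the claimed
# multiplier over a standard Seagate HDD.
DATAPAC_MTBF_HOURS = 850_000
CLAIMED_MULTIPLIER = 2.4

# Implied MTBF of the standard drive X-IO is comparing against
implied_standard_mtbf = DATAPAC_MTBF_HOURS / CLAIMED_MULTIPLIER  # ~354,000 hours

# MTBF expressed in years of continuous operation
datapac_mtbf_years = DATAPAC_MTBF_HOURS / (24 * 365)  # ~97 years

print(f"Implied standard-drive MTBF: {implied_standard_mtbf:,.0f} hours")
print(f"DataPac drive MTBF: roughly {datapac_mtbf_years:.0f} years continuous")
```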

They achieve this with anti-vibration mountings, features such as diverting the cooling intake on physical startup to stop gathered dust being drawn into the array, and details such as retained mounting screws; no lost screw, no chance of unwanted movement, is the aim here.

At the drive software/controller level, standard Seagate firmware is stripped out and X-IO's own installed, while data is written in a grid pattern across the drives in a DataPac. A fault-repair system goes through a triage process, starting with a reboot, which fixes most drive issues. If that doesn't work, the drive can be reformatted in situ, and if a fault is then isolated, a single platter surface and its head can be locked out of use while the rest of the drive is reinstated.
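That triage sequence can be sketched as a simple escalation loop. This is a hypothetical illustration of the process as described; the function and all names are mine, not X-IO's firmware:

```python
# Illustrative sketch of the fault-triage escalation X-IO describes:
# reboot first, then in-situ reformat, then lock out a single faulty
# surface and its head while reinstating the rest of the drive.
# All names here are hypothetical, not X-IO's actual firmware API.

def triage_drive(reboot, reformat_in_situ, locate_faulty_surface, lock_out_surface):
    """Each argument is a callable; the first two return True on success,
    locate_faulty_surface returns a surface ID or None."""
    if reboot():                       # step 1: most drive issues clear on reboot
        return "recovered_by_reboot"
    if reformat_in_situ():             # step 2: reformat without pulling the drive
        return "recovered_by_reformat"
    surface = locate_faulty_surface()  # step 3: isolate one surface and head
    if surface is not None:
        lock_out_surface(surface)      # the rest of the drive returns to service
        return f"surface_{surface}_locked_out"
    return "drive_failed"              # nothing worked; drive is out
```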

Why is this a good betting prospect? The next few years will likely see much of the intelligence of storage, the job of the controller in assembling and provisioning volumes of storage and handling features such as replication, thin provisioning and so on, move to the virtual server stack. VMware, for example, recently signalled its intent to bring storage virtualisation capabilities to future versions of its hypervisor.

Should such moves come to pass, storage array vendors selling arrays of up to petabyte capacities could find the rug pulled from under them as the likes of VMware assemble and manage storage capacity from the virtual server.

You could, of course, build storage this way from all sorts of drives; in old arrays, as direct-attached storage, from JBODs full of commodity drives. But not everyone will be happy with that for reasons of reliability. And that leaves the way open for providers, like X-IO, of drive subsystems that specialise in the intelligence that is close to the drives and that provides reliability and resilience.

Maybe everything in storage will one day be controlled from the virtual server, but it feels like a fairly safe bet that the hypervisor vendors will not get into that level of drive management for the time being.

Scale Computing's classic vendor BS

Antony Adshead | No Comments
| More

The subject of this blog is a briefing this week with Scale Computing. And a major factor in why it's getting written about here is the laughable level of chutzpah involved on Scale's part.

The occasion for the conversation was Scale Computing's re-launch of its HC3 "datacentre in a box", a converged stack that comprises server, storage and virtualisation hypervisor in one device. The HC3 comes in 1U nodes, each holding four 3.5" SAS or SATA drives. You can have a minimum of three nodes and up to eight, which will serve about 100 VMs.

I say re-launch because they actually unveiled the device in August in the US at VMworld. This week's launch was at IP Expo in London. Why do vendors think we don't know this is a re-packaged, warmed-over, not-really-a-launch launch?

But, this was the killer. Scale Computing's HC3 has virtualisation built in. Is it VMware perhaps? Or Hyper-V? Or even Citrix? Nope. It's Red Hat's KVM. Naturally, I questioned the choice of only offering such a niche hypervisor.

Now, I'm not knocking Red Hat KVM's technology. It's a kernel-based hypervisor built into Linux itself, and as such runs close to the hardware and can be very efficient.

In response, one of the Scale guys attempted to convince me, "It's the most popular hypervisor on the planet." I asked them to back this up and I'm still waiting for some emailed evidence, but I was told at the time that Red Hat KVM is in use with some big names in the cloud, like Rackspace, IBM and Google. I haven't verified this, by the way.

Anyway, it turns out Red Hat KVM doesn't even register on the V-index survey of most popular server virtualisation hypervisors, which at the last count had a ranking of: VMware 67.6%, Microsoft Hyper-V 16.4%, Citrix 14.4% and other 1.6%.

So much for, "The most popular hypervisor on the planet." Red Hat KVM accounts for some fraction of the 1.6% of hypervisors in use classed as "other". That's not to say it'll always be that way, but Scale's hyperbole here was wide of the mark for now, like several parsecs wide of the mark, and by the end of the call some rowing back had been done, to say the least.

It also baffles me slightly why Scale Computing would try to tout the supposed high-end enterprise/cloud credentials of the Red Hat hypervisor in what is avowedly a mid-market play that aims to compete with the likes of Nutanix, Pivot3 and Simplivity.

Anyway, the lesson, dear vendors, is if you don't have any actual news, then please feel free to tell me massive ridiculous porkies that I can call you on and use as the hook for an interesting discussion on hypervisor types and their relative popularity.

The cost of clustered NAS, and can it boost Overland?

Antony Adshead | No Comments
| More

Almost exactly a year ago I spoke to Overland Storage about their then-new DX1 and DX2 traditional NAS boxes. At the time I questioned why clustered NAS capability had been omitted. After all, it's a technology that makes total sense; instead of buying traditional NAS devices that are doomed to become silos of data, customers would be far better served by the ability of clustered NAS to scale capacity, I/O and throughput and for all devices to see a single file system.

Well, this week Overland has announced the fruits of development following its acquisition of Maxiscale's clustered NAS intellectual property two years ago. Overland has taken this, added two years of engineering effort and developed a new clustered NAS OS, called RAINcloud OS.

RAINcloud OS is incorporated in the new SnapScale clustered NAS product. The product comes as a minimum of three nodes in a cluster, with a minimum of four drives in each. You can put a maximum of 12 drives in each node, or run nodes at less than full capacity and add more of them to gain I/O and throughput. Drives are nearline SAS and can run under RAID levels 0, 5, 6 or 10. SSD will be added in 2013, as will automated tiering.
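Those constraints can be expressed as a quick validity check on a proposed cluster layout. This is a hypothetical sketch based only on the figures quoted above; the function and names are mine, not Overland's:

```python
# Hypothetical validity check for a SnapScale cluster layout, based on
# the published constraints: at least three nodes, 4-12 nearline SAS
# drives per node, and RAID levels 0, 5, 6 or 10.
MIN_NODES = 3
MIN_DRIVES, MAX_DRIVES = 4, 12
VALID_RAID_LEVELS = {0, 5, 6, 10}

def valid_cluster(drives_per_node, raid_level):
    """drives_per_node: a list with one drive count per node."""
    if len(drives_per_node) < MIN_NODES:      # need at least three nodes
        return False
    if raid_level not in VALID_RAID_LEVELS:   # only 0, 5, 6 or 10 supported
        return False
    # every node must carry between 4 and 12 drives
    return all(MIN_DRIVES <= n <= MAX_DRIVES for n in drives_per_node)

# e.g. three nodes of four drives each under RAID 5 is a minimal valid cluster
print(valid_cluster([4, 4, 4], 5))
```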

The RAINcloud OS can scale to a staggering 512 PB in a single file system and Andrew Walby, Overland's EMEA and Asia Pacific sales and marketing VP, says they've tested it with 200 nodes with no loss of performance.

So, what do you pay for clustered NAS capability? Well, for 24 TB of Overland's DX traditional NAS you would pay around $8,000, while for 24 TB of SnapScale clustered NAS you'd shell out around $20,000.
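On a per-terabyte basis, those round list prices work out as follows (a quick illustrative calculation, nothing more):

```python
# Per-terabyte comparison of the quoted round-figure prices.
CAPACITY_TB = 24
dx_price = 8_000          # Overland DX traditional NAS, 24 TB
snapscale_price = 20_000  # SnapScale clustered NAS, 24 TB

dx_per_tb = dx_price / CAPACITY_TB                 # ~$333/TB traditional
snapscale_per_tb = snapscale_price / CAPACITY_TB   # ~$833/TB clustered
premium_per_tb = snapscale_per_tb - dx_per_tb      # $500/TB for the clustering code

print(f"${dx_per_tb:.0f}/TB vs ${snapscale_per_tb:.0f}/TB "
      f"(premium ${premium_per_tb:.0f}/TB)")
```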

That's a $12,000 premium for some clever code, and according to Walby, that's cheap for clustered NAS compared to the likes of Isilon. He was at pains to point out the work that went into RAINcloud.

"The concepts of clustered NAS are simple but the engineering is very complex," said Walby, who added that he hoped it would usher in better times for the company, which has suffered in recent years. "It could be a game-changer for Overland. There are not as many players as in traditional NAS and we come in a lot cheaper than the competition but still have enterprise features such as snapshots."

Violin - a proudly proprietary storage vendor . . .

Antony Adshead | 1 Comment
| More

I met with Violin Memory at VMworld Europe last week in Barcelona. It's always good to spend a bit of time talking with vendors and get to see under the skin of the company a bit.

Chief impression was that Violin spends a lot of time telling you what it's not about.

"We don't do cache," is one of its pronouncements. It believes the job of its flash is to act as super-fast storage in its own right, not as a cache corrective for the deficiencies of spinning disk. "You have to stop thinking about flash as disk augmentation," Violin technology VP for EMEA Mick Bradley told me.

"We don't believe in data tiering*," they also say. Here again they have faith in their ability to, "provide the performance of flash at a price comparable to tier 1 disk", meaning, in Violin world, that your hot data should be on their product and there's no need for it to be anywhere else except in backups and then archives.

Hybrid flash storage? "A race to the bottom in one use case", ie virtual desktops.

Server-side flash? Another compromise.

Perhaps its boldest and potentially most confusing claim comes when it tells you: "We don't sell SSD."

It does, of course. Violin bases its all-flash array products on NAND flash chips it obtains via a supply chain deal with Toshiba. It puts this silicon on bespoke cards that carry all its special sauce; ie all the software that does the striping, data protection, wear mitigation etc across these so-called VIMMs, or Violin Intelligent Memory Modules.

So, what Violin actually means when it says, "We don't sell SSD" is that it doesn't sell commodity SSD in 2.5" or 3.5" disk drive format.

You can't say Violin doesn't aim for a bold idea of what it does and doesn't do, and it has a decent roster of customers including, most recently, the UK's air traffic control organisation, NATS.

But, informal conversations also reveal a frustration with a customer community that rarely looks beyond the big four or five storage array vendors. You know, the ones you'll never get sacked for buying from.

That, however, is the lot of the small storage vendor, especially one that so proudly ploughs its own furrow with technology that is obviously deeply proprietary. It's not like you could simply swap in commodity drives to a Violin array if the company or its arrangement with Toshiba went belly up.

It's one of those contradictions of the storage industry and of IT in general; the more you carve your own profitable proprietary niche, the more you make yourself a potential single point of failure. And that's a fact that can't be lost on potential customers.

(* Despite not believing in data tiering, Violin plans to add it to future Violin Memory arrays, Bradley said at VMworld)

(For blog posts before mid-September 2012 see UK Data Storage Buzz.)

. . . And Whiptail's commodity drives that aren't

Antony Adshead | 1 Comment
| More

Whiptail is another all-flash array vendor. Unlike Violin Memory, it doesn't mince words about whether or not it is an SSD vendor.

It sells hardware that ranges from 3 TB to 72 TB: a head/controller on top of Intel 2.5" MLC drives. Its software provides buffering intelligence that deals with flash wear issues, and RAID levels 0, 5, 6 and 10 are available.

In conversation with Whiptail EMEA VP and general manager Brian Feller he made a point of stating that the vendor uses "commodity drives". Here's how the conversation went after that.

Me: "Commodity drives, you say? So, I could buy drives from anywhere, as a commodity, and use them in a Whiptail array?"

Brian: "No. You have to buy them from us or you invalidate the warranty because we quality assure them. We remotely monitor all our arrays so we'd know, and we'd also cut technical support."

It's quite remarkable that "commodity" can come to mean "a product you can only buy from one company". It's also staggering that Whiptail justifies this by the need to QA drives from Intel. It's not like Intel is some SE Asian white-box no-name vendor.

But that's the world of storage, which sometimes feels years behind other areas of IT in terms of customer lock-in.


NetApp is storage #1. Oh, really?

Antony Adshead | No Comments
| More

On the streets around VMware's VMworld Europe 2012 event this week you could see a mobile advertising vehicle bearing a hoarding that declares: "NetApp Data ONTAP is the world's #1 storage OS? Yep."

It's a bold claim, and if true, NetApp would be right to plaster it to the side of vans. But it's not as straightforward as they'd like it to be.

In formal terms it's true. NetApp commands the second-biggest or close to second-biggest market share among storage vendors globally and all its storage arrays use the Data ONTAP OS. The world's biggest storage vendor, EMC, has plenty more market share but uses different OSs in its midrange and high-end arrays so loses out on the ability to declare any of them "the world's #1".

But, what is the claim to be world's number one storage OS really worth? Not a lot, really. Firstly, NetApp gets to claim that mantle because of its position in the market. It's a bit like Toyota declaring Toyota engines the world's most popular, which is true because it is the biggest seller of cars worldwide. Just like you don't get a Toyota without one of its engines in it, you don't get a NetApp filer without ONTAP in it.

And, while NetApp probably has EMC's multi-OS product range in its sights as a subtext to the advert, the claim to have a single OS across all products is only worth anything if that means many of your devices can work together. NetApp has made strides towards this with its recent announcement of true NAS clustering that can scale to 20 PB and 20 billion files in ONTAP 8.1.1 but that is currently limited to five HA pairs of devices.
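For a sense of what those limits mean per node: five HA pairs is ten nodes, so assuming an even spread across the cluster (an illustrative average only):

```python
# Illustrative per-node density for the quoted ONTAP 8.1.1 clustering
# limits, assuming an even spread across five HA pairs (ten nodes).
MAX_CAPACITY_PB = 20
MAX_FILES = 20_000_000_000
HA_PAIRS = 5
nodes = HA_PAIRS * 2  # each HA pair is two nodes

pb_per_node = MAX_CAPACITY_PB / nodes  # 2 PB per node on average
files_per_node = MAX_FILES // nodes    # 2 billion files per node on average

print(f"{pb_per_node} PB and {files_per_node:,} files per node, on average")
```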

So, really, the NetApp ad should read: "NetApp Data ONTAP is the world's #1 storage OS? So what?"


About this Archive

This page is an archive of entries from October 2012 listed from newest to oldest.

January 2013 is the next archive.

Find recent content on the main index or look in the archives to find all content.