July 2013 Archives

Storage virtualisation vs software-defined storage

Antony Adshead

The aim of this blog post is to iron out some misunderstandings around two common storage terms, terms that are actually closely connected: storage virtualisation and software-defined storage.

First let's deal with storage virtualisation. Here at ComputerWeekly.com we're pretty certain that there's a good deal of confusion about this term. In our yearly poll of readers we keep hearing that "storage virtualisation" is a key priority on IT department to-do lists for the coming year. This year that figure was 36%.

That figure seems unusually high. It's an unscientific measure, for sure, but as a storage journalist I get a fairly good idea of which types of project are hot from what comes across my desk and from speaking to customers and vendors, and I just don't hear much about storage virtualisation.

So, when those questioned in our poll ticked "storage virtualisation", what many probably thought we were asking was "is storage for virtualisation a priority?" Why? Because server and desktop virtualisation is a big priority for a lot of organisations right now and implementing storage and backup to support it is a key part of that process.

Meanwhile, storage virtualisation products allow organisations to build storage infrastructure from multiple vendors' hardware. Storage suppliers, of course, would prefer that they provided all of your disk systems. Consequently, while the key storage vendors have storage virtualisation products, it's not something they push particularly hard in marketing or sales terms.

Storage virtualisation products include EMC's VPLEX, IBM's SAN Volume Controller (SVC), NetApp's V-Series and Hitachi's VSP.

There are also the smaller storage virtualisation vendors and products, such as DataCore's SANsymphony, Seanodes' SIS, FalconStor's NSS and Caringo's CAStor.

These are all reasonably well-established products that allow users to create single pools of storage by abstracting the physical devices upon which they are layered to create a virtual storage array.
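To make "pooling by abstraction" concrete, here is a minimal, purely illustrative sketch in Python of what a storage virtualisation layer does: it aggregates capacity from several vendors' arrays and hands out volumes without the consumer knowing which physical box backs them. All class and method names here are hypothetical, not any vendor's actual API.

```python
# Hypothetical sketch: a virtual pool abstracting heterogeneous arrays.

class PhysicalArray:
    """One vendor's physical disk system, tracked only by free capacity."""
    def __init__(self, vendor, capacity_gb):
        self.vendor = vendor
        self.free_gb = capacity_gb

class VirtualPool:
    """The virtualisation layer: presents many arrays as one pool."""
    def __init__(self, arrays):
        self.arrays = arrays

    def total_free_gb(self):
        # The consumer sees a single aggregate capacity figure.
        return sum(a.free_gb for a in self.arrays)

    def allocate(self, size_gb):
        # Place the volume on whichever array has the most free space;
        # the caller receives "a volume from the pool", nothing more.
        target = max(self.arrays, key=lambda a: a.free_gb)
        if target.free_gb < size_gb:
            raise RuntimeError("pool exhausted")
        target.free_gb -= size_gb
        return {"size_gb": size_gb, "backing_vendor": target.vendor}

pool = VirtualPool([PhysicalArray("EMC", 500), PhysicalArray("NetApp", 300)])
vol = pool.allocate(100)  # lands on the EMC array, but the caller needn't know
```

Real products layer replication, tiering and migration on top of this basic abstraction, but the essential move is the same: a single pool of capacity decoupled from the physical devices beneath it.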

More recently, we've seen that capability emerge in the form of products at a higher, environment level.

Here I have in mind, for example, VMware's plans for Virtual SAN, which will allow pooling, from the hypervisor, of server-attached disk drives, with integration of advanced VMware features such as high availability and vMotion. It will scale to petabytes of capacity and, when it comes to fruition, will put some pressure on existing storage vendors serving the SME-to-small-enterprise market.

And there is EMC's ViPR environment, announced at EMC World 2013, which merges storage virtualisation with big data analytics. Key to this discussion is ViPR's ability to pool storage from direct-attached hard drives, commodity hardware and other vendors' arrays into a single reservoir of storage that's manageable from a single screen.

These initiatives contain a large dose of what has for a long time been called storage virtualisation but are billed as software-defined storage.

So, to what extent are either of these terms accurate reflections of the technology they represent?

Well, of course, both terms could be said to be so vague as to be almost meaningless. After all, all storage is based on the retention of data on a physical drive, but that would be nothing without software that abstracts, or virtualises, blocks and files onto physical media, RAID groups and LUNs. In other words, storage never exists without being defined by software, or virtualised, in some sense.

So, how do we make sure we're using these terms clearly? Well, on the one hand it seems reasonable that storage virtualisation should refer to the abstracting of multiple vendors' systems into a singly-manageable pool of storage. If there's any such thing as historical usage in storage and IT, then systems ranging from IBM's SVC to the likes of DataCore seem to fit that bill, and have done for some time.

Meanwhile, while we can recognise that VMware's planned Virtual SANs and EMC's ViPR are heavily based on storage virtualisation capability as defined here, they also go beyond this, to incorporate much higher level features than simple storage functionality, such as vMotion and big data analytics respectively.

Despite the efforts of some vendors, notably DataCore, which has gone from dubbing its products a "storage hypervisor" to calling them software-defined storage according to the whims of the IT marketing universe, it seems reasonable to define storage virtualisation quite narrowly, as centring on the ability to pool heterogeneous media into a single storage pool.

Meanwhile, software-defined storage can be reserved for higher-level, environment-type products that incorporate storage virtualisation but go beyond it.

It's always a battle to define terms in an area as fast-moving as IT, with multiple vested interests and active marketing departments, but it's certainly worth trying to define terms clearly so the customer can see what they're getting.



Atomic Writes at the leading edge of flash in the datacentre

Antony Adshead

A small item of storage news this week illustrates the changes taking place as part of the flash revolution, and also where its leading edge lies.

The news is that Fusion-io has submitted proposals for standardised APIs for Atomic writes to the T10 SCSI Storage Interfaces Technical Committee.

This is interesting because it is all about the interface between flash memory/storage and some of the most business-critical database apps.

Atomic operations are database operations where, for example, a single piece of information has multiple facets and you want either all or none of them read/written. Completing only one part, such as only the debit half of a credit-and-debit transfer in a banking system, is highly undesirable.
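The credit-and-debit example can be sketched with SQLite's transactions, which give the same all-or-nothing guarantee at the database level: the debit and credit below either both commit or, on failure, both roll back. The failure here is simulated for illustration.

```python
# Illustrative sketch of atomicity: both updates apply, or neither does.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

def transfer(conn, src, dst, amount):
    try:
        # The connection as a context manager opens a transaction,
        # committing on success and rolling back on any exception.
        with conn:
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
            if amount > 50:  # simulate a crash mid-transfer
                raise RuntimeError("simulated crash")
    except RuntimeError:
        pass  # the rollback has already undone both updates

transfer(conn, "alice", "bob", 60)  # fails: neither side is applied
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
# balances == {"alice": 100, "bob": 0}
```

What Fusion-io is proposing is, in effect, to push this all-or-nothing guarantee down from the database software into the flash storage layer itself.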

Until now, with spinning disk hard drives, databases such as MySQL have written data twice before acknowledgement, as a failsafe against drive failure. Clearly, such a doubling of operations is not optimal in terms of efficiency.

What Fusion-io has done is to eliminate that duplication of effort with APIs that build management of atomic operations into the flash memory itself.
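The contrast between the two write paths can be sketched as follows. This is a loose, hypothetical model (the dictionaries stand in for storage areas, and the function names are mine, not Fusion-io's or MySQL's): the failsafe path stages a copy before the final write, so a torn write can be repaired on recovery, while a hardware-guaranteed atomic write needs no staging copy.

```python
# Hedged sketch of the "write twice" failsafe versus an atomic write.

def doublewrite(storage, scratch, page_id, data):
    """Failsafe path: write the page twice, as described above."""
    scratch[page_id] = data   # first write: staging copy
    storage[page_id] = data   # second write: final location
    # On crash recovery, a torn page in `storage` could be
    # restored from the intact copy in `scratch`.

def atomic_write(storage, page_id, data):
    """With an atomic-write guarantee from the flash device,
    the staging copy (and so half the I/O) is unnecessary."""
    storage[page_id] = data
```

Halving the writes per acknowledged page is consistent with the throughput gains Fusion-io claims below, though the exact figures will depend on the workload.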

The flash pioneer claims its Atomic Writes capability provides performance throughput increases of up to 50%, as well as a 4x reduction in latency spikes, compared to running the databases on the same flash memory platform without it.

Gary Orenstein, marketing VP, said: "The background is that many have adopted flash as a direct replacement for the HDD. But Fusion-io believes flash should do more than that and that we should be moving away from the last vestiges of mechanical operations in the datacentre."

"What we're looking at are new ways to break new ground that are faster, with fewer instructions," he added.

Currently these capabilities come only with Fusion-io flash products and are already supported in the MariaDB and Percona MySQL distributions, but upon T10 standardisation they will be open to all vendors.

Stepping back to take a big-picture view, what this also illustrates is the contrast between the extremes of flash implementation in the datacentre.

On the one hand, there is this type of work at the leading edge of flash storage use, using software to squeeze ever-greater efficiencies from the interface between flash and the apps and operating systems it works with.

On the other, there are the legacy arrays in which flash drives act as a more or less dumb replacement for spinning disk HDDs.


About this Archive

This page is an archive of entries from July 2013 listed from newest to oldest.

May 2013 is the previous archive.

September 2013 is the next archive.
