October 2013 Archives

Supplier hype-watch: Violin and DataCore at SNW Europe

Antony Adshead
IT trade events like SNW Europe this week are a great opportunity to study the techniques employed in IT suppliers' marketing messages.

Some use out-and-out distortions of commonly-understood technical terms.

Violin Memory, for example, loves to emphasise the second word in its name, ie "memory". One strapline it uses is "Run business at the speed of memory", and we're asked to think not as storage admins but as "memory architects" using its "persistent memory architecture" etc etc.

But how does all that stack up? Memory is traditionally the medium on the motherboard closest to the CPU, where a portion of the data and the application reside during working operations. Now, Violin may produce some very fast all-flash arrays, but are we really talking "the speed of memory" here?

The spec sheets for its high-end 6000 Series, for example - billed as "Primary storage at the speed of memory" - don't distinguish between reads and writes, and quote latencies of "under 250 μsec" for the SLC variants and "under 500 μsec" for the MLC variants.

I asked Violin CTO Mick Bradley how they could call it "memory" when it doesn't appear to conform with the commonly understood meaning of memory either architecturally or in terms of performance. His reply was: "We call it memory because it's silicon and it is addressed in pages of 16K."

Hmmm, such a definition doesn't cut much ice, especially now there is flash storage that really does operate at something like the speed of memory. An example of this so-called memory channel storage is Smart's ULLtraDIMM, launched earlier this year. A product like that could far more credibly claim to operate "at the speed of memory", with write latency of less than 5 μsec and a physical home in the motherboard's DIMM memory slots.
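To put those figures in rough perspective, here's a back-of-envelope comparison. The DRAM latency is an assumed ballpark of around 100 nanoseconds; the other numbers are the vendor claims quoted above.

```python
# Rough latency comparison (microseconds). The DRAM figure is an assumed
# ballpark; the rest are the vendor-claimed figures quoted above.
LATENCIES_US = {
    "DRAM (assumed ~100ns)": 0.1,
    "ULLtraDIMM write (claimed)": 5,
    "Violin 6000 SLC (claimed)": 250,
    "Violin 6000 MLC (claimed)": 500,
}

dram = LATENCIES_US["DRAM (assumed ~100ns)"]
for name, latency_us in LATENCIES_US.items():
    print(f"{name:30s} {latency_us:8.1f} us  ({latency_us / dram:,.0f}x DRAM)")
```

On those numbers the Violin arrays sit thousands of times further from the CPU, in latency terms, than the memory they borrow their name from.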

Meanwhile, others change the way they describe their product according to which way the wind is blowing.

Storage virtualisation software vendor DataCore is a great example here. At SNW Europe this week, DataCore EMEA solutions architect Christian Marczinke told us how the firm had been the pioneer of "software-defined storage".

Err, hang on there. DataCore's use of software-defined storage to describe its products dates back less than nine months, and is clearly a response to VMware's use of the term in its overall software-defined datacentre push and to EMC's ViPR hype.

In fact, until around a year ago DataCore referred to its product as a "storage hypervisor", clearly bending with the wind blown from VMware's direction. I dealt with that issue of nomenclature here.

Does all this quibbling over terminology matter? Well, on the one hand it obviously matters to the likes of Violin and DataCore, who, despite having great products, clearly feel the need to over-egg their puddings. And on the other hand it matters to IT journalists, because it's our job to ensure customers get a clear view of what products actually are.

To be continued . . .


The VMworld ecosystem: Avatar's Pandora or Total Recall's Mars?

Antony Adshead

Here at VMworld Europe in Barcelona the term ecosystem is being thrown around with gay abandon. It's a lovely-sounding word. It evokes life, the planet, lush green rainforests, myriad plants and animals living in harmony etc etc.

IT vendors like to use it for those reasons and all its positive associations.

VMware is particularly keen on it, and it seems most apt. The layers of virtualisation it has laid onto physical servers are now being joined by levels of abstraction above the network and storage infrastructures, and into those hypervisors it is gathering the intelligence to run nearly all aspects of the datacentre from ever fewer screens.

But stop for a second to think about what it means to step outside your ecosystem. Or alternatively, think about the movie Total Recall where the governor of Mars, Vilos Cohaagen, exercised his power through a total monopoly on breathable air.

Now, of course I'm not likening VMware's gathering of datacentre functionality to Cohaagen's tyranny, but look what happened when Cohaagen got sucked out of the safety of his ecosystem and onto the Martian surface.

Obviously this won't happen to you just because you deploy VMware in your datacentre, but there are good reasons to think deeply about what you're getting into.

Not least with storage, probably the area most affected by virtualisation. It accounts for something north of 50% of datacentre hardware costs, and Gartner has predicted those costs can rise by 600% upon virtualising your infrastructure. That's because packing lots of virtual servers into relatively few physical devices makes direct-attached storage unsuited to the resulting massive and random I/O demands, and almost always means an upgrade to shared storage SAN or NAS arrays.
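As a rough illustration of what those two figures imply taken together - the £1m baseline is an invented number purely for the sake of the arithmetic, and "rise by 600%" is read here as a six-fold increase:

```python
# Illustrative arithmetic only: the baseline budget is a made-up figure;
# the 50% share and the six-fold rise are the numbers cited above.
hardware_budget = 1_000_000    # hypothetical annual datacentre hardware spend (£)
storage_share = 0.5            # storage as "north of 50%" of hardware costs
virtualisation_multiplier = 6  # Gartner's predicted 600% rise, read as six-fold

storage_before = hardware_budget * storage_share
storage_after = storage_before * virtualisation_multiplier

print(f"Storage spend before virtualisation: £{storage_before:,.0f}")
print(f"Storage spend after virtualisation:  £{storage_after:,.0f}")
```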

The day-to-day consequence of this is that storage becomes more difficult to manage - masked as it is by the VMware ecosystem - as it fills up more quickly, requires more rapid provisioning and generates ever more complex, rapidly changing and easily broken dependencies and access profiles. And that's before we get to replication, mirroring, backup etc, all of which also present a massively complex and dependency-heavy payload on the VM infrastructure.

All of which goes to show there's a downside to the concept of an ecosystem. VMware et al like to portray themselves as the Na'vi in Avatar, as guardians of their idyllic world. But the reality can end up more like Total Recall, where breathing the air is costly but stepping outside is even more difficult and dangerous.

For that reason it pays to exercise due diligence over the consequences of datacentre virtualisation, the likely costs and knock-on effects for storage and backup, and to be sure you have surveyed all the alternatives available in the market.

Simplivity converged storage converges with the hyperscale

Antony Adshead

If you could build a datacentre - and more importantly its contents - from scratch, chances are it wouldn't look much like many of them do now. Technologies have come along and served their purpose as an advance on what went before, only to become the next generation's roadblock to efficient operations.

Take the x86 server. It replaced the mainframe or RISC UNIX server. In comparison to them it was cheap; you could put one app on each and keep adding them. But then, of course, we ended up with silos of under-used compute and storage. And latterly, to this was added shared storage - the SAN and NAS - which solved many problems but has challenges of its own.

How would the datacentre be designed if it was built from the ground-up now?

Well, there are two answers (at least) to that one. The first is to look at what the likes of Amazon, Google et al have done with so-called hyperscale compute and storage. This is where commodity servers and direct-attached storage are pooled on a massive scale with redundancy at the level of the entire compute/storage device rather than at the component level of enterprise computing.
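A crude way to picture that difference: rather than guarding against a failed disk inside a single array, hyperscale designs typically keep whole copies of each object on several independent commodity nodes, so losing an entire node costs nothing but a re-replication. A minimal sketch, with made-up node names and a deliberately simplistic placement scheme:

```python
import hashlib

# Hypothetical pool of commodity compute/storage nodes.
NODES = ["node-a", "node-b", "node-c", "node-d", "node-e"]
REPLICAS = 3  # redundancy at the level of whole nodes, not components

def place(object_key: str) -> list[str]:
    """Choose REPLICAS distinct nodes for an object by hashing its key."""
    start = int(hashlib.sha1(object_key.encode()).hexdigest(), 16) % len(NODES)
    return [NODES[(start + i) % len(NODES)] for i in range(REPLICAS)]

# Losing any single node still leaves two full copies of every object elsewhere.
for key in ["vm-image-42", "log-archive-2013-10"]:
    print(key, "->", place(key))
```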

The second answer (or at least one of them) is to look at the companies bringing so-called converged storage and compute to the market.

I spoke to one of them this week, Simplivity. This four-year-old startup has sold Omnicubes since early 2013. These are 20TB to 40TB capacity compute and storage nodes that can be clustered in pools that scale capacity, compute and availability as they grow, all manageable from VMware's vCenter console.

Omnicubes are essentially a Dell server with two things added. First is a PCIe/FPGA hardware-accelerated "Data Virtualisation Engine" that sees data on ingest broken into 4KB to 8KB blocks, deduplicated, compressed and distributed across multiple nodes for data protection as well as being tiered between RAM, flash, HDD and the cloud.
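Simplivity doesn't publish the internals of that engine, but the general technique it describes - content-addressed deduplication and compression of small fixed-size blocks at ingest - can be sketched roughly as follows. The block size, hash choice and in-memory index are all assumptions for illustration, not details of the Omnicube itself.

```python
import hashlib
import zlib

BLOCK_SIZE = 8 * 1024  # 8KB, at the top of the 4KB-8KB range mentioned above

# In-memory stand-in for a deduplicating block store: content hash -> compressed block.
block_store: dict[str, bytes] = {}

def ingest(data: bytes) -> list[str]:
    """Split data into blocks, dedupe by content hash, compress only new blocks."""
    refs = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in block_store:      # store each unique block only once
            block_store[digest] = zlib.compress(block)
        refs.append(digest)                # ingested data becomes a list of references
    return refs

payload = b"0123456789abcdef" * 512        # exactly one 8KB block's worth of content
refs = ingest(payload * 8)                 # eight identical blocks on ingest
print(f"{len(refs)} block references, {len(block_store)} unique block(s) stored")
```

The point of doing this at ingest, and in hardware, is that everything downstream - tiering, replication, WAN transfer - then moves only unique, compressed blocks.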

Second is its operating system (OS), built from scratch to ensure data is dealt with at sufficient levels of granularity and with dedupe and compression built in plus its own global, parallel file system.

With all this, Simplivity claims in one fell swoop to have replaced product categories including the server, storage, backup, data deduplication, WAN optimisation and the cloud gateway.

And to some extent the claim rings true. By dealing with data in an optimum fashion from ingest onwards, parsing it in the most efficient way and distributing it according to what's most efficient and safe, it has come up with something like how you'd deal with data in the datacentre if you were to design its parts from scratch right now.

That's not to say it's without limitations. Currently Simplivity is only compatible with the VMware hypervisor, though KVM and Microsoft Hyper-V variants are planned. And it is of course a proprietary product, despite the essentially commodity hardware platform (except the acceleration card) it sits upon, and you might not put that on your wishlist of required attributes in the 2013 datacentre.

Still, it's an interesting development, and one that demonstrates a storage industry getting to grips with the hyperscale bar that the internet giants have set.
