Simplivity converged storage converges with the hyperscale

Antony Adshead

If you could build a datacentre - and more importantly its contents - from scratch, chances are it wouldn't look much like many of them do now. Technologies come along and serve their purpose as an advance on what went before, but later become the next generation's roadblock to efficient operations.

Take the x86 server. It replaced the mainframe or RISC UNIX server. In comparison to those it was cheap; you could put one app on each and keep adding them. But then, of course, we ended up with silos of under-used compute and storage. And latterly shared storage - the SAN and NAS - was added, which solved many problems but brought challenges of its own.

How would the datacentre be designed if it was built from the ground-up now?

Well, there are two answers (at least) to that one. The first is to look at what the likes of Amazon, Google et al have done with so-called hyperscale compute and storage. This is where commodity servers and direct-attached storage are pooled on a massive scale with redundancy at the level of the entire compute/storage device rather than at the component level of enterprise computing.

The second answer (or at least one of them) is to look at the companies bringing so-called converged storage and compute to the market.

I spoke to one of them this week, Simplivity. This four-year-old startup has sold Omnicubes since early 2013. These are 20TB to 40TB capacity compute and storage nodes that can be clustered in pools that scale capacity, compute and availability as they grow, all manageable from VMware's vCenter console.

Omnicubes are essentially Dell servers with two things added. The first is a PCIe/FPGA hardware-accelerated "Data Virtualisation Engine" that breaks data on ingest into 4KB to 8KB blocks, deduplicates and compresses them, and distributes them across multiple nodes for data protection, as well as tiering data between RAM, flash, HDD and the cloud.
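To illustrate the general idea - not Simplivity's actual FPGA-based engine, just a minimal sketch of inline deduplication and compression - the snippet below cuts incoming data into fixed-size blocks, hashes each block, and stores only previously unseen blocks in compressed form, representing the data itself as a list of block hashes. Names such as DedupeStore and BLOCK_SIZE are illustrative assumptions.

```python
import hashlib
import zlib

BLOCK_SIZE = 8 * 1024  # assume fixed 8KB blocks; the article quotes 4KB to 8KB


class DedupeStore:
    """Toy content-addressed block store: each unique block is
    compressed and kept once, keyed by its hash."""

    def __init__(self):
        self.blocks = {}  # block hash -> compressed block

    def write(self, data: bytes) -> list[str]:
        """Split incoming data into blocks, dedupe and compress on ingest.
        Returns the list of block hashes (the 'recipe' for the data)."""
        recipe = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            key = hashlib.sha256(block).hexdigest()
            if key not in self.blocks:  # only previously unseen blocks cost space
                self.blocks[key] = zlib.compress(block)
            recipe.append(key)
        return recipe

    def read(self, recipe: list[str]) -> bytes:
        """Reassemble the original data from its block hashes."""
        return b"".join(zlib.decompress(self.blocks[h]) for h in recipe)
```

In the real product this work is done in hardware at ingest, and the unique blocks are also replicated across nodes and tiered between RAM, flash, disk and cloud rather than held in a single in-memory dictionary as in this sketch.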

The second is its operating system (OS), built from scratch so that data is handled at sufficient granularity, with dedupe and compression built in, plus its own global, parallel file system.

With all this, Simplivity claims in one fell swoop to have replaced product categories including the server, storage, backup, data deduplication, WAN optimisation and the cloud gateway.

And to some extent the claim rings true. By handling data optimally from ingest onwards - parsing it efficiently and distributing it in the way that is most efficient and safe - Simplivity has come up with something close to how you'd deal with data in the datacentre if you were designing its parts from scratch right now.

That's not to say it's without limitations. Currently Simplivity is only compatible with the VMware hypervisor, though KVM and Microsoft Hyper-V variants are planned. And it is, of course, a proprietary product, despite the essentially commodity hardware platform (bar the acceleration card) it sits upon, and you might not put that on your wishlist of required attributes for the 2013 datacentre.

Still, it's an interesting development, and one that demonstrates a storage industry getting to grips with the hyperscale bar that the internet giants have set.
