Primary Data and the latest incarnation of the “storage hypervisor”

At VMworld in Las Vegas this week, Primary Data showcased its reworked take on storage virtualisation: in its case, a way of aggregating disparate storage resources across multiple protocols and making them available to applications as block, file or object storage, with performance policies applied via VMware's Virtual Volumes (VVOLs).

In conversation, Primary Data's founders pitched the Datasphere product as a “storage hypervisor” (not the first time the term has been used), one that can help IT departments save money by using idle disk space, tiering between different classes of disk, and reducing over-provisioning.

CEO Lance Smith’s first line of attack was against all-flash and its tendency to be used as a point solution for specific high-performance apps.

“The idea is that flash is super-fast,” he said. “For example, 10x to 40x faster than spinning disk. But often it is used in isolation from other storage or compute resources. So, users are forced to decide which files could benefit from flash, such as index or log files. But why not take all storage sets and make them available to applications?”
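To make the manual tiering decision Smith describes concrete, here is a minimal sketch of the kind of hand-rolled heuristic admins fall back on without a storage hypervisor. The file suffixes and access threshold are illustrative assumptions, not Datasphere logic.

```python
# Hypothetical manual tiering rule: hand-pick "hot" file types for flash.
from pathlib import Path

FLASH_SUFFIXES = {".idx", ".log", ".wal"}   # index/log-style files assumed hot

def pick_tier(path: Path, reads_per_hour: int, hot_threshold: int = 1000) -> str:
    """Return 'flash' for files assumed latency-sensitive, else 'disk'."""
    if path.suffix in FLASH_SUFFIXES or reads_per_hour > hot_threshold:
        return "flash"
    return "disk"

print(pick_tier(Path("orders.idx"), reads_per_hour=50))   # flash (by suffix)
print(pick_tier(Path("archive.dat"), reads_per_hour=5))   # disk
```

The weakness of this approach is exactly Smith's point: the rule is static, per-application, and blind to every storage resource it was not told about.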

Smith also pitched Datasphere as a remedy for IT departments' habitual overspending on storage.

“IT habitually overspends,” said Smith, “because it over-provisions storage for applications. So, there’s a misalignment between compute and storage with dormant VMs and a cost for data that’s sat in silos. Organisations routinely buy 3x to 5x more storage than they need.”

He added: “The root cause is that IT lacks time to procure properly, with no more than three to six months for procurement and deployment. So, they over-provision and deal with the waste because there’s no risk that way.”
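The arithmetic behind the 3x to 5x claim is easy to sketch. The capacity and price figures below are illustrative assumptions, not Primary Data numbers.

```python
# Rough cost of over-provisioning at the multiples Smith cites.
needed_tb = 100        # capacity an application actually uses (assumed)
price_per_tb = 500     # assumed $/TB for enterprise storage

for factor in (3, 5):
    bought_tb = needed_tb * factor
    waste = (bought_tb - needed_tb) * price_per_tb
    print(f"{factor}x provisioning: buy {bought_tb} TB, "
          f"${waste:,} spent on idle capacity")
```

Even at the conservative end of Smith's range, two-thirds of the spend sits idle.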

At its VMworld 2016 launch, Primary Data's stated aim is to bring the effect of virtualisation to storage, with automated archiving across all storage and a 50% reduction in the cost of over-provisioning.

Datasphere has a metadata controller that provides a logical view of data separated from its physical location. It allows customers to match storage type to application needs, with the cloud as a possible tier, and includes a policy engine that sets provisioning against service level objectives (SLOs) via VVOLs.
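A minimal sketch of the SLO-to-tier matching the policy engine is described as doing follows. The tier specifications, objective fields and the cheapest-tier-that-qualifies rule are all assumptions for illustration; the real engine drives placement through VVOLs policies.

```python
# Hypothetical policy engine: pick the cheapest tier meeting the SLO.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    max_latency_ms: float
    cost_per_gb: float

@dataclass
class Objective:           # a service level objective attached to a volume
    latency_ms: float      # worst acceptable latency

TIERS = [
    Tier("all-flash", 1.0, 0.80),
    Tier("hybrid",    5.0, 0.30),
    Tier("cloud",   100.0, 0.05),
]

def place(slo: Objective) -> Tier:
    """Return the cheapest tier that still meets the objective."""
    candidates = [t for t in TIERS if t.max_latency_ms <= slo.latency_ms]
    return min(candidates, key=lambda t: t.cost_per_gb)

print(place(Objective(latency_ms=2.0)).name)     # all-flash
print(place(Objective(latency_ms=60.0)).name)    # hybrid
print(place(Objective(latency_ms=200.0)).name)   # cloud
```

The point of expressing placement this way is that demotion to cheaper tiers, including cloud, becomes an automatic consequence of the objective rather than a manual migration project.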

It sits out of band, so customers’ apps deal directly with storage resources, but only after the controller has defined volumes and policies and opened and closed transactions.
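The out-of-band split between control path and data path can be sketched as below. All class and method names here are hypothetical, not Datasphere APIs; the point is only that the controller resolves placement and then steps out of the I/O path.

```python
# Hypothetical out-of-band pattern: controller maps logical to physical,
# then clients issue I/O directly to the storage resource.
class MetadataController:
    def __init__(self):
        self.placement = {"vol1": "nas01:/exports/vol1"}   # logical -> physical

    def open(self, volume: str) -> str:
        # Control plane: hand the client a direct target, then step aside.
        return self.placement[volume]

class Client:
    def __init__(self, controller: MetadataController):
        self.controller = controller

    def read(self, volume: str) -> str:
        target = self.controller.open(volume)        # out-of-band lookup
        return f"I/O issued directly to {target}"    # data bypasses controller

print(Client(MetadataController()).read("vol1"))
```

This is the architectural basis for Smith's later contrast with in-band products: the controller never touches the data itself.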

It can aggregate storage from any protocol (SAN, NAS etc) and provides access via block (SAN), file (NAS) and object (S3). It comes as a virtual appliance on a VM or as software installed on an x86 server.

So, is Datasphere just another storage virtualisation product? Why not get IBM's SAN Volume Controller hardware or DataCore's software storage virtualisation product?

“SVC? We’re not hardware and SVC only provides SAN storage,” said Smith. “DataCore also only allows block access. Also, DataCore touches the data. We’re out of band.”
