Datrium rebrands hybrid cloud data platform as Automatrix
NVMe pioneer Datrium announces ControlShift disaster recovery automation as a service, while bundling its data platform offerings as Automatrix, and lauds the benefits compared to siloed IT
Datrium has bundled its data platform offerings under the name Automatrix, while adding disaster recovery as a service to run applications on VMware virtual machines (VMs) in the Amazon Web Services (AWS) cloud.
It has announced all this while taking a swipe at the main storage area network suppliers.
The NVMe flash maker has transformed into a “data platform” provider, offering primary storage, backup, disaster recovery (DR), data mobility and encryption that works across on-premise deployments as well as in the cloud.
It has dubbed this Automatrix and added ControlShift, disaster recovery orchestration that provides a five-minute recovery point objective (RPO) with zero recovery time objective (RTO), offering failover from cloud or on-premise DVX deployments to Cloud DVX or VMware on the Amazon cloud. Microsoft Azure support is planned for 2020.
ControlShift also offers continuous testing of DR infrastructure.
Meanwhile, the company’s Cloud DVX has scaled customer capacity from 30TB to 1.15PB in the cloud.
Broad data platform
Datrium started as a company very much focused on NVMe flash storage, but is now making a big deal of hybrid cloud and sees itself as a broad data platform.
Here, the firm is responding to a trend towards hybrid cloud and multicloud operations. Its core on-premise product is DVX, which sees NVMe flash storage on server compute nodes with bulk hard disk drive-based storage on data nodes. DVX provides flash storage performance of three to four times that of SCSI-connected flash.
Last year, Datrium added Cloud DVX, which runs in the AWS cloud as an adjunct to on-premise deployments.
Datrium executives are keen to point out that many of the functions the company offers are usually split between different suppliers’ products, and are therefore siloed to some extent.
“Customers say they get inconsistent experiences with different virtualisation planes and different data planes,” said CEO Tim Page. “Customers also need several different products, and these are landlocked, with silos that can’t move together.”
It is the same in the cloud, said Page, citing AWS, where S3, EBS and Glacier are built on different code bases.
“You see all the old array vendors trying to move their old cages nearer to the cloud and calling it a cloud strategy. SAN technology served a purpose, but we don’t believe it’s the way of the future,” he said.
“With SANs, you have to create LUNs [logical unit numbers], configure multipathing and masking, deploy switching and map VMs to LUNs. But the business wants to deal with applications and VMs. We allow this with a guaranteed pool that is VM-centric, with a common code base between the different areas of functionality.”
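To illustrate the host-side configuration Page is contrasting with a VM-centric approach, the multipathing step alone typically involves hand-editing a file such as `/etc/multipath.conf` on each Linux host attached to the array. The fragment below is a hypothetical sketch (the WWID and alias are invented), not any specific array vendor’s recommended settings:

```
# /etc/multipath.conf -- illustrative fragment only; WWID and alias are hypothetical
defaults {
    user_friendly_names yes   # use readable names instead of raw WWIDs
    find_multipaths     yes   # only create multipath devices for multi-pathed LUNs
}

multipaths {
    multipath {
        wwid  3600508b4000156d700012000000b0000   # LUN identifier from the array
        alias vmfs_datastore_01                   # LUN later mapped to a VM datastore
    }
}
```

Each new LUN means repeating this kind of per-host, per-device configuration, alongside zoning and masking on the switch and array side, which is the administrative overhead Datrium says its VM-centric pool removes.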