Infinidat wants to put itself among the big guns in storage by opening persistent storage to Kubernetes, adding S3 connectivity and NVMe-over-fabrics, and making use of the highest-capacity hard drives.
Infinidat made its name with high-performance NAS and SAN disk capacity built from inexpensive components. Company founder and enterprise storage veteran Moshe Yanai recently stepped down as CEO, handing the reins to COO Kariel Sandler and CFO Nir Simon, whom he sees as eminently suited to guide the organisation through an era of “hyper-growth”.
First moves in this direction have come with Infinidat’s launch of Container Storage Interface (CSI) drivers that can expose its Infinibox arrays as persistent storage for Kubernetes container clusters. The move has gained approval from VMware for its Tanzu Kubernetes Grid, from Red Hat for OpenShift, and from Google for Anthos. It should be able to function with any Kubernetes distribution going forward.
These CSI drivers allow Kubernetes to access Infinibox storage in SAN mode via Fibre Channel or iSCSI, or in NAS mode via NFS with TreeQ, an Infinidat mechanism for regulating the use of directories.
The CSI driver brings dynamic provisioning, on-the-fly volume resizing, backup and restore from snapshots, data import and volume cloning.
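As a rough illustration, persistent storage from a CSI driver of this kind is typically consumed through a Kubernetes StorageClass and PersistentVolumeClaim. The provisioner name and parameters below are illustrative assumptions, not Infinidat’s documented values:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: infinibox-nfs
provisioner: infinibox-csi-driver   # illustrative provisioner name
parameters:
  storage_protocol: nfs             # SAN access via fc or iscsi would also be possible
allowVolumeExpansion: true          # enables the on-the-fly resizing described above
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: infinibox-nfs
  resources:
    requests:
      storage: 100Gi
```

A pod that mounts the `app-data` claim would then have its data persisted on the array rather than on the node’s local disk.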
The cloning function can use other Infiniboxes as targets, or Neutrix Cloud, Infinidat’s cloud storage service. That becomes possible once Elastic Data Fabric – the logical pool formed from the physical and virtual Infinidat instances in an enterprise – is activated.
Data held in Neutrix Cloud storage is accessible directly from container clusters that run in AWS, Azure and GCP clouds in web mode without use of the CSI.
RAM cache plus disk to outperform all-flash?
Infinibox is marketed as an array that performs better than all-flash at lower cost. The low cost comes from being built mostly from traditional spinning HDDs. The performance comes from compensating for the slowness of individual components with a RAM cache that is as effective for writes as it is for reads.
“Our customers see us as a competitor to flash arrays at half the price,” said Yanai in a virtual event attended by ComputerWeekly.com’s French sister website LeMagIT.
Read more about Infinidat
- Core DataCloud decommissions storage from several vendors to consolidate to 1PB on Infinidat Infinibox, with DRAM and flash to accelerate hot data in front of disk back end.
- Infinidat storage takes direct aim at IBM with Data Rescue Program. IBM users can trade in IBM XIV and FlashSystem A9000 gear for Infinidat InfiniBox disk arrays.
In an Infinibox, each file is chopped into 64KB pieces that are distributed across the hard drives. During reads, the array loads its cache with these shards by reading them in parallel from many drives at once. The aim is to transfer a file to the host just as quickly as an all-flash array could, where the data would be read sequentially from a single SSD. Competing arrays, by contrast, use cache in RAM only for writes.
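The sharding-and-parallel-read idea can be sketched in a few lines of Python. This is a toy model, not Infinidat’s implementation: lists stand in for drives, and threads stand in for concurrent disk I/O.

```python
from concurrent.futures import ThreadPoolExecutor

SHARD_SIZE = 64 * 1024  # 64KB pieces, as described in the article


def shard(data: bytes) -> list[bytes]:
    """Split a file into fixed-size 64KB pieces (the last may be shorter)."""
    return [data[i:i + SHARD_SIZE] for i in range(0, len(data), SHARD_SIZE)]


def distribute(shards: list[bytes], num_drives: int) -> list[list[bytes]]:
    """Spread shards round-robin across the drives."""
    drives: list[list[bytes]] = [[] for _ in range(num_drives)]
    for i, s in enumerate(shards):
        drives[i % num_drives].append(s)
    return drives


def read_shard(drives: list[list[bytes]], i: int) -> bytes:
    """Fetch shard i; in a real array each call would hit a different spindle."""
    return drives[i % len(drives)][i // len(drives)]


def parallel_read(drives: list[list[bytes]], num_shards: int) -> bytes:
    """Issue all shard reads concurrently, the way the array fills its RAM cache."""
    with ThreadPoolExecutor() as pool:
        shards = pool.map(lambda i: read_shard(drives, i), range(num_shards))
        return b"".join(shards)
```

Because every shard read targets a different drive, the wall-clock time for the whole file approaches that of the slowest single shard read rather than the sum of all of them.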
“Our Infinibox is so performant that there’s no need to use costly memory products like Optane”, said product head Yair Cohen.
He did not cite any competitors in particular, but he probably meant Pure Storage and Dell EMC, both of which have recently begun to offer Intel Optane in their arrays, and with whom the leadership at Infinidat seems preoccupied.
Pure is a supplier that seems firmly ensconced in some enterprise sectors, such as banking, despite having been a startup itself only recently. Meanwhile, Dell EMC built its reputation on its Symmetrix/VMAX arrays, which were invented by Infinidat founder Yanai.
One Infinibox provides 10PB of useable capacity in a 42U rack. The configuration comprises three Linux controller nodes with 96 cores and 4.1TB of RAM, driving eight JBOD shelves for a raw capacity of 4.1PB. The difference between raw and useable capacity comes from compression and data deduplication. Such a configuration can attain 1.05 million IOPS during reads and 1.09 million during writes.
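Taken at face value, those figures imply a data-reduction ratio of roughly 2.4:1, which a quick calculation confirms:

```python
# Capacities quoted for the 42U rack
raw_pb = 4.1      # raw HDD capacity
usable_pb = 10.0  # effective capacity after compression and deduplication

ratio = usable_pb / raw_pb
print(f"Implied data-reduction ratio: {ratio:.2f}:1")  # ≈ 2.44:1
```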
Infinidat compares this configuration to the Bryce Canyon VII arrays used by Facebook. These are also 42U and built on triple Linux controllers for a total of 48 cores plus 768GB of RAM and driving six JBODs that contain 432 HDDs with total capacity of 1.7PB raw and useable. “Infinibox consumes 0.84W per TB useable while Bryce Canyon VII consumes 6W per TB,” said Yanai.
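Converting those per-TB figures to whole-rack power draw, using the useable capacities quoted above (10PB and 1.7PB, expressed in TB), shows the comparison Yanai is drawing:

```python
# Per-TB power figures quoted in the article, multiplied by each
# rack's useable capacity in TB
infinibox_watts = 0.84 * 10_000  # ≈ 8,400 W for 10PB useable
bryce_watts = 6.0 * 1_700        # ≈ 10,200 W for 1.7PB useable

print(infinibox_watts, bryce_watts)
```

In other words, the Infinibox rack draws somewhat less total power while offering almost six times the useable capacity.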
“In fact, our arrays are even less costly than object storage arrays. However, we understand that enterprises want to archive the contents of their production SAN and NAS storage to secondary object storage. We plan to enable our Infiniboxes to export to S3 within a few months,” added Cohen.
The S3 protocol is one of three key axes of technical development Infinidat is currently pursuing. The second is NVMe-over-fabrics connectivity between SAN and servers, although it is not yet known whether that will be implemented over RoCE or TCP. The third is the use of very high-capacity HDDs (20TB-plus) with advanced read/write methods, such as microwave-assisted magnetic recording (MAMR) and dual-head reads, which have been announced by Western Digital and Seagate.