Enterprises constantly struggle to stay ahead of a growing need for data storage, especially of unstructured data. Scale-out NAS systems are an increasingly important tool in this battle.
By using a scale-out architecture, organisations can add capacity to their NAS systems without the bottlenecks or silos that arise from adding storage to existing arrays or standalone NAS systems.
In a scale-out system, each node comes with its own processing power and storage capacity, which can be expanded – in some cases independently of each other.
These nodes then form part of a larger cluster, with the addition of new nodes adding to the common pool of storage. So-called parallel file systems are the framework for this and usually scale to billions of files.
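The pooling behaviour described above can be illustrated with a toy sketch. All class and field names here are invented for illustration – they do not correspond to any vendor's actual software:

```python
# Toy model of scale-out NAS capacity pooling: each node brings its own
# storage and compute, and joining the cluster grows the shared pool.
# Names and figures are illustrative, not any vendor's API or specs.
from dataclasses import dataclass, field


@dataclass
class Node:
    name: str
    capacity_tb: float   # raw storage the node contributes
    cpu_cores: int       # compute scales alongside capacity


@dataclass
class Cluster:
    nodes: list = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        # A new node joins the common pool; no separate silo is created.
        self.nodes.append(node)

    @property
    def pooled_capacity_tb(self) -> float:
        return sum(n.capacity_tb for n in self.nodes)


cluster = Cluster()
for i in range(4):
    cluster.add_node(Node(f"node{i}", capacity_tb=120.0, cpu_cores=16))

print(cluster.pooled_capacity_tb)  # 480.0
```

The point of the sketch is simply that capacity and compute arrive together with each node, and the file system presents the total as one pool.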
Scale-out NAS systems are finding their place in an environment where local storage has found a new lease of life.
Although more organisations are making use of cloud storage, growth in the internet of things (IoT), artificial intelligence (AI) and machine learning have created renewed interest in local or edge computing and the storage to support it.
Scale-out NAS systems provide a robust combination of performance and capacity, with the ability to integrate all-flash for performance, and cloud-based infrastructure for lower-cost, long term storage and archiving.
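A tiering policy of the kind described above typically moves data between flash and cloud tiers based on access patterns. The sketch below assumes a simple age-based rule; the threshold and tier names are assumptions for illustration, not any vendor's defaults:

```python
# Illustrative flash-vs-cloud tiering decision: hot files stay on flash,
# cold files move to cheaper cloud/archive storage. The 90-day cut-off
# and tier names are assumed for this sketch only.
from datetime import datetime, timedelta

FLASH_TIER = "all-flash"
CLOUD_TIER = "cloud-archive"
COLD_AFTER = timedelta(days=90)  # assumed cut-off for "cold" data


def choose_tier(last_access: datetime, now: datetime) -> str:
    """Return the tier a file should live on, given its last access time."""
    return CLOUD_TIER if now - last_access > COLD_AFTER else FLASH_TIER


now = datetime(2019, 1, 1)
print(choose_tier(now - timedelta(days=5), now))    # all-flash
print(choose_tier(now - timedelta(days=200), now))  # cloud-archive
```

Real systems apply richer policies (size, file type, directory, quota pressure), but the principle is the same: performance-critical data sits on flash, archival data on lower-cost tiers.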
Among scale-out products from the big five storage vendors, hardware offerings are the cornerstone, with use of the cloud as a tier pretty much ubiquitous too.
Flash media is also now a commonplace option, in all-flash or hybrid-flash configurations, with NVMe offered in HPE's Qumulo partnership. Some vendors offer software-defined versions, while others – IBM, for example – offer both entry-level and large-scale scale-out product lines.
Dell EMC – Isilon
Dell EMC’s enterprise NAS offering centres around its Isilon range. The supplier offers all-flash, hybrid scale-out and archive versions of its products, with the archiving line split into the A200, for active, and the A2000, for deep archiving.
Dell EMC uses the Isilon OneFS operating system to create a scale-out storage system. OneFS supports files up to 4TB, and there is no official maximum file limit.
For hybrid scale-out, options range from the Isilon H400, with 3GBps throughput and storage from 120TB to 480TB per chassis, through the 5GBps H500, to the H600, which supports 12GBps per chassis and up to 144TB of storage, using SAS drives. Each cluster supports up to 36 units, giving a maximum capacity – based on the H400 or H500 – of 17.2PB.
The all-flash F800 range supports up to 924TB in one chassis, based on 60 SSDs, and up to dual 40Gbps Ethernet, with dual 10Gbps Ethernet on the hybrid H400/H500/H600 range.
Dell EMC’s scale-out NAS systems also support CloudPools. This is the supplier’s cloud tiering system, which supports private clouds on Isilon or Dell EMC ECS, as well as Dell EMC’s own Virtustream, AWS S3, Google Cloud Platform and Microsoft Azure.
NetApp – FAS and AFF
NetApp arrays offer block and file access, but it is NAS for which the company is best known, built around NFS file access and its Ontap operating system.
Ontap became Clustered Ontap in 2013, and can scale from one node to 12 HA pairs of nodes, so 24 in total. It offers an environment for scale-out NAS with flash and disk-based systems which extends to the cloud with NetApp Private Storage for Cloud and Cloud Volumes Ontap.
NetApp’s hardware offerings include its AFF flash and FAS hybrid arrays. The FAS2600 series supports up to 17PB and 1,728 drives, depending on the model, and scales up to 24 nodes.
The larger FAS8200 is a 3U unit with capacity for up to 5,760 drives and 57 petabytes, whilst the top of the range FAS9000 is optimised for SAN and NAS workloads. An 8U unit, it supports up to 17,280 hard drives and 172PB of data. It supports both Fibre Channel and up to 40Gbps Ethernet.
Ontap supports files of up to 16TB, although it recommends 500GB for performance-critical applications. The number of files is volume dependent, but tops out at 2 billion.
HPE – Apollo with Qumulo
HPE predecessor HP made a big move in the clustered NAS market when it bought Ibrix in 2009, but by the early-to-mid 2010s that product had faded from view.
File access is possible on its high-end 3PAR storage arrays, but HPE now appears to be majoring on scale-out NAS via its Apollo hardware, in partnership with Qumulo.
Qumulo’s QF2 is a parallel file system that scales to hundreds of nodes in the datacentre or as software nodes in the Amazon cloud, with storage tiering in both locations.
QF2 provides a hybrid cloud Posix/Windows-compatible file system. When new hardware nodes are added, the software extends the file system across them, in clusters of up to hundreds of nodes and billions of files.
The product family consists of the all-flash – and NVMe-compatible – P-series and the hybrid flash C-series with throughput up to 16GBps per four-node starter cluster and up to 1.6TBps in a configuration of 400 nodes.
IBM – Spectrum NAS
IBM’s Spectrum NAS is a software-defined storage architecture built on industry-standard x86 servers. The system supports NFS and SMB, and can scale out by adding nodes or by adding capacity to existing ones.
IBM also offers Spectrum Scale, which is an older – and larger – system based on the company’s general parallel file system (GPFS). This runs on x86, Power and z System (mainframe) hardware, and supports up to 16,000 nodes. Spectrum NAS is squarely x86 only.
Spectrum NAS can, however, be deployed on conventional servers, in the cloud or on virtual machines. There is no IBM-branded hardware appliance for the system, but it recommends 10Gbps Ethernet (25Gbps for all-flash systems).
IBM also specifies a minimum of four nodes per system, and says it scales to hundreds of nodes. There is no specified file size or file number limit (although it will be lower than Spectrum Scale, which IBM says runs to “billions”).
Spectrum NAS directly supports REST application programming interfaces (APIs). IBM also offers Spectrum Virtualize, which connects storage systems to IBM’s own brand of cloud storage.
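REST support means the system can be managed programmatically over HTTP. The sketch below builds such a request in Python; the host, path and payload are entirely hypothetical – consult the product's own API reference for real endpoints:

```python
# Hypothetical example of driving a NAS system's REST API. The base URL,
# endpoint path and JSON fields below are invented for illustration;
# they are not IBM's (or any vendor's) actual API.
import json
from urllib.request import Request

BASE = "https://nas.example.com/api/v1"  # assumed management endpoint


def make_create_share_request(name: str, quota_gb: int) -> Request:
    """Build (but do not send) a POST request to create a file share."""
    body = json.dumps({"name": name, "quota_gb": quota_gb}).encode()
    return Request(
        f"{BASE}/shares",
        data=body,
        method="POST",
        headers={"Content-Type": "application/json"},
    )


req = make_create_share_request("projects", 500)
print(req.full_url)      # https://nas.example.com/api/v1/shares
print(req.get_method())  # POST
```

In practice the request would be sent with `urllib.request.urlopen` (or a library such as `requests`) and authenticated per the vendor's documentation; the example stops short of that to stay self-contained.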
Hitachi Vantara – NAS Platform
Hitachi Vantara’s scale-out product is the Hitachi NAS Platform. It supports Hitachi’s hardware and the Storage Virtualization Operating System, the company’s software-defined storage product.
Hardware ranges from the NAS Platform 4060, with a maximum capacity of 8PB and throughput of 2GBps per two-node cluster, through the 16PB, 3GBps 4080, to the top-end 32PB, 4GBps unit. The maximum number of files per directory is 16 million, with no upper limit on total files, and a maximum file system size of 1PB.
The vendor’s VSP G200 and VSP G400 arrays are also supported by the NAS Platform. The G200 supports up to 264 drives and a maximum capacity of 2.5PB using 10TB LFF media; the G400 supports up to 480 drives and 4.7PB of storage.
Hitachi supports connections to Hitachi’s own cloud services, AWS S3, Azure and also the Hitachi Content Platform.
Read more about NAS storage
- Both NAS and object storage offer highly scalable file storage for large volumes of unstructured data, but which is right for your environment?
- We recap the key attributes of file and block storage access and the pros and cons of object storage, a method that offers key benefits but also drawbacks compared with SAN and NAS.