Feature

Scale-out NAS product survey: The big six

Scale-out NAS storage has become mainstream, and for reasons that make good sense.

Traditional, or scale-up, NAS is limited in the number of nodes that can be linked together. That often results in islands of NAS storage between which there is no visibility or movement of data, each with a file system limited in the number of files it can handle.


By contrast, scale-out NAS allows nodes of storage to be added, scaling capacity and/or performance, and usually with a parallel file system that scales to billions of objects. In a world where unstructured data volumes are on a path of huge growth, these are valuable properties.

According to research firm ESG, scale-out NAS not only offers greater scalability and performance but also relieves IT staff of time-consuming tasks such as storage provisioning, reconfiguring LUNs and migrating data as new storage is added to the system. As a result, the firm predicts that by 2015 scale-out NAS will comprise 80% of the NAS market by revenue and 75% by capacity.

In terms of use cases, ESG sees scale-out NAS as suited to archiving and backup, general purpose file serving and high performance computing (HPC) file sharing. Industries that need scale-out NAS include media, financial services, life sciences, cloud computing, utilities, and web-based applications and services.

Further benefits include a reduction in IT management costs and datacentre space requirements, which in turn reduces power and cooling bills. Increased storage consolidation onto a shared resource also means utilisation rates increase, so users get more bang for their storage buck.

All the major storage suppliers offer scale-out NAS, the main distinctions between them being price points, target markets, capacity and performance.

Flash is increasingly offered as prices continue to fall, and features such as storage tiering, data deduplication, thin provisioning and compression are commonplace.

The most commonly used connectivity technology is 10GbE – none of the suppliers we surveyed provides 40GbE – although InfiniBand is used by many for inter-node connections.

Dell Compellent FS8600

Targeted primarily at medium-sized to large enterprises, the FS8600 uses Dell's FluidFS v3 file system, based on the scalable NAS technology Dell acquired with Exanet in 2010, to provide a single namespace across up to four storage units and a maximum capacity of 2PB, accessible via CIFS and NFS.

Ports for 10GbE and 8Gbps Fibre Channel are provided. Management is via Dell Enterprise Manager 2014, which provides a single GUI for Dell's NAS and SAN ranges.

The 2U system uses 24 7,200rpm SAS disks per unit by default, but an SSD option is available that uses six 400GB SLC drives and six 1.6TB MLC drives; the remaining 12 slots can be filled with MLC flash, SLC flash or spinning disk.

The system offers storage tiering, thin provisioning and automated, policy-based data deduplication and compression for aged data, and clients can authenticate using Active Directory, NIS or LDAP.

EMC Isilon S-Series and X-Series

With a maximum of 20PB per cluster, EMC aims its two scale-out NAS system series at large enterprises with large datasets, those running data analytics applications, and those needing high performance levels.

Managed by EMC Isilon OneFS, they allow access via traditional file protocols, such as NFS, CIFS and FTP, as well as Hadoop and REST APIs for access to object-based file systems. They also include VAAI and VASA storage APIs for improved integration with VMware environments.

The two-strong S-Series uses SSD technology to enhance performance for those running IOPS-intensive applications and provides capacities from 16.2TB to 4.15PB, with each node providing from 7.2TB to 28.8TB, depending on configuration. Use cases include digital media, design automation and life sciences.

The three-strong X-Series scales up to 20PB with 144 nodes, with capacities from 12TB to 144TB per node and a claimed throughput of up to 200GBps. All units offer SSDs for storage of metadata, data deduplication, encryption of data at rest using AES-256, and data retention for compliance purposes.

The series consists of the 2U X200 with a maximum of 36TB, and the 4U X400 and X410 nodes. Base connectivity is provided by four 1GbE ports with options to fit either four more 1GbE ports, or two 1GbE and two 10GbE ports. 

The X200 uses between six and 12 7,200rpm disks to provide up to 36TB of capacity, while the larger pair uses up to 36 drives for a maximum of 144TB. The X410 adds up to 256GB RAM – twice as much as the X400 – and provides two 1GbE and two 10GbE ports as standard.

Hitachi NAS Platform

Hitachi Data Systems (HDS) positions its NAS Platform as suitable for enterprise and remote datacentres and for medium-sized organisations across a wide range of industries. Its products can be used as file modules in Hitachi's Unified Storage Platform or as clustered nodes.

The system is expandable from two to eight nodes with a single namespace and up to 32PB of capacity. Features include replication, policy-based tiering, application-aware data protection, cross-platform indexing and in-place data deduplication, with the ability to manage up to 16 million objects per directory. All provide access control integration with LDAP and Active Directory.

The range consists of six products, from the 3080 – which originates from HDS's BlueArc acquisition and offers a claimed throughput of up to 700MBps and a maximum usable capacity of 4PB – to the 4100, with a claimed 2GBps throughput and 32PB of usable capacity.

The systems are addressable using CIFS, NFS and FTP, and all apart from the 4040 provide four 10GbE and four Fibre Channel ports; the 4040 offers two 10GbE, six 1GbE and five 10/100MbE ports. The 4060, 4080 and 4100 offer a further two 10GbE ports for intra-cluster communications and scale to two, four and eight nodes respectively.

HP StoreAll 8800 Storage

HP’s 2U object-based platform was launched in December 2012 and brings together its StoreAll 9320 and 9730 storage systems.

Each pair of nodes provides up to 560TB of capacity for a maximum system capacity of 16PB with billions of objects, scaled by adding nodes and controllers, in what HP describes as "a single hyperscale, economic, ultra-dense appliance".

The system is arranged in multiple pairs of nodes, each node a 2U enclosure accessible over a range of protocols including HTTP, WebDAV, a REST API, the OpenStack Object Storage API, NFS, CIFS and FTP. Each 2U enclosure holds 36 or 70 7,200rpm SAS disks of up to 4TB capacity each, depending on model. Network connectivity is over 10GbE, with 1GbE ports provided for management purposes.
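Those drive counts square with the 560TB-per-pair maximum quoted above; a quick arithmetic sketch, using the figures for the largest configuration in this section:

```python
# Raw capacity per HP StoreAll 8800 enclosure and node pair,
# using the largest configuration quoted above.
drives_per_enclosure = 70   # the smaller configuration holds 36
drive_size_tb = 4           # maximum drive size, in TB

enclosure_raw_tb = drives_per_enclosure * drive_size_tb  # 280TB per 2U node
pair_raw_tb = 2 * enclosure_raw_tb                       # 560TB per node pair

print(enclosure_raw_tb, pair_raw_tb)  # 280 560
```

Dividing the 16PB system maximum by 560TB per pair suggests a system tops out at roughly 29 node pairs, though HP does not quote a pair count.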

Features include snapshotting, replication, data deduplication and policy-based data tiering and retention, plus continuous data integrity checking. Using technology from the firm's acquisition of Autonomy, the system also includes automatic indexing and fast retrieval of data for analytic purposes, and aims to provide real-time access to and querying of data.

IBM Scale Out Network Attached Storage

Offering up to 15PB of storage, IBM's clustered SONAS system runs IBM's General Parallel File System (GPFS) and is designed to support scale-out performance for random-access and streaming file workloads. Cloud storage access is provided via integration with IBM's SmartCloud Storage Access, a self-service portal for end users that enables storage provisioning, monitoring and reporting.

SONAS supports up to 256 global file systems, offers tiering and load balancing, supports CIFS, NFS, FTP, and HTTPS, and includes four 10GbE ports per node. It also offers tiered access to nearline SAS and external storage pools, such as tape. The system can also optionally act as a gateway to IBM's XIV, System Storage DCS3700 and Storwize V7000 disk systems.

SONAS allows a maximum of 30 2U interface nodes – which provide connectivity to the network – and 30 pairs of 2U storage nodes in pods that fit into 15 storage expansion racks. Storage nodes are connected to the interface nodes via an internal InfiniBand network.

Each node can house SSD, SAS or SATA disks, for a cross-system maximum of 7,200 disks when using 96-port InfiniBand switches in the base rack. Each SONAS system also needs a management node for configuration purposes.

Other features include snapshotting and file-level cloning and, according to IBM, SONAS has "virtually no scalability limits".

NetApp FAS8000 Enterprise Storage

NetApp's scale-out operating system, Clustered Data ONTAP, enables a single 20PB volume across 10 nodes, which in turn can form part of a cluster as large as 69PB. It runs on NetApp's FAS8000 series, which was launched in February 2014.

The hardware nodes offer a raw capacity from 5.76PB to 9.6PB, and flash capacities range from 3TB to 18TB. Nodes scale into clusters, which can consist of up to 24 nodes – 12 pairs for high availability – interconnected by 10GbE links, to provide a maximum raw flash capacity of 384TB. NetApp markets two RAID 6 disk configurations: one maximises capacity and consists of 20 disks, while the other aims for performance and consists of 28 spindles. The company also offers equivalent RAID 4 configurations.
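The trade-off between those RAID layouts comes down to parity overhead: RAID 6 sets aside two disks per group for parity, RAID 4 one. A minimal sketch of the usable fraction for the group sizes quoted above, ignoring spares and the right-sizing a real ONTAP aggregate applies:

```python
def usable_fraction(group_size, parity_disks):
    """Fraction of a RAID group's raw capacity left after parity disks."""
    return (group_size - parity_disks) / group_size

# RAID 6 uses double parity per group; RAID 4 single parity.
for group in (20, 28):
    print(f"{group} disks: RAID 6 {usable_fraction(group, 2):.0%} usable, "
          f"RAID 4 {usable_fraction(group, 1):.0%} usable")
```

The larger 28-spindle group loses proportionally less to parity, which is the usual reason RAID 4 is offered where single-disk protection is acceptable.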

Connectivity is achieved via Fibre Channel, FCoE, iSCSI, NFS, CIFS, HTTP or FTP, with all products providing eight ports each of 16Gbps Fibre Channel, 1GbE, 10GbE and 6Gbps SAS, apart from the smallest, the FAS8020, which provides four ports of each connection technology.



This was first published in August 2014