HPC storage requirements: Massive IOPS and parallel file systems

We survey the key vendors in HPC storage, where massive IOPS, clustering, parallel file systems and custom silicon deliver storage for large-scale number-crunching operations.

Compared with mainstream enterprise applications, compute-intensive, high-performance computing (HPC) places very different demands on storage systems, so it’s not surprising a number of vendors have chosen to concentrate their efforts in this area.

In this article we will look at the main HPC storage products on the market, but first we will set out what we mean by HPC storage.

HPC applications tend to be used for computational analysis, data-intensive research, rich media, three-dimensional computer modelling, seismic processing, data mining and large-scale simulation. Driven by CPU-intensive processing, such applications handle large volumes of data over short periods of time while also in some cases permitting simultaneous access from multiple servers.

The need to process large volumes of data quickly has huge repercussions for HPC storage, given that storage I/O capabilities are typically much lower than those of processors. An HPC storage system needs large capacity accessible at high speed and to be highly expandable, while offering a single global namespace accessible to all users involved in the project.

These requirements have led some vendors to overlay HPC features onto existing scale-out NAS products, although most of the vendors we feature below have built products dedicated to HPC applications.

Consequently, there are a number of common features. These include:

  • The ability to scale out in a clustered architecture to cater for increased I/O and capacity requirements.
  • The use of object storage technology, which organises files as objects in a flat namespace rather than a traditional tree hierarchy. This boosts performance for very large collections of files because the storage system does not have to navigate a tree structure to locate a file but instead consults an index, often held entirely in memory.
  • Use of parallel file systems that stripe data across multiple nodes, in some cases accelerated by custom silicon. This allows faster access to large files than serving them from a single node, which can become a bottleneck.
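The striping idea behind the parallel file systems above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: it assumes a simple round-robin layout and a tiny stripe size for readability (real systems use units on the order of 128 KB), and the thread pool stands in for concurrent fetches from separate storage nodes.

```python
from concurrent.futures import ThreadPoolExecutor

STRIPE_SIZE = 4  # bytes per stripe unit; illustrative only (real systems use e.g. 128 KB)

def stripe(data: bytes, num_nodes: int) -> list[list[bytes]]:
    """Split data into stripe units and assign them round-robin to nodes."""
    nodes = [[] for _ in range(num_nodes)]
    for i in range(0, len(data), STRIPE_SIZE):
        nodes[(i // STRIPE_SIZE) % num_nodes].append(data[i:i + STRIPE_SIZE])
    return nodes

def parallel_read(nodes: list[list[bytes]]) -> bytes:
    """Fetch each node's stripe units concurrently, then interleave them back
    into the original byte order."""
    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        # The identity lambda stands in for a per-node network read.
        per_node = list(pool.map(lambda units: units, nodes))
    out, i = [], 0
    while any(i < len(units) for units in per_node):
        for units in per_node:
            if i < len(units):
                out.append(units[i])
        i += 1
    return b"".join(out)

data = b"ABCDEFGHIJKLMNOP"
assert parallel_read(stripe(data, 3)) == data
```

Because each node holds only every Nth stripe unit, reads and writes of a large file are spread across N nodes' disks and network links at once, which is where the aggregate throughput figures quoted below come from.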

Now let’s discuss the major HPC storage products on the market.

Hitachi Data Systems BlueArc Titan 3200

This storage server, which supports twice as many IOPS as its Model 3100 sibling, enables more than 60,000 user sessions and thousands of compute nodes to be served concurrently and is aimed at customers that require high data access rates. Scalable to eight nodes per cluster, BlueArc claims its parallel file system design offers 200,000 IOPS, throughput of up to 20 Gbps and a maximum capacity of 16 PB across its maximum of 256 parallel file systems. The Titan 3200 offers hardware redundancy and multiple RAID levels and is accessible using iSCSI, NFS, CIFS and Fibre Channel or 10 Gigabit Ethernet (GbE) with a single namespace across a cluster. It also includes backup, replication and data tiering features.

DataDirect Networks S2A9900

Using high-speed field programmable gate array (FPGA) silicon to accelerate its parallel file system, the top-of-the-range DDN S2A9900 generates parity data during write operations and verifies it on read operations in near real time, resulting in no write performance penalty, the vendor claims. The unit is the fastest and offers the largest capacity of the five core storage platforms DDN supplies. It uses RAID 6, is fully redundant, supports Fibre Channel and InfiniBand connections, and can host as many as 1,200 drives (SSD, SAS and/or SATA) at up to 6 gigabytes per second (GBps). Maximum capacity is 1.2 PB per rack. Other features include a version of MAID, which allows drives to spin down after a user-defined inactivity period to save energy costs but leaves the drives' electronics awake to boost access times and allow drive checking and healing activities.
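The MAID behaviour described above can be modelled as a simple idle timer per drive: platters spin down after a period of inactivity, while the drive electronics stay powered so health checks can continue. The class name, timeout value and health-check hook below are illustrative assumptions, not DDN's implementation.

```python
import time

SPIN_DOWN_AFTER = 30 * 60  # seconds of inactivity; user-defined in real MAID systems

class MaidDrive:
    """Toy model of a MAID drive: platters spin down when idle,
    but the electronics stay awake for checking and healing."""

    def __init__(self):
        self.spinning = True
        self.last_access = time.monotonic()

    def access(self):
        if not self.spinning:
            self.spinning = True  # a real drive pays a spin-up latency here
        self.last_access = time.monotonic()

    def tick(self, now=None):
        """Periodic housekeeping: spin down idle platters, then run checks."""
        now = time.monotonic() if now is None else now
        if self.spinning and now - self.last_access > SPIN_DOWN_AFTER:
            self.spinning = False  # platters stop, saving motor power
        self.run_health_check()    # electronics remain awake either way

    def run_health_check(self):
        pass  # placeholder for SMART polling / background scrubbing

drive = MaidDrive()
drive.tick(now=drive.last_access + SPIN_DOWN_AFTER + 1)
assert not drive.spinning  # spun down after the idle period
drive.access()
assert drive.spinning      # any I/O wakes the platters again
```

The trade-off is the one the vendor describes: power drops while a drive is idle, but the first access after spin-down is slower, so keeping the electronics awake for checks costs little and avoids waking platters unnecessarily.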

EMC Isilon 72000X

The 72000X has a maximum capacity of 10.4 PB in a single file system on a 144-node cluster, with a maximum throughput of 45 GBps in a 4U box. Other variants in Isilon's X series offer lower capacities and performance. With a self-healing drive capability, it supports access over NFS, CIFS, HTTP and FTP. Performance comes from striping files across multiple nodes and drives, with I/O operations carried out in parallel. Large 128 KB contiguous disk segments help to optimise file layout.

IBM Scale Out Network Attached Storage (SONAS)

IBM's SONAS is based on its General Parallel File System (GPFS), scales to 21 PB across 256 file system instances and consists of between two and 30 IBM System x3650 server nodes. The single global namespace is protected by RAID 6 and is accessible over NFS, CIFS, HTTP and FTP. The clustered GPFS is designed to provide concurrent file access, and IBM says the system's largest existing installations exceed 3,000 nodes. SONAS also offers automated policy-based file management to control backups and restores, snapshots, and remote replication. IBM also resells DDN's S2A9900 as the DCS9900.

NetApp E7900

Offering throughput of up to 6.4 GBps, the E7900 houses as many as 60 SATA drives in a 4U box. It can scale to 480 drives and 960 TB per cabinet, with a maximum capacity of 14 exabytes and up to 700,000 IOPS with solid-state drives installed. Software features include RAID levels 0, 1, 3, 5, 6 and 10; full redundancy; and up to 256 self-checking and healing file systems.

Panasas ActiveStor 12

Panasas claims to offer the fastest file system, with throughput of 150 GBps or 100,000 IOPS. The ActiveStor 12 is built from blades, with 11 blades per 4U chassis, for a maximum capacity per rack of 6 PB, and connects via 10 GbE or InfiniBand. Panasas says its object file system, PanFS, allocates RAID levels according to file characteristics and automatically migrates data to avoid performance hot spots. It supports NFS; CIFS; and DirectFlow, a proprietary parallel protocol for Linux clients that allows cluster nodes to access storage directly and in parallel.
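Allocating RAID levels per file, as PanFS is said to do, can be sketched as a policy function over file characteristics. The rule below (mirror small files cheaply, stripe large files with parity) and the size threshold are illustrative assumptions, not Panasas's actual policy.

```python
SMALL_FILE_LIMIT = 64 * 1024  # illustrative cutoff, not Panasas's real threshold

def pick_raid_level(file_size: int) -> str:
    """Choose a per-file redundancy scheme, in the spirit of PanFS:
    mirroring is cheap for tiny files, parity striping is efficient
    for large ones."""
    if file_size <= SMALL_FILE_LIMIT:
        return "RAID 1"  # mirror: two copies, no parity computation
    return "RAID 5"      # stripe with parity across many drives

assert pick_raid_level(4 * 1024) == "RAID 1"
assert pick_raid_level(1024 * 1024) == "RAID 5"
```

The appeal of a per-file policy is that small-file metadata workloads and large streaming files get different layouts from the same pool, rather than one RAID level compromising for both.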

SGI InfiniteStorage 16000

Designed for applications such as file serving, Web and media servers, data warehousing, CAD, and software repositories, the InfiniteStorage 16000 uses multithreaded parallel core silicon to deliver up to a claimed 1 million random burst IOPS and 12 GBps for both reads and writes. The modular architecture supports up to 1,200 drives, with each 4U module housing as many as 144 drives and full redundancy an option. The system can house as much as 2.37 PB in a 45U rack, with a maximum of 3.6 PB. The file system offers tiering and RAID levels 1, 5 and 6.
