Oxford University’s Advanced Research Computing (ARC) centre has deployed 330TB of Panasas ActiveStor 14 scale-out NAS storage in a £200,000 project. The move from a network file system (NFS) to a parallel file system has delivered a huge boost in reliability.
The Panasas hardware replaced existing SGI Infinite Storage network attached storage (NAS) hardware that had reached end of life. Panasas scale-out NAS was chosen over competition from EMC, IBM and DataDirect Networks (DDN).
The ARC facility is a central resource available to university researchers who need access to high-performance computing (HPC) resources. ARC operates a range of HPC processing clusters – mainly based around Dell and SGI hardware – that are primarily used for scientific and statistical modelling.
The key driver for replacing the existing SGI storage was that it had reached end of life and was experiencing a number of issues, especially with its NFS presentation of files to users, said Andrew Richards, head of advanced research computing at the university.
“The SGI solution was fronted by two NAS heads and was all NFS, which is not a parallel file system. This was a weakness which manifested in issues with scalability, problems with memory consumption and locking up when under load from many users,” he said.
Scale-out, or clustered, NAS is file access storage that can be scaled to very large numbers of devices with a common file system across the hardware instances. Parallel file systems are an essential component of scale-out NAS and allow for massively scalable file systems across multiple linked storage nodes without performance degradation.
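The striping idea behind a parallel file system can be illustrated with a short sketch. This is not Panasas code – it is a simplified, hypothetical model showing why spreading a file’s blocks round-robin across several storage nodes lets clients read and write in parallel, rather than funnelling all I/O through a single NFS head.

```python
# Illustrative sketch only: round-robin striping of a file across
# storage nodes, the basic layout idea behind parallel file systems.
# STRIPE_SIZE and NODES are arbitrary values chosen for demonstration.

STRIPE_SIZE = 4   # bytes per stripe unit (tiny, for illustration)
NODES = 3         # number of storage nodes in this toy cluster

def stripe(data: bytes, stripe_size: int = STRIPE_SIZE, nodes: int = NODES):
    """Split a file into stripe units and place them round-robin on nodes."""
    placement = [[] for _ in range(nodes)]
    for i in range(0, len(data), stripe_size):
        unit_index = i // stripe_size
        placement[unit_index % nodes].append(data[i:i + stripe_size])
    return placement

def reassemble(placement):
    """A client gathers stripe units from all nodes to rebuild the file.
    In a real parallel file system these per-node reads happen concurrently."""
    next_unit = [0] * len(placement)
    total_units = sum(len(node_units) for node_units in placement)
    units = []
    for n in range(total_units):
        node = n % len(placement)
        units.append(placement[node][next_unit[node]])
        next_unit[node] += 1
    return b"".join(units)

data = b"parallel file systems stripe data across nodes"
layout = stripe(data)
assert reassemble(layout) == data  # the file survives the round trip
```

Because each node holds only every Nth stripe unit, the aggregate bandwidth of a large sequential read scales with the number of nodes – the property a single NFS server cannot offer.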
Key benefits, said Richards, include consistent performance and no downtime during deployment: “It’s a parallel file system, with consistent performance across all the compute clusters. We also had no downtime as we moved users to the new storage. We installed the Panasas client on the compute nodes and allowed users to make the changes when it was convenient for them. Other systems would have involved a lot of work and downtime to do this.
“It’s also very easy to manage and has been an out-of-the-box solution for us. We have a small HPC team and I’d prefer them to be supporting users rather than dealing with underlying hardware problems. Now we have a happy user community and no issues that can be traced back to storage.”
During the tender process the ARC team also evaluated HPC storage products from EMC, IBM and DDN.
EMC’s Isilon was rejected, said Richards, because it was not a true parallel file system. “It’s a parallel file system in its back end,” he said, “but not as it is presented out to the compute clusters. It is an NFS-based solution.”
The IBM and DDN products were rejected because they use IBM’s General Parallel File System (GPFS), which, said Richards, would have required days of disruption to configure client systems.
Could Panasas improve on the ActiveStor product in a future version? Richards said: “It would be very attractive if you could scale Panasas in terms of capacity without having to add processing power at the same time. There comes a point where you just don’t need more performance but you do need to add disk.”