The British Antarctic Survey (BAS) has deployed nearly 1PB of DataDirect Networks (DDN) storage to support data collection and analysis, high-performance computing (HPC) and general datacentre use, including server virtualisation.
BAS collects nearly 1TB of data a month from Antarctic research stations, ships and aircraft about climate, geography and topography, the sea and sea floor, ice flow and thickness, and fauna and flora, as well as from space research that makes use of the clear atmospheric conditions.
Added to this are the outputs of HPC and scientific modelling carried out using a Lustre parallel file system on the DDN arrays at the BAS headquarters in Cambridge.
BAS already had several storage area network (SAN) systems acquired over the years from suppliers including IBM, as well as some existing DDN storage.
The key reason for the DDN deployment – effectively a large-scale hybrid flash file access implementation – was data growth of 75% in volume year-on-year, said BAS IT support engineer and head of Unix systems Jeremy Robst.
“We needed an awful lot more storage and a variety of drive types, including flash and nearline-SAS and all in a relatively small space as we are physically limited at our premises,” said Robst.
After evaluating several suppliers’ products, including those from Oracle and Dell, BAS opted to deploy two DDN SFA7700X storage systems with around 460TB of capacity each, both at its Cambridge premises. The arrays comprise 4.5TB of flash, 36TB of 2TB 10,000rpm SAS plus 420TB of 6TB 7,200rpm nearline-SAS. Connectivity is via Fibre Channel or InfiniBand.
The flash drives are used for metadata for the Lustre file system, with the SAS drives used for production data such as VMware workloads, and the slower-access 6TB nearline drives used for outputs from scientific modelling, for example.
Currently, however, data cannot be tiered automatically between the different drive types, so data movement is done manually.
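In Lustre, this kind of manual placement is typically handled with OST pools, which group object storage targets by drive class so files can be directed to, or migrated between, tiers. The sketch below is a hypothetical illustration of the technique, not BAS’s actual configuration; the file system name, pool names and OST indices are all assumptions:

```shell
# Hypothetical Lustre OST pool setup for manual tiering between
# drive classes (file system name, pool names and OST ranges assumed).

# Group the fast SAS OSTs and the nearline-SAS OSTs into named pools:
lctl pool_new lustrefs.sas
lctl pool_add lustrefs.sas lustrefs-OST[0-3]
lctl pool_new lustrefs.nearline
lctl pool_add lustrefs.nearline lustrefs-OST[4-11]

# Have new scientific-model outputs land on the nearline pool by default:
lfs setstripe -p nearline /lustrefs/model_outputs

# Manually move a hot dataset onto the faster SAS pool when needed:
lfs migrate -p sas /lustrefs/model_outputs/active_run
```

Because the pool assignment only affects where file objects are striped, moving existing data between tiers requires an explicit `lfs migrate` per file or directory, which matches the manual data movement Robst describes.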
Why did Robst’s team go with DDN? He said: “It was down to a combination of factors, but a big one was to do with space. We’ve got four units from DDN and that’s 16U in total and 200kg. For the same amount of capacity from other suppliers, we were looking at 48U and 600kg.”
“What we also needed was a variety of drive types and we got that, plus low power usage, relatively speaking,” he added.
What could DDN improve in future versions? Robst said: “It’s good that many of the upgrades can be carried out without taking the storage offline, but for some of them you need to. It would be nice to have that in future versions.”
Read more about HPC storage
- Oxford University gets Panasas ActivStor 14 clustered NAS as HPC storage for high-performance processing.
- 100,000 Genomes Project rejects building open-source parallel file system on x86 servers and opts for EMC Isilon clustered NAS deployed by Capita S3.