Video post-production house The Mill has upgraded its Hitachi Data Systems (HDS) clustered network-attached storage (NAS) infrastructure and added a secondary disaster recovery site, cutting its backup window from a risky several days to just hours each night.
The Mill works on adverts for companies such as Levi’s, Coca-Cola and Adidas, and has done special effects for TV programmes including Doctor Who. At any time it has up to 300 employees creating special effects and building them into video footage.
These workers, using a render farm of up to 400 processors, are served files from an HDS/BlueArc clustered NAS system at The Mill’s central London offices.
Increasing volumes of data, pushed by use of high-definition (HD) video formats, meant that The Mill needed to boost capacity at its primary site and address backup windows that had become unwieldy and risky to the organisation.
Storage capacity boosted
The company had long been a BlueArc customer and has two clustered NAS heads delivering network file system (NFS) shares across the LAN to its render farm. These BlueArc heads sat in front of 100TB of Fibre Channel-connected LSI block storage with serial-attached SCSI (SAS), nearline-SAS and serial ATA (SATA) drives.
Clustered NAS is ideal for storing large amounts of file-based data. It is built around a parallel file system spread across many nodes that scale to billions of files and petabytes of capacity, with performance and disk space boosted as more clustered NAS heads or storage shelves are added.
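The scaling idea can be illustrated with a minimal sketch: files are deterministically mapped to nodes by hashing their paths, so adding a node adds both capacity and serving throughput. This is a hypothetical illustration of the general principle, not BlueArc's actual placement scheme, which uses far more sophisticated striping and metadata handling.

```python
import hashlib

def node_for_path(path: str, nodes: list[str]) -> str:
    """Pick the node responsible for a file by hashing its path.

    Illustrative only: real clustered NAS products stripe files
    across nodes and rebalance on expansion.
    """
    digest = hashlib.sha256(path.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

# Two NAS heads share the file population between them
nodes = ["head-1", "head-2"]
files = ["shot_001.exr", "shot_002.exr", "render.log"]
placement = {f: node_for_path(f, nodes) for f in files}

# Adding a third head spreads the same namespace over more hardware
nodes.append("head-3")
```

The mapping is deterministic, so any client can locate a file without a central lookup; the trade-off is that growing the node list remaps some files, which production systems mitigate with consistent hashing or directory-based placement.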
The Mill’s use of the relatively low-performing nearline-SAS and SATA drives is typical of a user that requires sequential file access rather than the many random reads and writes found in, for example, database workloads.
The Mill has added 40TB of nearline-SAS in an HDS Hitachi Unified Storage (HUS) 130 array to the existing BlueArc/LSI capacity.
BlueArc, which was bought by HDS in 2011, was a pioneer in clustered NAS, with systems built around field programmable gate array (FPGA) processors customised for fast file serving.
This element of the project was simply to boost capacity at The Mill’s primary site, said infrastructure manager Stephen Smallwood.
“There’s no massive performance benefit. We just wanted more of the same – and the biggest benefit is that it continued to work,” he said.
Faster backup the biggest change
The big changes came with the addition of replication via a 10Gbps link between the primary central London site and a disaster recovery site at London’s Docklands.
The move has transformed backup from a tape-based regime, with a window that started Friday evening and sometimes stretched to Tuesday, to one based on nightly replication.
The former scenario placed an unnecessary amount of risk on The Mill in terms of data protection. If something had happened to the tapes or the building at the end of the backup window, several days’ worth of work could have been lost.
Smallwood’s team moved a BlueArc head unit to the East India Dock site, populating it with 210TB of nearline-SAS capacity in an HDS AMS 2500 array.
BlueArc’s JetMirror replication moves only the changed blocks of production data between the two sites every night.
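The general technique behind changed-block replication can be sketched as follows: divide the data into fixed-size blocks, hash each block, and transfer only blocks whose hashes differ from the previous snapshot. This is a generic illustration of block-level differencing, not JetMirror's proprietary implementation; the block size and hash choice are assumptions.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed block size for illustration

def block_hashes(data: bytes) -> list[str]:
    """Hash each fixed-size block so changed blocks can be detected."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(old: bytes, new: bytes) -> list[int]:
    """Return indices of blocks that differ between two snapshots.

    Blocks beyond the end of the old snapshot count as changed,
    so file growth is replicated too.
    """
    old_h, new_h = block_hashes(old), block_hashes(new)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or h != old_h[i]]

# A one-byte edit in an 8KB file dirties only the second block,
# so only 4KB crosses the wire instead of the whole file
old = b"a" * 8192
new = b"a" * 4096 + b"b" * 4096
```

Sending only dirty blocks is what makes a nightly window over a 10Gbps link feasible even as total capacity grows: the transfer cost tracks the day's changes, not the size of the dataset.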
“It has improved backups, and restores now take 10 or 15 minutes instead of hours,” said Smallwood.