
Transport researchers get 2PB of Scality object storage

Transport Systems Catapult retires ageing NetApp filers with 2PB of Scality object storage in a £1.5m project that future-proofs the organisation with built-in DR

Not-for-profit UK innovation centre the Transport Systems Catapult (TSC) has replaced ageing NetApp NAS storage with 2PB of object storage from Scality.

The project – which cost £1.5m overall – has massively reduced the management overhead incurred by supporting the NAS infrastructure and provided scalable storage for future growth, with built-in disaster recovery (DR) provision.

The Milton Keynes-based TSC holds numerous transport information datasets and also allows clients to use its processing resources to help develop innovative transport systems. Data is often unstructured and the volume of data it stores can increase and decrease according to the projects being run at any given time.

TSC had NetApp NAS in place, but it was nearing end of life and presented challenges, said Alex Farr, IT director with TSC.

“The challenge was that the storage was beyond support from NetApp and was becoming expensive to add shelves,” he said. “It wasn’t scalable and although in theory it was scale-out, it didn’t lend itself well to this.”

TSC spoke to HPE via a recommendation and eventually deployed 10 HPE Apollo 4200 Gen 9 server nodes with three Scality object storage nodes on HPE hardware across two sites at a cost of about £800,000.

It uses these to run VMware virtual machines – about 46 at the time of writing – and a Hadoop analytics cluster. It is in the process of building an SQL database farm.


Key benefits are that TSC now has an infrastructure that can easily support its constantly changing datasets and can scale to multiple petabytes if needed. It has also cut the time spent administering storage and simplified backup and disaster recovery.

The latter is down to Scality’s use of erasure coding and replication, which maintain constant data protection between nodes at different sites.

Farr said: “We can, in theory, lose the head office and part of our disaster recovery site and the service can still resume. The way the data moves around the boxes, we can lose 75% of it and keep going. It’s DR by design.”
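The resilience Farr describes follows from how erasure coding splits each object into fragments, any subset of which can rebuild the original. A minimal sketch of the arithmetic, using hypothetical fragment counts (the article does not state Scality's actual parameters):

```python
def max_loss_fraction(n_total: int, k_needed: int) -> float:
    """With k-of-n erasure coding, any k_needed of the n_total fragments
    can rebuild an object, so up to n_total - k_needed may be lost."""
    return (n_total - k_needed) / n_total

# Illustrative only: a 4-of-16 scheme tolerates losing 12 of 16
# fragments, i.e. 75% of the stored data, matching the kind of
# survivability Farr describes.
print(max_loss_fraction(16, 4))  # 0.75
```

Spreading those fragments across nodes at both sites is what makes this "DR by design": no separate backup copy is needed for a site loss.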

Object storage does not provide the highest performance with regard to access times compared with high-end block and file storage methods, but is well suited to large volumes of unstructured data.

In place of the traditional tree-like file system of NAS, object storage uses a flat structure, with files given unique identifiers.
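The flat-namespace idea can be sketched in a few lines: objects live in one key space, addressed by system-assigned identifiers rather than paths. The class and method names below are illustrative, not Scality's API:

```python
import uuid

class FlatObjectStore:
    """Toy model of a flat object namespace: no directory tree,
    just unique identifiers mapped to object data."""

    def __init__(self):
        self._objects = {}  # identifier -> bytes, one flat key space

    def put(self, data: bytes) -> str:
        key = str(uuid.uuid4())  # system-assigned unique identifier
        self._objects[key] = data
        return key

    def get(self, key: str) -> bytes:
        return self._objects[key]

store = FlatObjectStore()
key = store.put(b"transport dataset")
print(store.get(key))  # b'transport dataset'
```

Because there is no hierarchy to traverse or rebalance, such a namespace scales out by simply distributing keys across nodes, which is what suits object storage to large volumes of unstructured data.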

So why did TSC go for object storage? “We are very much in the dark about what types and formats of data we will have to deal with, but it is often unstructured and we knew we needed to pump it around our system and that it needed to be secure in disaster recovery,” said Farr.

