In a user seminar at the two-day event, Forrester senior analyst Andrew Reichman said, "The last two years have seen real urgency in terms of maximising the cost efficiency of storage."
According to Reichman, users deal with the need to get more from less by using two main approaches: "Buy less and work leaner, and use cheaper hardware."
Lloyds Banking Group classifies data for tiered storage
Speaking at Storage Expo, Jonathan Harris, senior storage engineer at Lloyds Banking Group, described how the firm had gone from a data volume of 4 TB to 300 TB in four years and some of the measures it had taken to optimise storage.
"We have a big budget, but we're looking for efficiencies. It's difficult not to be kneejerk. Everything was on Fibre Channel and we've been looking for ways to implement tiering to rectify that. We've introduced SATA drives for unstructured data and we're using Acopia ARX switches to move, track and archive data to the different tiers [of HDS and NetApp storage subsystems]."
Harris said a key task was to work with the business to determine the level of storage performance required, and that making users aware of the cost of storage helped his side of the discussion.
"Business owners all want the best level of performance until we tell them the cost," he said. "It's not chargeback, but we do charge for pieces of kit and that cost awareness is helpful, as most don't have an idea of what storage costs for a specific level of performance."
Another way Harris' team had helped cut costs was to minimise spend at the firm's disaster recovery site with "less performant arrays, less cache, more SATA, more virtual servers, and all put into a much smaller physical environment."
Houses of Parliament spends on the LAN, saves with iSCSI
Fellow panelist Dan Watson, data storage specialist at the Houses of Parliament, described how he had cut costs by opting for NetApp iSCSI over Fibre Channel during a recent move toward shared storage in preparation for server virtualisation. The key driver for Watson was that he didn't want to spend time and money on Fibre Channel training when an upfront investment in a well-performing local-area network (LAN) could lay the groundwork for iSCSI as a storage interconnect.
"We realised there may be some compromise on performance, but we were putting in a 10 Gig network so we were able to get the performance we wanted," Watson said. "We've saved costs in that we've had no need for extra staff or skills training compared to going with Fibre Channel."
Panelist Martin Murphy, chief technical officer at University Hospital Galway, said his main strategy for cutting unnecessary spend was tiered storage. The hospital group recently implemented four Compellent storage-area networks (SANs) at two sites to replace ageing IBM subsystems.
The SANs are split 20% Fibre Channel and 80% SATA, with data moved from the former to the latter after 30 days using Compellent's dynamic tiering feature.
"We wanted the biggest bang for the least buck, and the least admin overhead," Murphy said.
Data deduplication saves disk space
The fourth member of the user panel, Mark Connolly, senior global infrastructure analyst at marketing company IMG, had also recently moved to Compellent equipment. He was cutting costs with CommVault backup, which deduplicated data by between 60% and 80% and wrote straight to disk on a repurposed Nexsan SATABeast subsystem.
The Houses of Parliament's Watson also reported good results using data deduplication on his NetApp hardware. "We've had an 80% to 85% space reduction on structured data and 30% to 35% on unstructured," he said.