
Data storage: The top five storage deployment pitfalls

Key pitfalls in data storage include silos, under-utilisation, neglected backup and disaster recovery, and overlooked skills and application requirements. We ask analysts how to avoid them

Enterprise storage deployments always involve risk and potential pitfalls.

Upgrading hardware, switching suppliers, or changing backup and recovery processes can expose production systems to disruption, failure and data loss.

But set against this is the need to invest in data storage.

Gartner, for example, estimates that unstructured data is growing at 40% year-on-year, while the demands of the internet of things (IoT), artificial intelligence (AI) and machine learning (ML) are forcing organisations to store more data and to think about how that data is stored.

Developments such as hyper-converged infrastructure, all-flash storage, including NVMe, and object-based storage are increasing the technology options on offer.

Deployed with care, they can increase performance and reduce costs. But CIOs and IT directors need to balance the potential for storage upgrades against the need to “keep the lights on”.

Computer Weekly asked industry analysts for their advice on how to avoid the most common pitfalls in deploying and operating data storage.

Storage pitfall 1: Storage silos

“IT specialists work in silos. They don’t look at it as providing services; they view it as providing infrastructure. But, you’re not providing infrastructure, you’re a service provider,” says Julia Palmer, a vice-president at Gartner Research.

IT departments, she warns, have their own key performance indicators (KPIs), “which are not the KPIs of the business or the customer”.

“The biggest pitfall is if you’re still buying storage on a per-project basis, creating silos and unwanted legacy,” says Bryan Betts, principal analyst at Freeform Dynamics.

“One of the most important things you can do is to start insulating your storage decisions from your application decisions. For example, by giving hyper-converged or software-defined storage a foothold, then adding more apps and capacity to it as the need and opportunity arises,” he says.

Gartner’s Palmer says storage teams need to come out of their silos and start thinking like a service provider, or in terms of the lines of the business. “If they don’t, don’t be surprised that the work is outsourced or goes to a public [cloud] provider,” she says.

Storage pitfall 2: Not addressing resource utilisation and storage costs

Organisations often use barely half the storage they own, and hence pay for more than they need.

“Although we are seeing broad adoption of virtualisation and various data optimisation tools, resource utilisation for on-premise resources has stayed largely in the 45% to 70% range,” says Natalya Yezhkova, research vice-president for infrastructure systems, platforms and technologies at analyst firm IDC.

“There are good and bad reasons for this, from provisioning for spikes in requirements – a good reason – to poor planning or system management, which is a bad thing. All the under-utilised resources still consume floor space and need to be maintained, powered and cooled,” says Yezhkova.

This adds to capital costs. “On-premise systems are expensive and are often upfront investments. Most customers prefer to buy systems rather than lease them from a vendor,” she says. That is a significant investment, and by using only half the storage, organisations potentially double the effective cost of every terabyte they actually use.
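
As a rough back-of-the-envelope illustration of that effect, the sketch below works out the effective cost of each terabyte actually in use at a given utilisation level. The capacity, price and utilisation figures are purely hypothetical, not taken from IDC or Gartner.

```python
# Rough illustration of how low utilisation inflates the effective cost of
# on-premise storage. All figures below are hypothetical.

purchased_tb = 500        # raw capacity bought up front
cost_per_tb = 200         # illustrative purchase price per TB (currency units)
utilisation = 0.50        # fraction of capacity actually holding data

capex = purchased_tb * cost_per_tb
used_tb = purchased_tb * utilisation
effective_cost_per_used_tb = capex / used_tb

print(f"Capex: {capex:,.0f}")
print(f"Capacity in use: {used_tb:.0f} TB")
print(f"Effective cost per TB in use: {effective_cost_per_used_tb:,.0f}")
# At 50% utilisation the effective cost per used TB is double the list price;
# at 70% utilisation it falls to roughly 1.4 times the list price.
```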

Then there are operational costs. “This is one of the major value proposition points for cloud-based resources, bringing savings on power and cooling, datacentre space, hardware maintenance, and more,” she says.

“Intangibly, the cloud provides peace of mind for IT staff, although they are still responsible for quality of service for users within their organisations,” says Yezhkova. And the need to upgrade hardware, and increasingly software, during a storage system’s lifecycle is another overhead that is easily overlooked.

Storage pitfall 3: Overlooking backup and disaster recovery

Backup and disaster recovery (DR) is moving back onto the boardroom agenda.

Recent incidents of business interruption due to IT issues have made the headlines, as has the threat posed by ransomware.


Meanwhile, disk-to-disk-to-cloud has emerged as a viable tiering scheme and so it’s a good time for IT departments to look at backup again.
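
As a minimal sketch of what a disk-to-disk-to-cloud scheme involves, the example below assigns backup copies to tiers by age. The tier names and age thresholds are illustrative assumptions, not taken from any particular backup product.

```python
# Minimal sketch of a disk-to-disk-to-cloud tiering policy: recent backups stay
# on fast local disk, older ones move to cheaper secondary disk, and the oldest
# go to cloud object storage. Tier names and thresholds are illustrative only.

from datetime import date, timedelta
from typing import Optional

# Tiers ordered from fastest restore to cheapest long-term retention.
TIERS = [
    ("primary-disk", timedelta(days=7)),     # fast local restores for the last week
    ("secondary-disk", timedelta(days=30)),  # cheaper on-site disk for the last month
    ("cloud-object-storage", None),          # off-site, long-term retention
]

def choose_tier(backup_date: date, today: Optional[date] = None) -> str:
    """Return the tier a backup taken on backup_date should live on."""
    age = (today or date.today()) - backup_date
    for tier, max_age in TIERS:
        if max_age is None or age <= max_age:
            return tier
    return TIERS[-1][0]

print(choose_tier(date.today() - timedelta(days=3)))   # primary-disk
print(choose_tier(date.today() - timedelta(days=90)))  # cloud-object-storage
```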

“The ability to replicate and make data redundant is an extra cost often overlooked when considering any form of storage,” says Roy Illsley, distinguished analyst in infrastructure solutions at Ovum.

“On-premise, the challenge becomes more acute because the processes and procedures, and probably DR technology such as the size of network connections, will need to be evaluated and changed.”

Analysts also point out that backup systems, and backup data, must be air-gapped from production systems and monitored if they are to resist a ransomware attack.
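
The sketch below illustrates the kind of automated policy check such monitoring might include, in the spirit of the widely used 3-2-1 rule. The copy records and field names are hypothetical, not drawn from a real backup catalogue.

```python
# Minimal sketch of a backup policy check: at least three copies, on at least
# two types of media, with at least one off-site and at least one air-gapped
# (offline) copy. The records below are illustrative only.

from dataclasses import dataclass

@dataclass
class BackupCopy:
    location: str   # e.g. "on-site" or "off-site"
    medium: str     # e.g. "disk", "tape", "cloud-object"
    offline: bool   # True if not reachable from the production network

def check_policy(copies: list[BackupCopy]) -> list[str]:
    """Return a list of policy warnings for a set of backup copies."""
    warnings = []
    if len(copies) < 3:
        warnings.append("fewer than three copies of the data")
    if len({c.medium for c in copies}) < 2:
        warnings.append("all copies are on the same type of media")
    if not any(c.location == "off-site" for c in copies):
        warnings.append("no off-site copy")
    if not any(c.offline for c in copies):
        warnings.append("no air-gapped copy - exposed to ransomware")
    return warnings

copies = [
    BackupCopy("on-site", "disk", offline=False),
    BackupCopy("off-site", "cloud-object", offline=False),
]
for warning in check_policy(copies):
    print("WARNING:", warning)
```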

Storage pitfall 4: Overlooking skills

“The challenge faced by any CIO is to find the correct people with the correct skills,” warns Ovum analyst Roy Illsley.

“Storage is seen as not a ‘cool’ technology. How many 15-year-olds say they want to be a storage admin? None that I know. This lack of skills and perception is not restricted to storage, but storage is at the bottom of the pack,” he says.

“The vendors have been trying to make skills and training less of an issue by using AI to automate many tasks. But organisations need to consider this skills aspect before deciding to run storage on-premise,” adds Illsley.

“It is an easy debate if you have an existing team, but it is less easy if you have let the team go and want to re-establish it and bring storage back in-house. If you get rid of IT specialists, you never know if or when you may need them, should the strategy change.”

Storage pitfall 5: Neglecting top-level architectural design and application requirements

Modern storage systems are robust and flexible, but they are also complex. Too often, they are designed around the available technology rather than the application and its data requirements.


This is especially the case for applications such as AI, where very low latency matters, or the internet of things, where data processing at the edge is vital to help the business take advantage of the additional information. Conventional, centralised storage does not work well at the edge. Instead, simpler systems with lower management overheads are needed.

“You should start with the applications and map them to the services and the storage you need to buy,” says Gartner analyst Julia Palmer.

“Tell me what service performance and SLAs you need and I can provide that to you. It needs to be based on the workload. We are moving from a storage problem to a data problem. Thinking that data ‘is not my problem’ is the biggest pitfall for IT departments,” she says.
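
A minimal sketch of that workload-first approach, assuming a simple in-house service catalogue: each workload states its latency requirement and is matched to the cheapest storage class that meets it. The class names, prices and performance figures are illustrative assumptions only.

```python
# Minimal sketch of mapping workloads to storage classes by their requirements
# rather than by whatever hardware happens to be available. Class names, prices
# and latency figures are illustrative only.

STORAGE_CLASSES = [
    # (name, delivered latency in ms, cost per TB per month)
    ("nvme-flash", 0.5, 90),
    ("sas-flash", 2.0, 45),
    ("nearline-disk", 10.0, 15),
    ("object-archive", 100.0, 4),
]

def pick_class(required_latency_ms: float) -> str:
    """Return the cheapest class that still meets the latency requirement."""
    candidates = [c for c in STORAGE_CLASSES if c[1] <= required_latency_ms]
    if not candidates:
        raise ValueError("no storage class meets this latency requirement")
    return min(candidates, key=lambda c: c[2])[0]

workloads = {
    "AI training scratch space": 0.5,   # needs very low latency
    "ERP database": 2.0,
    "IoT sensor archive": 100.0,
}
for name, latency in workloads.items():
    print(f"{name}: {pick_class(latency)}")
```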
