Forrester: Derive maximum value from storage

Forrester analysts Andrew Reichman and Vanessa Alvarez identify the top trends in the storage market

In 2011, storage represented an average of 15% of the total IT budget. Making the wrong bets on how you architect your storage environment could cause this number to grow more than your finance chief might like. We assess the changing storage landscape and identify how the various technologies can be implemented to maximum effect.

Storage will no longer be run as an island

The traditional model for an infrastructure and operations (I&O) organisation is to have a distinct server, storage and network team, with different budgets and priorities – and the result is often strained relationships and poor communication among the groups.

Because most firms don’t have effective chargeback, there is little visibility into the overall IT impact of any cross-group strategy moves. Add in the complexity of technical interactions between these silos, and you get a real mess.

Change in this approach has been sorely needed for years, and we are starting to see it happen. We expect 2012 to be a banner year for convergence across these silos, and cooperation will bring storage out of the vacuum. Because storage is so expensive, CIOs and CFOs are paying more attention to purchase decisions, and this trend is pushing those purchases towards greater consistency and fit with the wider IT strategy.

The consolidation of applications, the increased use of virtual server technology, and the emergence of application-specific appliances and bundles mean that it is more viable to buy consistent solutions for stacks such as Oracle databases and applications, VMware and Microsoft applications and virtual servers, among other workloads.

Forrester’s advice: Break down the organisational and budgetary walls that prevent I&O people from cooperating.

Consider aligning teams by the major workload stacks rather than technology components; you may see much better communication as a result. Make storage technology decisions in concert with server, network and application strategies, and you will likely start to optimise around the thing you care most about: supporting the business.

Storage to become more specialised for big firms

For years, many firms have simply chosen the “highest common denominator” as their single tier of storage – in other words, if some data needed top-tier block storage, then in many cases, this was the only flavour to be deployed.

As data volumes have grown over the years, the penalty for such a simple environment has grown too, because much of the data does not really need top-tier storage. Additionally, requirements vary significantly from one workload to the next, so the single flavour is often not well suited to large portions of the data being stored.

Major workload categories that demand optimisation include virtual servers, virtual desktops, Oracle databases, Microsoft applications, files, data warehouse/business intelligence, mainframe, archives, and back-ups. Each of these has unique performance and availability profiles, and each has major applications that need close integration to the storage they use.

Forrester’s advice: Identify which of these workloads are the major consumers of data in your storage environment and see whether an optimised architecture would make more sense than a generic solution.

Once you start measuring and strategising along those lines, develop a set of scenarios about what you could buy and how you could staff along workload-optimised lines, and a strategy will emerge from there.
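As a concrete starting point, the short sketch below shows one way that measurement pass might begin: totalling provisioned capacity by workload so the dominant consumers stand out. The inventory records and figures are hypothetical, purely for illustration.

```python
# A minimal sketch of the measurement step, assuming a hypothetical
# inventory export with one (volume, workload, capacity in TB) record
# per provisioned volume. All names and figures are illustrative.
from collections import defaultdict

inventory = [
    ("vol01", "virtual servers",  40.0),
    ("vol02", "Oracle databases", 25.0),
    ("vol03", "files",           120.0),
    ("vol04", "back-ups",        200.0),
    ("vol05", "virtual desktops", 15.0),
]

capacity_by_workload = defaultdict(float)
for _volume, workload, capacity_tb in inventory:
    capacity_by_workload[workload] += capacity_tb

total = sum(capacity_by_workload.values())
for workload, tb in sorted(capacity_by_workload.items(), key=lambda kv: -kv[1]):
    # Workloads holding a large share of capacity are where a
    # workload-optimised architecture is most likely to pay off.
    print(f"{workload:<18} {tb:7.1f} TB  ({tb / total:5.1%})")
```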

Cloud storage to become a viable enterprise option

In 2010 and 2011, I&O professionals saw a great deal of attention being paid to multiple forms of cloud, storage included, but still, few large enterprises had jumped on board.

With more enterprise-class cloud storage service provider options, better service level agreements (SLAs), the emergence of cloud storage gateways, and more understanding of the workloads that make sense, 2012 is likely to be a big year for enterprises moving data that matters into the public cloud. I&O professionals will have to assess what data they can move to the cloud on a workload-by-workload basis.

There will not be a dramatic “tear down this datacentre” moment any time soon, but I&O professionals will quietly shift individual datasets to the cloud where it makes sense, while other data remains in a more traditional setting. The appropriate place for your data will depend on its performance, security and geographic access requirements, as well as its integration with other applications in your environment.

Forrester’s advice: I&O teams should evaluate their workloads to see if they have some that might make sense to move now.

Develop a set of detailed requirements that would enable a move to the cloud, then evaluate service providers to determine what is feasible. Focus on files, archives, low-performance applications and back-up workloads as likely cloud storage candidates, and develop scenarios of how they could run in cloud models currently on the market.
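As a rough illustration of that triage, the sketch below applies simple candidacy rules across a hypothetical workload list. The attribute names and thresholds are assumptions for illustration, not Forrester-published criteria.

```python
# A minimal sketch of a rule-based first pass over cloud candidates,
# assuming hypothetical per-workload attributes gathered during the
# requirements exercise. Thresholds are illustrative assumptions.

def cloud_candidate(workload):
    """Flag workloads that fit the low-performance, low-sensitivity profile."""
    return (
        workload["max_latency_ms"] >= 50       # tolerant of WAN latency
        and not workload["regulated_data"]     # no data-residency blockers
        and workload["category"] in {"files", "archive", "back-up"}
    )

workloads = [
    {"name": "nightly back-ups", "category": "back-up",
     "max_latency_ms": 500, "regulated_data": False},
    {"name": "trading database", "category": "database",
     "max_latency_ms": 2,   "regulated_data": True},
    {"name": "file shares",      "category": "files",
     "max_latency_ms": 80,  "regulated_data": False},
]

for w in workloads:
    verdict = "cloud candidate" if cloud_candidate(w) else "keep in-house"
    print(f"{w['name']:<18} -> {verdict}")
```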

Make sure you think about fall-back strategies in case the results are poor, so that you are insulated should your provider change its offering or go out of business.

SSD to play a larger part in enterprise storage

While application performance demands continue to increase, spinning disk drives are not getting any faster; they have reached a plateau at 15,000rpm. To fill the gap, the industry has coalesced around solid state disk (SSD) based on flash memory – the same stuff that’s in your iPod (for the most part).

Flash memory is fast and retains data even when it loses power, and recent improvements in hardware and software have raised its reliability to the point of effectively meeting enterprise needs. However, SSD remains far more expensive than traditional spinning disk, so it is still challenging to figure out how and where to use it. In 2012, Forrester expects to see existing and promising new suppliers showcase more mature offerings in a variety of forms, including SSD tiers within disk arrays supported in some cases by automated tiering, SSD data caches, and SSD-only storage systems.

Because SSD is fast but relatively expensive, the long-term media mix is likely to include cheap, dense drives for the bulk of data that is not particularly performance sensitive, and a small amount of SSD targeted only at the data that truly needs it. I&O professionals also have the option of leveraging the performance power of SSD to enable better deduplication, which could bring storage costs down, but these options are still relatively new to the market.
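The economics of that mix are easy to sanity-check. The arithmetic sketch below works through a blended cost per gigabyte for a small SSD slice in front of dense disk; the prices are illustrative assumptions, not quoted market figures.

```python
# A minimal sketch of the blended-cost arithmetic; the per-gigabyte
# prices are illustrative assumptions, not quoted market figures.

ssd_cost_per_gb  = 10.00   # assumed enterprise SSD price
disk_cost_per_gb =  0.50   # assumed dense SATA/NL-SAS price
ssd_fraction     =  0.05   # only the hottest 5% of data lands on SSD

blended = (ssd_fraction * ssd_cost_per_gb
           + (1 - ssd_fraction) * disk_cost_per_gb)
print(f"Blended cost: ${blended:.2f}/GB")  # about $0.97/GB in this example

# Against a single top tier at an assumed $2.00/GB, the small SSD slice
# plus dense disk is roughly half the cost while serving hot data faster.
```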

If you currently use custom performance-enhancing configurations such as “short stroking”, then that data is likely to be a good candidate to get better results on SSD. If you have applications that are struggling to deliver the needed levels of performance, then SSD might be your best option to house their data.

Forrester’s advice: You need to understand the performance requirements and characteristics of your workloads to make effective use of SSD. Don’t overspend on SSD where traditional disk will do – carry out rigorous performance analysis to find out where the bottlenecks are, and pick the tools that will address the gaps you uncover.
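One lightweight way to run that analysis is to flag only the volumes whose measured latency is genuinely out of bounds, as in the sketch below. The volume names, figures and the 10ms threshold are illustrative assumptions, not benchmarks.

```python
# A minimal sketch of the triage step, assuming per-volume latency and
# IOPS figures already collected from host or array tools such as iostat.
# The figures and the 10ms threshold are illustrative assumptions.

measurements = {
    # volume: (average read latency in ms, read IOPS)
    "oracle_redo": (18.0, 9500),
    "vdi_boot":    (25.0, 12000),
    "file_share":  (4.0,  600),
    "archive":     (6.0,  50),
}

LATENCY_THRESHOLD_MS = 10.0

ssd_candidates = sorted(
    vol for vol, (latency_ms, _iops) in measurements.items()
    if latency_ms > LATENCY_THRESHOLD_MS
)

# Only the genuinely latency-bound volumes justify SSD spend; everything
# else stays on traditional disk.
print("SSD candidates:", ", ".join(ssd_candidates))
```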

Automated tiering will become widely adopted

I&O teams have dreamed of an easy way to put the right data on to the right tier of storage media, but a cost-effective, reliable way of doing so has remained elusive. Tiering, information lifecycle management (ILM), and hierarchical storage management (HSM) look promising, but few firms have managed to get them right and spend less money on storage as a result.

Compellent, now owned by Dell, was a pioneer in sub-volume automated tiering – a method that relieves the administrator of deciding what data should live where, and that has enough granularity to address the varied performance needs within volumes.
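In outline, the approach amounts to tracking I/O heat per extent and promoting only the hottest extents to a small SSD tier. The sketch below illustrates the idea; the extent granularity, counters and tier sizes are assumptions for illustration, not Compellent’s actual implementation.

```python
# A minimal sketch of sub-volume automated tiering, assuming fixed-size
# extents as the unit of placement. Extent names, I/O counters and tier
# sizes are illustrative assumptions, not any supplier's implementation.

SSD_EXTENT_SLOTS = 2   # capacity of the small, expensive SSD tier

# Hypothetical I/O counts per extent over the last sampling window.
heat = {"ext-a": 12000, "ext-b": 40, "ext-c": 9800, "ext-d": 15}

# Rank extents by heat: the hottest are promoted to SSD, the rest are
# demoted to (or left on) cheap, dense disk.
ranked = sorted(heat, key=heat.get, reverse=True)
placement = {ext: ("ssd" if rank < SSD_EXTENT_SLOTS else "disk")
             for rank, ext in enumerate(ranked)}

for ext in sorted(placement):
    print(f"{ext} -> {placement[ext]}")
```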

Almost every supplier in the space is eagerly working on a tool that can accomplish this goal, and we are likely to see results in 2012, leading to increased maturity and wider adoption. However, some application providers say block storage systems lack the context to predict the performance needs of data effectively, and that the applications, rather than the storage systems, should make these decisions. They also argue that the added central processing unit (CPU) burden outweighs the benefits, or that SSD combined with advanced deduplication will eventually become cheap enough for a tierless SSD architecture to replace tiering altogether. Suppliers such as NetApp also prefer a caching approach.

Forrester’s advice: There is some validity in some of these arguments, but there is little doubt that automated tiering will play a bigger role in enterprise storage, along with alternatives such as application-driven data management, advanced caching, and SSD-centric systems.

This is an extract from the report: “Top 10 Storage Predictions For I&O Professionals” (Feb, 2012) by Forrester analysts Andrew Reichman and Vanessa Alvarez, both of whom are speaking at Forrester’s upcoming Infrastructure & Operations EMEA Forum 2012 (http://www.forrester.com/ioemea2012) in Paris (19-20 June).

