Not so much storage in IBM's UK storage roadmap

IBM storage UK chief strategy officer Mark Vargo discusses IBM's storage roadmap for the UK and how its goal is to reduce the amount of data that businesses store.

IBM's storage roadmap for the UK is focused on helping users bridge the gulf between two views of their data -- that of business units and of storage teams.

It is a natural stance for IBM, which, while relatively weak in home-grown storage products -- most of its storage portfolio being OEMed from the likes of NetApp, Cisco and LSI -- is a giant in the wider IT sector, with product and consultancy capabilities on a global scale. Playing to that combination of strengths and weaknesses seems a rational position to take, but IBM also looks to strengthen its hand through recent acquisitions such as data deduplication technology from Diligent Technologies and disk arrays from XIV Ltd., the latter tailored to the needs of businesses streaming very large files rather than handling many random I/O transactions.

IBM storage UK chief strategy officer Mark Vargo recently discussed IBM's philosophy for storage management, and specifically how companies can write and store less data in order to purchase less storage.

SearchStorageUK: What are the key planks of IBM's storage roadmap in the UK?

Mark Vargo: Our roadmap is aimed at attacking the amount of data businesses hold, asking whether they could manage it better and make more efficient use of their storage in a proper information infrastructure. What's needed is not just a set of products but an approach that solves the problems rather than the symptoms.

So, the emphasis will be on looking at data utilisation and the amount of duplicated data businesses hold. Many firms just don't know what their rates of utilisation are and how much data they have duplicated, while at the same time they are seeing storage growth of 60% to 70% per year. At present, people just look at how much space they have allocated and write to it. What we need to do is figure out how these businesses can buy less storage.

The world of storage is based on allocated resources. There are government agencies and banks running less than 20% utilisation with 80% of their hardware bought simply because their users said they needed it. We don't need more features and functions but to attack the problem of why we are writing and storing the data in the first place.

SearchStorageUK: Is the emphasis in the IBM storage roadmap on products or consultancy?

Vargo: Our roadmap is mostly aimed at emphasising the 'soft' side – helping people discover what they have, using tools and consultancy to look from the application viewpoint at existing storage estates.

If you look at how storage is allocated and the capacity planning method, it is mostly based on last year's storage requirements. Seen from the application viewpoint -- that of domains, departments and so on -- a department's view is based on what it needs to consume, while storage teams only care about what is available to consume.

Forensic examination is needed to determine, for example, why we need six identical copies of a PowerPoint file or 15 copies of a database. Storage people just don't see that view of things, and to storage vendors an inefficient customer is a good one -- but not in IBM's eyes.

We aim to carry out deep forensics on customers' data and the duplicated information within it so we can take utilisation from sub-20% to above 70%, and in doing so we may or may not use IBM products. Once you understand a customer's behaviour and how they acquire, use and retire storage, you can build a storage infrastructure in a more sane way.
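
As a rough illustration of the arithmetic behind that claim, the sketch below (a hypothetical example with made-up figures, not an IBM tool or IBM data) shows how much less raw capacity has to be purchased for the same amount of useful data when utilisation rises from 20% to 70%.

```python
# Hypothetical arithmetic only -- not an IBM tool or IBM figures.
# It shows why raising utilisation cuts the raw capacity a business must buy
# to hold the same amount of useful data.

def raw_capacity_needed(useful_tb: float, utilisation: float) -> float:
    """Raw capacity to purchase when only `utilisation` (0..1) of
    allocated space holds genuinely useful data."""
    return useful_tb / utilisation

useful_data_tb = 100.0  # assumed amount of data the business actually needs

at_20 = raw_capacity_needed(useful_data_tb, 0.20)  # 500 TB purchased
at_70 = raw_capacity_needed(useful_data_tb, 0.70)  # ~143 TB purchased

print(f"At 20% utilisation: {at_20:.0f} TB of raw capacity")
print(f"At 70% utilisation: {at_70:.0f} TB of raw capacity")
print(f"Capacity no longer needed: {1 - at_70 / at_20:.0%}")  # roughly 71%
```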

SearchStorageUK: Where do the recent acquisitions of Diligent and XIV fit into the roadmap?

Vargo: XIV provides an extremely scalable storage platform especially suited to what we're loosely calling the Web 2.0 world. Most of the big vendors -- EMC, HDS and even IBM -- have traditionally built boxes to take care of big transactional operations, data that is required temporarily and then called back. Now we're in a world where many businesses have big multimedia files, and big transactionally oriented boxes are not necessarily a good fit.

A traditional array is like a sink of water. You can never put water in from the taps and let it out via the plug hole as quickly as you can scoop out a pint of water with a cup, and for big streaming files that's what you need: the ability to move big pieces of data in and out very quickly. That's the speciality of XIV, although used in a grid configuration, scaled out, it can work well for lots of random I/Os too. We have a product on trial at a customer, and will ship in the UK in the second half of this year.

Diligent gives the ability to deduplicate across all disk and tape assets, both post-process and inline. It has been shipping for two years and the IBM flavour of it is already going out.
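
For readers unfamiliar with the mechanics, the sketch below shows the general principle behind deduplication -- split data into chunks, identify each chunk by a content hash, and store each unique chunk only once. It is an illustration of the concept with assumed chunk sizes and structures, not a description of how Diligent's technology is actually implemented.

```python
# A minimal sketch of the general idea behind deduplication: identical data
# is stored once and every further copy becomes a reference. The chunk size
# and data structures are assumptions for illustration, not Diligent's design.
import hashlib

CHUNK_SIZE = 4096  # assumed fixed chunk size

class DedupStore:
    def __init__(self):
        self.chunks = {}  # content hash -> chunk bytes, stored exactly once

    def write(self, data: bytes) -> list:
        """Store only unseen chunks; return the list of hashes ('recipe')
        needed to reassemble the data later."""
        recipe = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in self.chunks:   # genuinely new data
                self.chunks[digest] = chunk
            recipe.append(digest)           # duplicate data costs only a reference
        return recipe

    def read(self, recipe: list) -> bytes:
        return b"".join(self.chunks[h] for h in recipe)

store = DedupStore()
presentation = b"quarterly results slide " * 2000
copies = [store.write(presentation) for _ in range(6)]  # "six identical copies"
assert store.read(copies[0]) == presentation
print(f"Chunks referenced: {sum(len(c) for c in copies)}, "
      f"unique chunks actually stored: {len(store.chunks)}")
```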

SearchStorageUK: How do you see the future for solid state disk in storage?

Vargo: We think there's going to be huge interest and that back-end storage capacity will move to SSD over the next twenty years. We believe it will start to happen as the economics allow silicon-based storage to become viable, possibly around 2010 or 2011, especially if you consider that as we move towards 2 TB disk drives, rebuild times will become prohibitive.
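
To make the rebuild-time point concrete, here is a back-of-the-envelope calculation with assumed figures (the 50 MB/s sustained rebuild rate is an assumption for illustration, not a measured or IBM-supplied number):

```python
# Back-of-the-envelope arithmetic with assumed figures, not measured data.
# A simple sequential rebuild of a failed drive takes roughly
# capacity / rebuild-rate seconds, during which the array runs degraded
# and is exposed to a second failure.

def rebuild_hours(capacity_tb: float, rebuild_mb_per_s: float) -> float:
    capacity_mb = capacity_tb * 1_000_000   # decimal TB -> MB
    return capacity_mb / rebuild_mb_per_s / 3600

# A 2 TB drive at an assumed 50 MB/s of sustained rebuild throughput
# (production I/O normally slows rebuilds well below the raw drive speed):
print(f"{rebuild_hours(2, 50):.1f} hours")  # ~11.1 hours of degraded operation
```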

SSD will sit all over the I/O space, not necessarily at the end of a channel or HBA or Fibre Channel cable. It could be back at the servers or as a shared resource above and below the SAN. The greatest amount of storage capacity will continue to be in the SAN but there is a need for large amounts of RAM such as for database indexes at the server or for holding big GIFs for, say, Amazon or Google web servers.

SearchStorageUK: Will IBM embrace MAID (massive array of idle disks)?

Vargo: Well, we have a partnership with Copan and have spin-down in the DS8000, so it's something we think is a nice feature. But is it a feature or a product in the longer term? It has good uses in certain applications, but I think MAID as a specific product category will be overtaken by disk arrays that have that capability built into them. It will become a feature rather than a product, something that software can incorporate into the hardware.

Moving all data to a MAID environment can be disruptive, whereas adding a piece of software to the existing estate that can spin down components would be far less so. We will start to see that incorporated into arrays, VTLs and so on, and IBM is always looking to add features based on customer demand.
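
As a rough sketch of what "a feature rather than a product" could look like, the toy policy below spins down any drive that has been idle beyond a threshold. The timings, class and function names are hypothetical; real arrays expose this capability through their own firmware and management software.

```python
# A toy illustration of spin-down as a software feature applied to an
# existing estate, rather than a dedicated MAID product. All names and
# timings here are hypothetical.
import time

IDLE_SPIN_DOWN_SECONDS = 30 * 60  # assumed policy: spin down after 30 idle minutes

class Drive:
    def __init__(self, name: str):
        self.name = name
        self.spinning = True
        self.last_io = time.time()

    def io(self):
        if not self.spinning:
            self.spinning = True   # a real drive pays a spin-up latency penalty here
        self.last_io = time.time()

def apply_spin_down_policy(drives):
    """Spin down any drive that has been idle longer than the threshold."""
    now = time.time()
    for d in drives:
        if d.spinning and now - d.last_io > IDLE_SPIN_DOWN_SECONDS:
            d.spinning = False     # idle drive stops drawing power and producing heat
```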
