Storage is often regarded as somewhere to keep data in between processing it. In the days when your choice was IBM or a plug-compatible manufacturer, the storage contract was a consolation prize to the company that lost out on the processor bid. Those suppliers that sold both focused on their processors and treated their storage products as the Cinderella of their range. But just as Cinderella had the last laugh, the significance of storage solutions is changing.
System heterogeneity, despite what the pundits say, is not a virtue - a large heterogeneous network is actually a sign of management failure. Some organisations go out of their way to install as many diverse platforms as possible, erecting artificial walls between their applications so that they can occasionally crow about leaping over one or two of them and achieving some fleeting synergy.
When mainframes were kept in glass rooms, application synergy was assumed, since a common character set, file format, programming language and environment for all applications rendered all data available for multiple use.
That has largely been lost - not for any good technical reason, but because of supplier politics and short-term strategies. Although today's mainframes fit in shoeboxes and their operating system, typically MVS, will run on a sub-1kg laptop, artificial barriers - such as software pricing - remain.
But the data on all these disparate systems is valuable. It is said that a company consists of its employees, but it really consists of their knowledge. Recruitment adverts still stress experience over skill and are veiled attempts to buy more knowledge. If what a company's employees know is valuable, how much more important is what the firm knows? As long, that is, as this knowledge is available. For data to be available to many platforms, it must be in a form they can all understand - which may be a form native to none of them: in a word, virtualised.
The good thing about standards is that there are so many of them to choose from. The same is true of virtualisation concepts, and they are just as diverse. What the term means to any given supplier depends largely on that supplier's starting point in the market, but to the user the differences can be crucial. Pooling available space and controlling its assignment is sometimes called virtualisation, but it barely qualifies even as management - administration is a better term. Similarly, solutions tied to specific subsystems or processors have very limited futures. A network-based approach is the only one offering open-ended functionality.
For the ideal storage solution, the primary requirement will always be reliability in all its facets: availability, security, disaster recovery, and so on. Efficient resource management becomes less important as hardware gets cheaper and increasingly self-managing, but the ability to share data grows in importance. Sharing can happen at the subsystem level, with a proportion of capacity accessed from one platform and a proportion from another; at finer granularity, but still screened from other systems; or with one platform owning a space while another is allowed to read it.
True data sharing - at a record or even data element level - is a long way in the future, but it has to be the eventual goal, and it is an incredibly complex problem. If a logical database is distributed across many platform types, who does the locking and change logging? Worse, how are transaction back-out and point-in-time restores handled across multiple platforms, with many applications to be notified?
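The coordination burden behind those questions can be made concrete with a toy two-phase commit sketch. This is an illustration only - the Participant class and its prepare/commit/rollback hooks are hypothetical names, standing in for whatever interface each platform would actually expose - but it shows why every platform holding a slice of the data must vote before a change lands, and why every one must be notified when a transaction is backed out:

```python
# Toy sketch of two-phase commit across platforms sharing a logical
# database. All names here are illustrative, not any real product API.

class Participant:
    """One platform holding a slice of the logical database."""

    def __init__(self, name, will_prepare=True):
        self.name = name
        self.will_prepare = will_prepare  # whether phase 1 succeeds
        self.state = "idle"

    def prepare(self):
        # Phase 1: take local locks, write the change log, then vote.
        self.state = "prepared" if self.will_prepare else "aborted"
        return self.will_prepare

    def commit(self):
        # Phase 2 (success): make the change durable, release locks.
        self.state = "committed"

    def rollback(self):
        # Transaction back-out: undo via the change log, notify apps.
        self.state = "rolled-back"


def two_phase_commit(participants):
    """Commit only if every platform voted yes; otherwise back out all."""
    votes = [p.prepare() for p in participants]  # ask everyone first
    if all(votes):
        for p in participants:
            p.commit()
        return True
    for p in participants:
        p.rollback()  # every platform must be told to back out
    return False
```

Even this stripped-down version shows the pain point the column raises: one slow or failed voter forces a back-out on every other platform, and a point-in-time restore would have to replay those logs consistently across all of them.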
In-band metadata-based virtualisation, coupled with self-describing data, has the potential to restore homogeneity and therefore application synergy, while supporting freedom of platform choice and network-level availability. Not every supplier's current roadmap holds the promise of ever getting there.
Phil Payne is principal at Isham Research