Feature

Tomorrow's storage world dawns today

What new technologies and systems are emerging to help IT managers cope with the ever-increasing demand for storage? Nick Langley finds out

How do you plan and develop for the future when present storage requirements are a fast-moving target? With the demands of e-commerce and multimedia, many organisations find themselves doubling their storage every year just to stand still. Yet within most organisations there is already a lot of under-utilised storage, and much of the new media purchased each year is destined to be used at perhaps as little as 30 per cent of its capacity.

One answer is better storage management: tracking down the capacity that is under-used, while monitoring the loads on busier servers, to ensure a proper balance across all available media. Hence the trend towards central consolidation of storage. But without standards that make any storage available to any device that needs it, much capacity is going to remain locked away. Much dedicated server-attached storage remains inaccessible to other devices. Even with network-attached storage, servers are essentially coupled to proprietary storage from the same manufacturer, or at best confined to the same operating system.

Overcoming these problems is the promise of storage area networks (SANs), with intelligence provided on the SAN by products such as Tivoli Storage Manager.

Sun's storage marketing manager, Chris Atkins, says that if SANs are to be the de facto way of providing storage, they have to pass the Lan test. 'On Lans you don't have hardware and software coupled; you're not confined to the products of a single vendor. The opposite is true in storage at the moment.'

Sun's most significant current development is Jiro. It is intended to ensure that all storage devices are used to the full. 'Jiro defines the APIs between software bits and pieces, putting Java wrappers around storage devices, such that they can all communicate with one another in the same way,' says Atkins.
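
The idea is easier to see in a sketch. The code below is purely illustrative - the names are invented for this article and are not part of the Jiro specification - but it shows the principle of wrapping a vendor's proprietary device behind a common Java interface so that management software can treat every device alike:

    // Illustrative only: these types are invented here and are not the Jiro APIs.
    interface ManagedStorageDevice {
        String vendor();
        long capacityBytes();
        long freeBytes();
        void createVolume(String name, long sizeBytes);
    }

    // A hypothetical proprietary driver of the kind an array supplier ships today.
    interface AcmeArrayDriver {
        long rawCapacity();
        long rawFree();
        void createLun(String name, long sizeBytes);
    }

    // The Java wrapper: vendor-specific calls are hidden behind the common interface,
    // so one management application can talk to every device in the same way.
    class AcmeArrayWrapper implements ManagedStorageDevice {
        private final AcmeArrayDriver driver;

        AcmeArrayWrapper(AcmeArrayDriver driver) { this.driver = driver; }

        public String vendor() { return "Acme"; }
        public long capacityBytes() { return driver.rawCapacity(); }
        public long freeBytes() { return driver.rawFree(); }
        public void createVolume(String name, long sizeBytes) { driver.createLun(name, sizeBytes); }
    }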

He stresses that, like Java, Jiro is not a product but an initiative which Sun hopes will be taken up industry-wide. Agreement of the specification is out of Sun's hands and rests with communities of suppliers and users. Atkins expects the standard to be ratified by the end of Q1 2000, and a number of companies to be shipping Jiro-compliant products by the end of this year.

When standards take over, customers won't need to buy from array suppliers, which essentially manufacture storage products to proprietary standards. 'Currently it's not worth Sun, or IBM, or EMC, going into solid-state disks, because the market's too small,' says Atkins. 'With SAN standards, the market potential will be realised by the technology developer, not a middle-man.'

The real issue is not how to supply storage capacity to meet growth, but how to manage the growth, says Tony Reid, storage solutions manager at Hitachi Data Systems. 'With a lot of NT servers with directly attached storage, only 30 per cent of capacity is being used, yet other servers keep running out of storage. The answer is to consolidate the storage - or attach it all to a SAN, where all storage would be accessible by every server. If it doesn't happen, within three years most organisations won't be able to cope with storage access and backup requirements on the Lan.' Reid believes that, with the standards coming from the Storage Networking Industry Association, we'll have plug-and-play storage in data centres by the end of this year.
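
To see why that 30 per cent figure matters, take an illustrative example (the numbers are hypothetical, not Hitachi's): 10 NT servers, each with 50Gbytes of directly attached disk used at 30 per cent, leave 350Gbytes stranded where no other server can reach it, while the same capacity pooled on a SAN could hand that headroom to whichever server is about to run out.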

According to Robin Pilcher, Tivoli's European marketing manager for storage management, up to 60 per cent of Lan traffic currently consists of backup, archiving and disk space management - work that is essential but not productive. This will move to the SAN, which will have the intelligence to manage all storage-related tasks, taking the load off application servers. 'More than half a web server's cycles can be disk housekeeping,' he says.

Tivoli talks about information management, rather than storage management - the free movement of any data across the enterprise, to wherever it's needed.

There's an 'aggressive' plan for rolling out enhancements to Tivoli Storage Manager for managing information and the SAN. All modules can be used stand-alone or integrated, and all share a common web-browser-based interface, so they can be managed from anywhere.

Tivoli Storage Manager has been radically transformed from ADSM (an IBM storage management product), which quietly disappeared as a product name before Christmas. Also just before Christmas, IBM acquired SANergy, which will provide the base for Tivoli's data sharing capability.

'It's possible today to have data sharing using SANergy,' says Pilcher. AIX, NT, and Solaris can all access exactly the same data file, instead of needing multiple copies.

Storage management requirements are moving beyond the firewall, with the need to support service levels along supply chains, and 'pervasive computing' using laptops, handhelds and mobile phones. For supply-chain, or business-to-business, support there's Tivoli Cross Site, which integrates with Tivoli Storage Manager for the storage-related elements of service - every transaction at some stage is going to require data.

The Tivoli Storage Manager client can be installed on laptops and other mobile devices. It has the intelligence to carry out backups, for example, in a way appropriate to the link to the Lan or SAN, without interfering with the work the user has logged on to do. If the user has linked via a phone line, only changed bytes would be backed up; if on a network, backup would be at block or file level. This so-called Adaptive Differencing is also used for backing up devices on the enterprise network.
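
A minimal sketch of that decision might look like the following - the types are invented here, not Tivoli's implementation, but they capture the link-aware choice Pilcher describes:

    // Illustrative sketch of link-aware backup granularity; not a Tivoli Storage Manager API.
    enum LinkType { PHONE_LINE, LAN, SAN }
    enum Granularity { CHANGED_BYTES, BLOCK, FILE }

    class MobileBackupPolicy {
        // Ship only as much data as the link can comfortably carry.
        static Granularity choose(LinkType link, long fileSizeBytes) {
            if (link == LinkType.PHONE_LINE) {
                return Granularity.CHANGED_BYTES;   // slow dial-up link: send byte-level deltas only
            }
            // On a Lan or SAN, small files are simplest to resend whole;
            // larger files are backed up block by block.
            return fileSizeBytes < 1_000_000 ? Granularity.FILE : Granularity.BLOCK;
        }
    }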

Tivoli supports Lan-free data movement over the SAN now. Server-free data movement will be added later this year. By the end of the year, much more sophisticated data sharing will arrive, allowing data to be shared across many different platforms.

Meanwhile, storage media manufacturers are rising to the challenge of annually doubling requirements. In the first seven years of the 1990s, storage density on hard disks doubled every 18 months; now it doubles every 12 months. The first high-specification PCs, affordable only by power users in businesses, had 20Mbyte hard disks. Now parents buy their children £600 multimedia PCs with disks measured in tens of gigabytes.

Sooner or later, we will hit the limits of current magnetic media technology. R&D white papers from storage developers are already talking in terms of bits being stored at the molecular level, and the search is on for 'frontier materials' - alloys with smaller molecules to increase the storage density still further.

IBM's Zurich Research Lab has come up with an alternative high-density storage technology, using mechanical components derived from atomic force microscopy (AFM), a technology developed by IBM. It's been christened the Millipede because it uses arrays of 1,000-plus legs, or tips, on cantilevers etched in silicon. Tiny indentations poked into a polymer layer by the AFM tips represent stored bits, which can be read back by the same tips. IBM believes it's possible to reach storage densities of up to 80 billion bits per square centimetre, five to 10 times more than the expected limit for magnetic storage.
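
As a rough conversion: 80 billion bits per square centimetre works out at about 516 Gigabits per square inch (80 multiplied by the 6.45 square centimetres in a square inch) - well over 10 times the 35.3 Gigabits per square inch magnetic record described below.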

A research prototype of the Millipede has already demonstrated the feasibility of this new approach to ultra-high-density storage. Energy consumption is low, and wear is less of a problem than with larger mechanical systems. And, like silicon chips, nano-mechanical devices are suited to batch production.

The remaining challenge is to ensure that the Millipede can read and write data fast enough to be practical. Parallel operation of 1,024 tips could make data movement rates of more than 100Mbits/sec possible. The medium can be erased by heating the polymer, which restores it through a reflow process. Bit-level erasing isn't possible, but IBM says this is not required in most applications.
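
(Spread across 1,024 tips, an aggregate rate of 100Mbits/sec works out at only about 100Kbits/sec of reading or writing per tip.)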

Peter Vettiger, who leads the Millipede research, says thermomechanical data storage may go well beyond the density of magnetic storage technology. 'However, our work is still in the early stages of development, and the use of a polymer as the storage medium is only one of several possible solutions. If the required functionality can be integrated into cantilevers and tips, the Millipede concept may become a universal read/write device for future storage systems.' Vettiger adds that devices only a few centimetres or millimetres across will open up new possibilities for integrating computing power into small 'pervasive' devices, such as video cameras, mobile phones and watches.

On the magnetic media side, IBM has achieved 35.3 Gigabits per square inch, almost double the 20 Gigabit-per-square-inch record it set earlier in 1999. This was enabled by the development of a new metal alloy disk coating. As bits are made smaller, they tend to lose their magnetic orientation over time: the movement of atoms at room temperature is enough to flip the polarity of a bit from 0 to 1. IBM claims bits written onto the new material have the same stability as those on the lower-density disk drives already on the market. John Best, vice-president of technology at IBM Storage Systems Division, says disks with the new coating can be made commercially using existing equipment.

The one terabyte hard drive is on the horizon. Within a decade, with new magnetic alloys and handling techniques, we could be looking at disks with a density of one terabyte per square inch. A 3.5 inch platter on a desktop PC's hard drive would hold nearly 50 gigabytes, and a 2.5 inch disk on a notebook 20Gb-plus. Since a single drive can hold as many as 10 platters, those figures could be multiplied by 10.

Magnetic tape has had the last rites read over it more often than the mainframe, but like the mainframe, it continues to evolve. Last May, IBM's Removable Media Storage Solutions Development Lab in Tucson announced what was then the highest storage capacity in the tape storage industry: 100 gigabytes of native, uncompressed data on a single Linear Tape Open (LTO) Ultrium cartridge (see box).

Tape stability is already assured: SLR (Scalable Linear Recording) tape, stored correctly, carries a lifetime guarantee, and DLT manufacturers promise a 30-year shelf life. These figures are extrapolated, since the technologies haven't been around that long.

As the technology moves from megabyte to terabyte capacity, the physical attributes of the tape drives will change.



This was first published in April 2000

 
