Feature

Storage travels into the future

Insatiable demand for data has forced radical changes to storage technology, writes Nicholas Enticknap.

There seems no end in sight to the massive growth in IT storage requirements, which really started when the Web became a serious business tool.

Gartner Group principal analyst Robin Burke estimates that e-mail boxes alone are generating a massive 900Tbytes of data a year. And that is less than 1% of all the new data generated annually, which has been guesstimated at about 1Exabyte (1,000,000Tbytes).
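A quick sanity check on the figures quoted above bears out the "less than 1%" claim:

```python
# Check the quoted figures: 900Tbytes of e-mail a year against an
# estimated 1Exabyte (1,000,000Tbytes) of new data generated annually.
email_tb = 900
total_tb = 1_000_000  # 1 Exabyte expressed in Tbytes

share = email_tb / total_tb * 100
print(f"E-mail share of new data: {share:.2f}%")  # 0.09%, well under 1%
```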

The Tarmac Group provides an illustration of what this growth can mean for one company. "In 1990 Tarmac Quarry Products had just 5Gbytes of information online," says head of group IT services Keith Howell. "Today our SAP database is growing at the rate of 15Gbytes every month."

Growth in data stored over the past decade has forced radical changes in storage technology. The switch from direct-attached to networked storage started in the late 1990s, and is steadily gathering momentum.

According to IDC, networked storage (including both storage area networks - Sans - and network-attached storage - Nas) will reach 68% of all storage capacity shipped by 2005. IDC analyst Claus Egge believes that, "Networking fixes most storage challenges."

We are still at the beginning of this storage networking revolution, which itself is creating problems for users. There are many new connectivity technologies to evaluate - iSCSI, FCIP, Infiniband - and it is not yet clear which will win.

Standards for switching in Sans are now in place with the ratification of FC-SW-2 last year, and most switches launched since then conform to this standard. Standards for storage management are not so well advanced.

So Sans are still not plug-and-play in the way that Lans are and that Nas is. Few users will be able to install or run one without forming a partnership with a specialist storage networking supplier.

Most Sans are proprietary, involving use of storage devices from one supplier only. Heterogeneous Sans, connecting to multiple server types and supporting all brands of storage, including that already on site and attached to existing servers, are still rare.

According to Sun's chief storage technologist Balint Fleischer, his customers are looking at "putting multiple large storage devices into one managed entity using aggregation technologies". The principal technology available for this is storage virtualisation.

Storage virtualisation software ensures that data is held in a format that is independent of that required by any physical storage device or any logical file or database system.

As a result, virtualisation makes possible the holy grail of data sharing that has been the objective of storage designers since the early 1990s.

There is a great deal of work to be done before we reach that goal, however. In the meantime, users are again faced with a confusing number of choices. Virtualisation can be implemented within a server, within a storage device or within a network connecting the two. There are pluses and minuses with all these approaches, and most large users will end up using different techniques in parallel, according to the nature of the problems they are trying to solve.
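The core idea behind virtualisation can be sketched in a few lines. This is an illustrative toy only (the device names and capacities are invented, and real products work at far lower levels of the stack): several physical devices are presented as one contiguous logical address space, and the layer hides which device actually holds each block.

```python
# Toy sketch of storage virtualisation: one logical address space
# mapped across several physical devices. Device names and capacities
# below are invented for illustration.

class VirtualPool:
    """Present multiple physical devices as one logical block range."""

    def __init__(self, devices):
        # devices: ordered list of (name, capacity_in_blocks) tuples
        self.devices = devices

    def locate(self, logical_block):
        # Walk the devices in order, subtracting each capacity, until
        # we find the device that holds this logical block.
        offset = logical_block
        for name, capacity in self.devices:
            if offset < capacity:
                return name, offset
            offset -= capacity
        raise ValueError("logical block beyond pool capacity")

pool = VirtualPool([("array_a", 100), ("array_b", 50), ("array_c", 75)])
print(pool.locate(120))  # ('array_b', 20)
```

An application addressing block 120 of the pool neither knows nor cares that the data physically lives at offset 20 of the second device, which is what makes it possible to mix storage brands behind one managed entity.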

Virtualisation not only provides a means of overcoming the problems posed by multiple device types and file and database formats, it also makes managing data easier. Management of data storage is fast becoming the major IT issue of the first decade of the 21st century.

Burke predicts that storage will account for 85% of IT budgets by 2003. Effort spent ensuring this money is used as efficiently as possible will therefore pay greater dividends than the same effort applied anywhere else. "The next challenge is storage area management, covering everything from application to Raid management," says Burke.

Egge quantifies what this means to the user. "Storage capacity is growing at 80% a year, which means storage management must improve efficiency at greater than 60% a year."
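One way to read that arithmetic (an illustrative interpretation, not Egge's own model): if capacity compounds at 80% a year while management efficiency, in capacity handled per administrator, improves at only 60% a year, the management workload still grows.

```python
# Illustrative reading of Egge's figures: capacity growth outpacing
# efficiency improvement leaves a residual workload growth that
# compounds year on year.
capacity_growth = 1.80    # capacity multiplies by 1.8 each year
efficiency_growth = 1.60  # capacity-per-administrator multiplies by 1.6

workload_growth = capacity_growth / efficiency_growth
print(f"Annual workload growth: {(workload_growth - 1) * 100:.1f}%")  # 12.5%

# That residual gap compounds over time:
print(f"Workload after 5 years: x{workload_growth ** 5:.2f}")
```

Even a 60% annual efficiency gain leaves the workload growing at 12.5% a year, which is why Egge says improvement must exceed that figure.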

To summarise, users are being driven to make changes to their storage infrastructure to cope with the insatiable demand for data that the e-world has created. The trend to networked storage is inexorable. Once implemented, networked storage will be enhanced by new applications such as virtualisation, which will allow better exploitation of the information that is stored while at the same time making it easier to manage.

Web sites
For White Papers
www.dafscollaborative.org
www.lto-technology.com/newsite/html/about_white.html
www.snia.org
For products
ADIC www.adic.com; Auspex www.auspex.com; Brocade www.brocade.com; Compaq www.compaq.com; Computer Associates www.ca.com; DataCore Software www.datacore.com; EMC www.emc.com; Exabyte www.exabyte.com; Fujitsu Softek www.softek.fujitsu.com; HDS www.hds.com; Hewlett-Packard www.hp.com; IBM www.storage.ibm.com; Legato www.legato.com; McData www.mcdata.com; MTI Technology www.mti.com; Network Appliance www.netapp.com; Oracle www.oracle.com; Quantum www.quantum.com; QLogic www.qlogic.com; Seagate www.seagate.com; Sun www.sun.com; Tivoli www.tivoli.com; Veritas www.veritas.com

Steps to the future of server clustering
One challenge for the future relates to the technology itself. The pipes that connect storage devices to each other and to servers are getting larger and larger. Gigabit Ethernet is now the state-of-the-art for Lan storage, and 2 Gigabit Fibre Channel for storage networks.

A server's ability to process data is constrained by the speed at which it can be read in and pushed out, which is a function of the internal bus structure. With Intel servers, the prevailing bus is the PCI bus, which has a current speed of 1.07Gbps. This is already slower than 2 Gigabit Fibre Channel, and the problem will get worse once 10 Gigabit links arrive next year.
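The 1.07Gbps figure follows from the bus geometry of conventional PCI: a 32-bit data path clocked at 33.33MHz, which is the peak rate before protocol overheads are taken into account.

```python
# Where the 1.07Gbps PCI figure comes from: a 32-bit bus clocked at
# 33.33MHz moves 32 bits per cycle (peak, before overheads).
bus_width_bits = 32
clock_hz = 33.33e6

pci_gbps = bus_width_bits * clock_hz / 1e9
print(f"PCI peak bandwidth: {pci_gbps:.2f}Gbps")  # ~1.07Gbps

# Already behind 2 Gigabit Fibre Channel, let alone 10 Gigabit links:
print(f"Shortfall vs 2GFC: {2 - pci_gbps:.2f}Gbps")
```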

Intel is working to enhance the PCI bus. The first evolution, PCI-X, is used in the latest Intel-based servers such as IBM's x440.

Intel is also working on next-generation interconnect technologies. It has now put its main development effort behind one known as 3GIO.

In its present form, however, 3GIO will not cope well with 10 Gigabit transmissions. This means some other solution will have to be found, and that is likely to be clustering of servers. Users can expect to see the emergence of server area networks, analogous to today's Sans, and offering many of the same benefits.

This is likely to be where Infiniband establishes itself first. As things stand at present it looks far and away the most suitable candidate for server clustering.


This was first published in November 2002

 
