Analyst groups report that the amount of data stored on corporate networks is growing at more than 40% a year, yet the ability to move that data around the network is failing to keep pace.
Compare the average server storage of around 1Gbyte in 1990 with the average of 75Gbytes in 2000. Similarly, in 1990, enterprises were using network servers predominantly for file storage, with some centralised application provision. Today, high-volume databases on the Lan have become commonplace, often holding many hundreds of gigabytes of data.
From the moment that Citrix produced its Winframe multi-user server, based on Microsoft's Windows NT 3.51 operating system, application-hosting across the Lan suddenly became a real business opportunity. With the network management mantra of the past five years being total cost of ownership (TCO), it was no surprise when Microsoft purchased the multi-user technology from Citrix to create its Terminal Server product. By doing so it made application serving a key goal of network designers and administrators.
Other technologies have also placed heavy demands on the network, such as multimedia, technology-based training, video-conferencing and video streaming. Looking forward over the next few years, the strain on the network is only likely to increase.
Yet how has network architecture responded to this change and increase in use?
When networks were first being installed, Novell was the dominant player, and a multi-tiered infrastructure was popular in many medium-to-large enterprises.
In order to conserve precious bandwidth, networks were split into three tiers. The lowest tier carried users' access to their local server. The next tier dealt with the biggest killer of bandwidth at the time - print services. The final tier was inter-server communication, used for administration, the movement of data between servers, key network services and backing up during the working day.
As printers became more commonplace, and with the take-up of TCP/IP, it became possible to flatten the network back into a single tier. This was because the network protocol suite provided administrative tools for precisely this purpose.
Ironically, as IT departments struggle to cope with the application and usage pressures on the corporate network, the idea of installing a multi-tiered network is becoming fashionable again. User access to resources sits on the lowest tier; high-bandwidth, specialised services such as multimedia, video streaming and voice-over-IP occupy the next tier; and data access takes the top tier.
When you strip out all the hype, it is this use of a separate tier for moving data around the network that underpins the idea of the storage area network (San).
One key way that Sans differ from previous multi-tiered architectures is how they deal with the way data is held. A serious problem with running large servers holding significant amounts of data in the current network environment is the limitation of the operating systems and bandwidth access to the physical devices. It is rather like having a supermarket with just a small number of checkouts and a single door.
To get around this, a San provides a mechanism whereby traditional servers are complemented, and later replaced, by shared storage devices accessed via a number of servers. Distance is also no longer a problem, as the fibre interconnect removes the need for storage cabinets to be located in the same room as the servers through which they are accessed. This significantly reduces the cost of global datacentres and provides higher access speeds.
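The key architectural shift - several servers addressing one shared pool of storage rather than each owning its own disks - can be sketched in a few lines. This is an illustrative model only; the class and method names (StoragePool, Server, save, load) are invented for the example and do not correspond to any real San API.

```python
# Illustrative sketch of the San idea: storage is pooled behind the
# fabric and shared, instead of being private to a single server.
# All names here are hypothetical, not a real storage API.

class StoragePool:
    """Shared storage sitting on the San fabric."""
    def __init__(self):
        self.blocks = {}            # block address -> data

    def write(self, address, data):
        self.blocks[address] = data

    def read(self, address):
        return self.blocks.get(address)


class Server:
    """A server that reaches storage over the fabric, not local disks."""
    def __init__(self, name, pool):
        self.name = name
        self.pool = pool            # every server shares the same pool

    def save(self, address, data):
        self.pool.write(address, data)

    def load(self, address):
        return self.pool.read(address)


pool = StoragePool()
db_server = Server("db1", pool)
file_server = Server("files1", pool)

# Data written through one server is reachable through another,
# because both address the same shared devices.
db_server.save(0x10, b"customer records")
print(file_server.load(0x10))
```

The point of the sketch is simply that the storage, not the server, is the unit being shared - which is what lets capacity be added or relocated without touching the servers themselves.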
To achieve this, the San layer is designed around FCAL (fibre channel arbitrated loop), which carries the familiar SCSI command set over a fibre channel loop.
Carrying the SCSI command set has enabled a lot of FCAL development to take place very quickly, and when it first came to market over five years ago, it promised speeds five to 10 times faster than the more common versions of SCSI. In addition, management traffic is typically carried over TCP/IP, although there is already some confusion over the way that bridges and gateways between the San and the rest of the network are defined. Such issues should be resolved in the next few years.
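The "five to 10 times" claim can be sanity-checked with period-typical throughput figures. The numbers below are assumptions for illustration, not figures from the article: 1Gbit/s FCAL delivers roughly 100Mbytes/s of payload, against roughly 10Mbytes/s for Fast SCSI and 20Mbytes/s for Fast Wide SCSI.

```python
# Rough throughput comparison using period-typical figures.
# These values are assumptions for illustration, not measurements.
fcal_mb_s = 100            # ~1Gbit/s FCAL payload rate
fast_scsi_mb_s = 10        # Fast SCSI (narrow)
fast_wide_scsi_mb_s = 20   # Fast Wide SCSI

# The ratio against each SCSI variant brackets the claimed range.
print(fcal_mb_s / fast_wide_scsi_mb_s)  # the "five times" end
print(fcal_mb_s / fast_scsi_mb_s)       # the "10 times" end
```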
There are issues yet to be resolved and, despite the number of suppliers who have committed to producing San and FCAL devices, the hype for FCAL has outstripped the pace of development.
This, in turn, has highlighted two areas that have been responsible for many of the problems. As the San market began to develop, these same two problems have bedevilled the storage industry and resulted in several competing groups of suppliers.
Interoperability: The first problem is that of interoperability. With the high speed of data transfer and the newness of some of the technology, many of the suppliers have had problems showing reliable interoperability with both partners and competitors.
At present, there are a number of suppliers who have been relatively successful with their own San solutions. But almost without exception, these are single-supplier Sans where the supplier has sidestepped the interoperability issue.
Instruction set: The second problem has been the instruction set used to control and manage different devices. This, in turn, has shown how far we are from a true solution that meets the initial goals for both FCAL and Sans.
Alongside this has been the confusion caused by the growth in different industry alliances with overlapping goals and a wish to take control of this potentially lucrative market.
Currently, there are four bodies pushing to set the standards and ensure interoperability. The Fibre Channel Industry Association (FCIA) was originally set up to promote the development of fibre channel devices.
The Storage Network Industry Association (SNIA) is supplier-controlled and is trying to resolve a number of existing problems with both hardware interoperability and software management systems.
Alongside these is the Legato-led Celestra Consortium, which is focused on data movement and management in the San. Legato has managed to interest many of the large database suppliers, including IBM, Oracle and Microsoft, and this is likely to lead to some interesting developments in the data warehousing arena.
The first supplier to introduce a new version of its existing database is likely to be Microsoft, although it is unlikely that SQL Server 2000 will incorporate any of the Celestra components.
Sun, through its Jiro initiative, has also entered the management and control market as part of its expansion plans for Enterprise JavaBeans. For some time now Sun has been talking about using its Jini technology at the device management layer, and it is now talking to a number of suppliers about extending Jiro to support the management of Sans.
There is room for all in the market at present and, providing that they work towards a common standard, this should resolve the management issue for Sans.
In reality, it will take at least another couple of years before the San market starts to show a significant impact on the corporate network, but given the complexity and the need to redesign corporate infrastructures, managers need to begin their technical training sooner rather than later.
How a San works
As new applications have been developed for voice-over-IP, video streaming and running server-based applications, increased load has been placed on network bandwidth. Storage area networks help to resolve bandwidth issues by providing a separate data network which does not interfere with the rest of the network.
The high speed of data transfer required on a San has resulted in a lack of reliable interoperability between Sans. Four bodies are currently working to create a standard - the Fibre Channel Industry Association, the Storage Network Industry Association, the Celestra Consortium and Sun's Jiro initiative.