A new way of networking

As organisations demand more storage space on their networks, the traditional thin client model struggles to cope, Cliff Saran considers two alternatives

IT departments are well aware of the management costs of giving users masses of local storage, yet disc drive manufacturers continually break the barriers of microscopic engineering to pack more data on to drives no bigger than their predecessors. Today, PCs fitted with 20, 30 or even 40Gbyte hard disc drives are commonplace.

According to Sean Hook, product marketing manager for enterprise storage at drive manufacturer Seagate, the areal density of hard disc drives is doubling every two to three years - effectively doubling the amount of data that can be crammed on to a single disc platter. Before too long, desktop PCs equipped with 100Gbyte drives will be the norm.
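Hook's doubling rule is simple compound growth. The sketch below works through the arithmetic; the starting capacity and doubling period are illustrative assumptions, not Seagate figures:

```python
# Projected drive capacity if areal density - and hence capacity per
# platter - doubles every two to three years (figures illustrative).

def projected_capacity_gb(start_gb, years, doubling_period_years):
    """Capacity after `years` if it doubles every `doubling_period_years`."""
    return start_gb * 2 ** (years / doubling_period_years)

# A 40Gbyte drive doubling every two years passes 100Gbyte within
# three years and reaches 160Gbyte in four:
print(projected_capacity_gb(40, 4, 2))   # 160.0
```

On the slower three-year doubling period, the same drive still passes 100Gbyte within four years, which is why 100Gbyte desktops look inevitable on either estimate.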

Running costs

In response to users demanding lower running costs for desktop PCs, the IT industry proposed an alternative desktop computer called the thin client. Unlike the PC, thin clients do not rely on masses of local storage for installing applications like Microsoft Office.

Some, derived from PCs, are equipped with a hard disc from which to load the operating system; others, pure thin clients, have no hard disc at all. Instead, the operating system is built into the thin client's read-only memory (Rom).

Now, with data and applications not resident on the thin client computer, the network becomes the only means by which these devices can be operated. The IT industry coined the term network computing to describe this new model of computing.

As the name implies, in network computing there is a great reliance on the network - it must be available all the time, and a fast response time is essential. It is widely accepted that, to allow data to be shared between different IT systems more effectively, a network needs to split data from applications. Thus an application like Lotus Notes would run on a dedicated application server, while a separate file server would be used for the Notes database.

A file server running a general purpose operating system like Windows NT or Solaris is inherently inefficient. This is because a general purpose operating system contains many superfluous components not required for storing and retrieving data across a network. A dedicated storage appliance, also known as enterprise storage, is generally regarded as a better approach, especially when large amounts of data must be managed and backed-up.

Therefore, the IT department has two options: either put enterprise storage on the corporate network, or use a dedicated network for data. The former approach is called network attached storage (Nas), and is a suitable step for a company that has outgrown the file servers on its local area network (Lan). The latter is a storage area network (San).
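The practical difference between the two shows up in how a server reaches the data: a Nas appliance exports whole files over the existing network, while a San presents raw block devices over a dedicated one. A minimal sketch of the distinction, in which the mount point and device name are hypothetical:

```python
import os

# File-level access, as with Nas: the appliance exports a file
# system over the corporate network (mount point is hypothetical).
def read_file_nas(name, mount="/mnt/nas-share"):
    with open(os.path.join(mount, name), "rb") as f:
        return f.read()

# Block-level access, as with a San: the host sees a raw device on
# a dedicated storage network (device name is hypothetical).
def read_blocks_san(device="/dev/fc-lun0", offset=0, size=4096):
    fd = os.open(device, os.O_RDONLY)
    try:
        os.lseek(fd, offset, os.SEEK_SET)
        return os.read(fd, size)
    finally:
        os.close(fd)
```

With Nas the appliance owns the file system; with a San the host does, and only raw blocks cross the wire, which is what lets a San serve databases as well as files.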

Usually, the cost of implementing enterprise storage with the Nas approach is low, as the file server is effectively replaced by a storage appliance on the same network. But, when large amounts of data are being stored and retrieved, network performance can become sluggish.

The San, on the other hand, is a dedicated network for enterprise storage. It is designed to handle vast quantities of data and is suitable for consolidating information from several geographical regions without affecting the performance of corporate-wide and local area networks.

Donal Madden, storage business manager at Compaq, believes that network attached storage could provide a replacement for today's file and print server. But, he notes, "Ethernet networks cannot scale to handle large databases such as Oracle."

The problem, according to Madden, is that 40% of bandwidth on Ethernet is taken up by network protocol, leaving just 60% of the available bandwidth for transmitting data. This makes it unsuitable for passing large amounts of data, as is the case when storage resides on the network rather than locally.
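Madden's figures translate into simple arithmetic. The sketch below, using illustrative numbers (the 40% overhead is his estimate, not a measured value), shows what is left for payload and what that means for a bulk transfer:

```python
# Effective throughput if protocol overhead eats 40% of raw Ethernet
# bandwidth, per Madden's estimate (all figures illustrative).

def effective_mbps(raw_mbps, overhead=0.40):
    """Bandwidth left for payload after protocol overhead."""
    return raw_mbps * (1 - overhead)

def transfer_seconds(gbytes, raw_mbps, overhead=0.40):
    """Time to move `gbytes` of data at the effective rate."""
    return gbytes * 8 * 1000 / effective_mbps(raw_mbps, overhead)

# On 100Mbit/s Ethernet only about 60Mbit/s carries data, so a
# 10Gbyte backup takes roughly 22 minutes instead of about 13:
print(round(transfer_seconds(10, 100) / 60))
```

At those rates, routine backups of multi-gigabyte stores start to crowd out ordinary traffic, which is the case for moving storage on to its own network.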

"Two to three years from now users will have three holes in the wall - one for the network, one for power and one for storage," says Madden.

With storage on a separate network, Madden predicts that Ethernet will provide enough bandwidth for mission-critical applications and could be deployed in areas such as videoconferencing and voice over IP telephony.

However, to achieve this goal, San technology will need to evolve to a point where boxes from different suppliers can work with different servers on the same network.

"We need to move to the Ethernet model, where equipment from different manufacturers can talk to each other," says Madden. The technology that will allow this to happen is fibre channel.

Three industry groups are working on standards for fibre channel - the Storage Networking Industry Association (SNIA), the Fibre Channel Industry Association (FCIA) and Sun's Jiro initiative, which is based on Java. Compaq is involved with SNIA and recently invested $6m in building an interoperability lab for San technology.

Storage giant EMC is involved in FCIA. Nigel Ghent, marketing director at EMC, says, "The biggest challenge is making the individual components work together." In a San, routers, switches, hubs, servers, enterprise storage operating systems and applications all need to come together.

Industry experts predict that true compatibility in Sans is about two years away.


Once this has been achieved, Ghent says the next step is management. "When you can access data from anywhere on a San, there is an issue of security," he points out. Other areas of management include load balancing and the ability to move data around the network so that any individual piece of equipment on a San does not impede access to data.

"What we need," explains Ghent, "is something like a Tivoli Enterprise or an HP Openview which works for a San rather than an IT system."

ComputaCenter is seeing huge demand for Sans, particularly in the City. "I assume banks are consolidating their servers for e-business," says Paul Farelly, infrastructure sales manager at ComputaCenter.

Farelly says Sans are far more complex than networks to implement. "At least on a network you have a static IT infrastructure. The San needs to connect to everything else in a company. It is one of the most complex technologies to implement," he explains.

The problem at the moment is one of a skills shortage. Ghent says, "People with San skills are more sought after, as firms need expertise in fibre channel and people who have an understanding of the procedures and methodologies involved in running large mainframe systems. The mainframe is very rigid, it is well managed and secure, unlike an NT environment where the IT manager may not know how many servers he has."

Ghent believes that mainframe skills need to be combined with the ability to work in a heterogeneous IT environment where IT people have been exposed to NT and the various flavours of Unix.


Storage accounts for as much as 60% to 70% of datacentre costs. And Stuart Curley, senior technical architect at Cap Gemini, says he is seeing resistance to San implementation because of this. "Initial costs are extremely high," he notes - it can cost millions of pounds to build a San. "It is not easy to build incrementally."

Curley adds, "We have not seen our customers going for Sans as they have already heavily invested in data centres." Building a San effectively involves creating an entirely new datacentre - at considerable cost.

There is also a cultural barrier that needs to be overcome relating to the ownership of data. "The San is seen as negative in some companies as it involves distributing data. Users no longer own the data as a central resource," Curley explains.

Curley says he is seeing resistance to change among the people responsible for managing data (database administrators) but not among those who manage the networks (network managers). "Network managers are keen, but database administrators seem threatened by this new way of working," he says.

For the future, Curley predicts that a new category of IT manager will appear who combines the skills of database administration with network management.

Industry experts agree that, in the long run, fibre channel and Sans will lower the cost of delivering IT. Backing up data will become simpler and the design of Sans can be an integral part of a disaster recovery strategy, where data is mirrored at two or more datacentres.
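At its simplest, the mirrored-datacentre design comes down to a synchronous write to two sites. Real Sans mirror at the block level in dedicated hardware; this file-level sketch, with hypothetical paths, only illustrates the principle:

```python
# Conceptual sketch of synchronous mirroring across two sites for
# disaster recovery. Paths are hypothetical; real San mirroring
# happens at the block level, not the file level.

def mirrored_write(data, primary_path, secondary_path):
    """Write `data` to both sites; an exception on either write
    means the update is not considered complete."""
    for path in (primary_path, secondary_path):
        with open(path, "wb") as f:
            f.write(data)
```

If the primary site is lost, the secondary copy is already current - which is the property a mirrored San is designed to buy.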

Companies such as BT are now looking into providing storage on tap through an application service provider (ASP)-style arrangement.

There is no reason why a multinational cannot have its data stored and managed centrally through an outsource agreement with a service provider. Regional offices would then access data over the Internet and back-ups would be performed centrally - which would reduce IT costs.

But data is a company's most valuable IT asset. With external companies holding that data, issues of security need to be addressed. For instance, a service provider's datacentre may store data from several rival businesses, so providers must find a way of keeping each customer's data isolated.

The other big concern is Internet security. With data accessed over an Internet connection - even a secure one - there is a perception that, because the data is not housed in a company's own datacentre, it is more prone to attack or theft by a hacker.

San technology is still evolving - standards for plug and play compatibility are still some way off. There appears to be a battle in the datacentre over who should own the data, and companies are wary of outsourcing their storage requirement.

Users need to prepare before adopting a San strategy. This will mean investing in mainframe-like skills in their datacentres, combined with Unix and NT expertise.
