We will no longer have to think about response times; if a system of any description cannot be 'instant on', there will be hell to pay. End of story.
Of course, we'll be able to cope with that, won't we? Every month, it seems, our most powerful and flexible enterprise systems take another step towards the nirvana of 100 per cent availability, and all the way down the line, smaller and lower-cost processors are rolled out with marginally (or in some cases, substantially) more resilience. A little redundancy here, automated failover there, sprinkle in some extra fault-tolerant features - and before you know it, you've added an extra '9' to the 99.xx availability you were offering already. At the same time, the networks are gaining resilience, with substantial enhancements in the more troublesome areas of technology (such as caching in routers), elimination of single points of failure, and vast improvements in bandwidth and network management to reduce bottlenecks to a minimum.
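It is worth pausing on what that extra '9' actually buys. Each additional nine cuts the permitted annual downtime by a factor of ten, which the following sketch makes concrete (the figures are simple arithmetic, not vendor claims):

```python
# Minutes in a (non-leap) year.
MINUTES_PER_YEAR = 365 * 24 * 60

def annual_downtime_minutes(availability_percent: float) -> float:
    """Maximum downtime per year, in minutes, at a given availability level."""
    return MINUTES_PER_YEAR * (1 - availability_percent / 100)

for level in ("99", "99.9", "99.99", "99.999"):
    minutes = annual_downtime_minutes(float(level))
    print(f"{level}% availability allows about {minutes:.1f} minutes of downtime a year")
```

So the leap from 99.9 to 99.99 per cent is the difference between roughly eight-and-a-half hours of downtime a year and under an hour - which is why each additional nine is so much harder (and more expensive) to deliver than the last.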
But is availability enough? Implicit in the whole concept of availability is the idea that a service is either there, or it isn't. The quality of a service, rather than its mere presence, is even more important, and is becoming all the more so, as user expectations of e-business applications increase. As Fred Moore puts it, in his report Scaleability Considerations and Trade-offs (www.xephon.com/scale.html): 'Measuring the availability of a computer or server alone is no longer adequate, as it describes only the presence or absence of a service. QoS describes how well these platforms operate under normal and adverse conditions. If a store has 20 checkout counters and only one is operative, the store is still available to its customers but its ability to serve them (the quality of service) is greatly diminished or degraded. Vendor claims on availability vary, but determining a value that you can assign to a certain computing platform is crucial to selecting a platform. Does the QoS meet your critical application requirement needs? With e-business, the internet and intranets empowering a greater number of people daily, enterprises will not be able to survive much longer without near 100 per cent availability levels.'
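Moore's checkout-counter analogy can be reduced to a one-line calculation. The QoS metric below - remaining capacity as a fraction of normal capacity - is our own simplification for illustration, not a formula from his report:

```python
def quality_of_service(operative: int, total: int) -> float:
    """Fraction of normal serving capacity still available."""
    return operative / total

# Moore's store: 20 checkout counters, only one working.
# The store is still 'available', but its quality of service is only 5%.
print(quality_of_service(1, 20))
```

A binary availability measure would report this store as fully 'up'; a QoS measure reports it at five per cent of normal capacity - which is much closer to what its queueing customers actually experience.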
Of course, the real danger with the hype surrounding 'instantly available' applications is that we are chasing a moving target, just like the roads that magically generate more traffic every time you widen them. Quality of service has to be a balance between what is expected of a potential application, and what is deliverable. As an industry, we have traditionally been far better at improving system performance than at managing the expectations of our customers and business managers. Yes, we need to move towards our goal of an IT infrastructure with the instant access of a PalmPilot calendar, and an architecture as unobtrusive and seamless as the pipes that supply our drinking water. But at the same time, we need to make sure that the end user knows how much of that vision is actually available today.
Even within the relatively stable world of internal accounting systems, it's astonishing how often service level agreements are impractically vague, or neglected altogether. As we move into the territory of 'extranets' with other companies and web-based applications, where many of the technical components are outside the direct control of the IT department, the idea of agreed service levels becomes far more complex (but even more critical).
E-business has brought with it the optimistic philosophy that, given enough imagination, anything is possible, and there's no doubt that the most successful implementors believe passionately in this principle. But to realise the vision, and deliver the quality of service that today's users expect, business strategists and IT managers need to work more closely than ever before, and must formalise their expectations of one another.
Mark Lillycrop is director of research at market watcher Xephon (www.xephon.com)
This was first published in July 2000