Managing applications effectively has always been hard, but for many enterprises it is now worse than ever, with pressure to Web-enable core business systems and open them up to external access for customers and trading partners. The client/server era of the 1990s seemed bad enough at the time, but many IT managers must now be looking back to those days wistfully.
Even the last chaotic months of 1999, dominated as they were by the Y2K issue, at least had clear-cut priorities.
According to Nigel Beighton, technology delivery director at systems house CAP Gemini, a major problem is that the new era of e-business is at an immature stage with a plethora of unstable technologies and small software suppliers. "There's a technology flood on the market, the majority of which is at a semi-beta stage," says Beighton. "In a rush to get market share, vendors are releasing products early. Even the leading vendors are relatively small, and enterprises - terrified of people doing an 'Egg' to their market - are accepting a high degree of risk in systems development that they would never have accepted in the old days of client/server." Accordingly, businesses are prepared to entrust critical projects to relatively untried suppliers whose future tenure in the e-business software market is still in doubt.
The problem of developing new applications, and modifying existing ones, for e-commerce and e-business is compounded by an acute skills shortage - typical of the early stages of a new application era. "We had the same issue in the early days of client/server, but that went away as the market caught up," says Beighton, who admits that CAP Gemini, like other service companies, has been hit by the skills shortage. "It is hurting, and four out of five companies we deal with suffer badly from it."
The problem is being compounded further by what, at first sight, would seem to be good news: the final maturation of component-based development technologies with the big swing towards object-oriented programming languages, especially Java. Component-based development concerns how individual software programs are built and packaged, rather than the larger-scale deployment based on Internet technologies, and it has been a long time coming. It brings the promise of lower software maintenance costs, because it is easier to identify the specific component where a particular change has to be made, without affecting all the others. It should also lower long-term development costs, because software components performing specific tasks - for example, a credit status check - can be reused in different applications.
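The reuse idea can be sketched in Java, the language the article mentions. This is a hypothetical illustration, not any vendor's actual product: the credit status check sits behind an interface, so any application can call it, and the implementation can be fixed or replaced without touching the callers.

```java
// Hypothetical sketch of component reuse: a credit status check
// packaged behind an interface so different applications share it.
interface CreditCheck {
    boolean isCreditworthy(String customerId, double orderValue);
}

// One concrete implementation; callers depend only on the interface,
// so this component can be maintained without affecting them.
class SimpleCreditCheck implements CreditCheck {
    private final double limit;

    SimpleCreditCheck(double limit) {
        this.limit = limit;
    }

    public boolean isCreditworthy(String customerId, double orderValue) {
        return orderValue <= limit;
    }
}

public class ReuseDemo {
    public static void main(String[] args) {
        CreditCheck check = new SimpleCreditCheck(5000.0);
        System.out.println(check.isCreditworthy("C042", 1200.0)); // prints true
        System.out.println(check.isCreditworthy("C042", 9000.0)); // prints false
    }
}
```

An order-entry system and a billing system could both hold a `CreditCheck` reference, which is where the promised maintenance saving comes from.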
But it requires a radically different approach to software design and, again, skills are short. So many IT managers could do without this further complication at this stage. And some IT departments are being put under further strain by the demand for "information liberation" - for example, making use of data already residing in various business systems. Some enterprises, particularly in the retail sector, have already built data warehouses comprising such data and are exploiting this for applications, such as identifying the preferences of individual customers and targeting them accordingly. But many enterprises still have vast stores of untapped information relating to human resources, projects and products that are difficult to access and exploit because they are poorly indexed and classified.
Making this information readily accessible is a huge task involving the creation of some form of metadata providing descriptions and pointers to the underlying documents and records. It is no coincidence that both Napp Pharmaceuticals and Austrian Airlines (see boxes) have created some form of metadata. Austrian Airlines is taking the concept further by incorporating software that automates some of the information flows between applications and the underlying databases.
Exposing applications to external Web access also amplifies some traditional ongoing system maintenance problems, such as keeping acceptable levels of performance in the face of growing user numbers and data volumes. With Internet access, activity becomes almost impossible to predict accurately, but it is essential to at least make some attempt to assess the likely effect on existing systems and take action to ensure that critical processes are isolated from such impact. There are tools to help with this assessment process, and one supplier of these, Advanced Data Systems (ADS), has the motto, "model, monitor, measure, manage". A fifth stage, scale-up, should perhaps be added after "model", but this would spoil the alliteration.
The company offers a single tool set that will, first of all, allow the impact of a planned application on existing processes to be modelled before it is deployed. At this stage, enhancements can be made where indicated to network connections, server capacity, and links to back-end systems, in an attempt to minimise the impact on other applications and to ensure users of the new system experience comfortable response times. Then, when the application is in place, a monitoring structure needs to be established so that relevant measurements of performance can be made. But all this is worthless unless the performance of the application is then actively managed using the measurements, so that pre-emptive action can be taken to avoid problems such as deterioration in response times.
Measurement tools such as those of ADS can also be used to ensure that critical applications are given priority over less essential tasks, according to specified rules that can vary by time of day. They can also help to ensure that external suppliers of applications or services, such as ISPs, meet their service level agreements. But it should be noted that unless a service level agreement is carefully specified with the customers' interests in mind, monitoring adherence to it will be of limited value. Some agreements are merely promises that a service will be available for a certain percentage of the time, without any guarantees of performance.
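The weakness of availability-only agreements can be shown with a small, hypothetical check (the figures and method are illustrative, not ADS's): a service can comfortably clear an uptime target while its response times are unacceptable, so both need monitoring.

```java
// Hypothetical SLA checks: an agreement that only promises a
// percentage of uptime says nothing about performance, so a
// response-time check is worth monitoring alongside it.
class SlaCheck {
    // True if measured uptime meets the agreed availability percentage.
    static boolean meetsAvailability(double uptimeHours, double totalHours,
                                     double targetPct) {
        return (uptimeHours / totalHours) * 100.0 >= targetPct;
    }

    // True if the measured average response time meets the agreed limit.
    static boolean meetsResponseTime(double avgResponseMs, double targetMs) {
        return avgResponseMs <= targetMs;
    }
}
```

A service up for 717 of 720 hours in a month clears a 99.5% availability promise, even if users are waiting eight seconds per page; only the second check would catch that.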
The challenges posed by the Internet, e-commerce and skills shortages, are encouraging some enterprises to seek refuge under the umbrella of outsourcing. But, as the spate of high-profile project failures involving outsourcing contracts catalogued in Computer Weekly over the past few years shows, this route offers no panacea. Some of the bad experiences have exposed the folly of delegating all responsibility to the outsourcing company and not retaining crucial in-house expertise in project management and strategic IT direction.
Perhaps, for this reason, the emerging application service provider (ASP) model is gaining appeal, with the promise of avoiding having to develop and maintain software, but retaining full control over the use of IT. But as Simon Denison-Smith, managing director of the offshore software development company Rave Technologies, points out, the ASP model is not yet ripe for business-critical applications. "I don't know any companies using ASP in anger yet," he says. "But in the near future the cost benefits of not having an IT manager, and being able to try applications out, rather than have people come in and install them, will make the ASP model very attractive."
Rave Technologies offers its clients some relief from spiralling software development costs by making use of programmers in other countries, particularly India, where salaries are lower. But now, says Denison-Smith, Rave is winning customers by providing access to skills that are scarce in the UK. And there are plenty of those.
Napp Pharmaceuticals' cure-all index
A substantial part of Napp's IT effort is focused on managing projects to develop new drugs or remedies, and these generate large numbers of documents - about 250,000 at the latest count. But these documents were poorly classified, making it difficult to access the information they contained about a specific project. This also made maintaining and updating documents expensive, because of the time taken to locate them. A further issue was that much of the company's knowledge relating to the different projects is not explicit - contained in electronic or paper documents - but implicit, held in the minds of its people.
So, as part of the task of creating indexes to documents, the company decided to create links between projects and the relevant people within the same metadata. This would make it possible to access not just written information, but also the people who could expand on it.
Napp Pharmaceuticals' IT director Roger James says, "In effect, we've actually put in yellow pages for people, which is usually regarded as a completely separate system from the one holding electronic documents."
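The combined index James describes could be sketched as a single metadata entry per project, holding both document pointers and contacts. This is a hypothetical data structure (the project name, path, and contact are invented for illustration), not Napp's actual schema.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of metadata that holds document pointers and the
// "yellow pages" of people in the same index entry, so one lookup
// returns both the written material and who can expand on it.
class ProjectIndex {
    record Entry(List<String> documents, List<String> contacts) {}

    private final Map<String, Entry> byProject = new HashMap<>();

    void add(String project, List<String> documents, List<String> contacts) {
        byProject.put(project, new Entry(documents, contacts));
    }

    // Unknown projects return an empty entry rather than null.
    Entry lookup(String project) {
        return byProject.getOrDefault(project, new Entry(List.of(), List.of()));
    }
}
```

Keeping people in the same structure as the documents is the point: the lookup that finds a project's files also finds who to ask about them.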
Apart from adding the links to people, a major challenge was in creating the indexes to the documents. "These existed as NT file shares and we needed to tag them up with metadata and bring them across," says James.
To speed up this process by automating some of the tasks, Napp developed some tools within MS Word with the assistance of systems integrator I-Group. "The process is partially automated in the sense that it gives you hints on the metadata, so that if, for example, a document is in a directory called project Fred, you can be fairly sure it belongs to project Fred," says James. In that way many documents can be collected automatically and indexed in a single manual action.
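The kind of hint James describes can be sketched as a path scan: if a file sits under a directory whose name starts with "project ", suggest that project as its metadata tag for a person to confirm. This is a hypothetical reconstruction of the idea, not Napp's Word tooling.

```java
import java.nio.file.Path;
import java.util.Optional;

// Hypothetical metadata hint: infer a likely project tag from the
// directory a document lives in, leaving confirmation to a human.
class MetadataHints {
    static Optional<String> projectHint(Path file) {
        // Walk the path's name elements looking for "project <name>".
        for (Path part : file) {
            String name = part.toString();
            if (name.toLowerCase().startsWith("project ")) {
                return Optional.of(name.substring("project ".length()));
            }
        }
        return Optional.empty();
    }
}
```

Run over a whole file share, hints like this let many documents be gathered and indexed in one confirming action, which is the partial automation the article describes.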
Advantages include much easier maintenance and a robust, secure way of accessing documents. "We can now manage access down to individual URLs, and we've also been able to include WTS [Windows Terminal Server], which has NT access control lists down to each program," says James.
Austrian Airlines heads for the Star with conversion programme
The aviation industry as a whole is suffering from severe turbulence as airlines jostle for position within new global code-sharing alliances, allowing passengers to make multi-flight journeys with a single ticket and check-in.
Austrian Airlines recently created a stir by changing partners midstream, joining the Star Alliance, whose members include British Midland, United Airlines and Lufthansa. The decision to join Star rather than a rival group, including Air France and Delta Airlines, was dictated partly by IT considerations. The company's IT department estimated that it would take a year to convert its systems for the latter alliance, compared with six months for the Lufthansa system adopted by Star.
So, the airline decided to join Star, but the estimated six months became an absolute deadline, because only then would the conversion be complete in time for the all-important year 2000 summer season. The season, which began on 26 March, is when the airline makes significantly more money, with both higher fares and passenger numbers, than at any other time of the year.
In the event, meeting this deadline was difficult and involved a combination of management and technical measures. "We tried to avoid thinking about the possibility that we wouldn't make it," says the airline's user project co-ordinator, Michael Stagl.
The project was split into separate units corresponding to each of the 10 main systems to be converted, each involving a combination of staff from Lufthansa Systems - the external IT contractor chosen because of its knowledge of the Lufthansa standards - and Austrian Airlines' own IT staff. This, says Stagl, ensured that end-user considerations were quickly incorporated into the ongoing development rather than requiring time-delaying revision later on.
Although the possibility of failure was never openly discussed, it was incorporated into the project plan. There was never much risk of all 10 systems not being ready, but one or two might have missed the deadline. To cater for this, the airline adopted a contingency plan whereby some of the new systems could run temporarily in parallel with the old. In the event, however, all 10 systems were converted on time.
The principal technical innovation of the project involved the creation of what the company calls a "shallow database", comprising a kind of intelligent metadata system that links the converted applications with the underlying databases, which were not substantially changed. This incorporated not just static links, but also time-related details associating specific applications with the data they need.
So, as well as replacing multiple paths between applications and databases with a single link between each one and the shallow database, this ensured that information, such as passenger booking records and flight schedules, was delivered exactly when needed to the correct process.
The airline plans to take this further by incorporating software within the shallow database to analyse and combine data from a variety of sources as required and specified by each application. In this way it will be possible to concentrate all the logic required for both extracting data and processing it into the form required within the shallow database, rather than having separate versions within each application.
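The shallow database idea described above could be sketched as a lookup layer mapping each application to its data sources, each link carrying a time window saying when that data should be delivered. This is a hypothetical sketch of the concept (the application and source names are invented), not Austrian Airlines' implementation.

```java
import java.time.LocalTime;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a "shallow database": a single link layer
// between applications and unchanged underlying databases, holding
// not just static links but time-related delivery details.
class ShallowDatabase {
    record Link(String dataSource, LocalTime from, LocalTime to) {}

    private final Map<String, List<Link>> links = new HashMap<>();

    void register(String app, String dataSource, LocalTime from, LocalTime to) {
        links.computeIfAbsent(app, k -> new ArrayList<>())
             .add(new Link(dataSource, from, to));
    }

    // Which data sources does this application need right now?
    List<String> sourcesFor(String app, LocalTime now) {
        List<String> out = new ArrayList<>();
        for (Link l : links.getOrDefault(app, List.of())) {
            if (!now.isBefore(l.from) && now.isBefore(l.to)) {
                out.add(l.dataSource);
            }
        }
        return out;
    }
}
```

Each application then keeps a single link to this layer instead of multiple paths to each database, which is the simplification the article describes; the planned next step of moving extraction and processing logic into the layer would extend the same lookup.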