Take a leaf from the books of three IT directors who have endured serious management and technology problems and successfully resolved them.
They are common to organisations of every size and sector: IT projects you wholeheartedly sponsored and sincerely believed would deliver clearly defined benefits, only to see them fail because of problems you could not foresee.
US research firm the Standish Group reports that 46% of IT projects are over budget and overdue, and 28% of projects undertaken fail altogether. Another of its studies cites grimmer statistics still: only 24% of IT projects undertaken this year by Fortune 500 companies will be completed successfully. The issue has been around for as long as businesses have been using information technology.
You may not be able to avoid the problems, but at least you can recognise them before they become fully-fledged disasters. To that end, Computer Weekly asked three senior IT executives to share their experiences of failed IT projects. Our purpose is not to focus on the failures, but to look at the lessons learned and help other IT directors avoid the same traps.
Key supplier goes bust with no prospect of it being taken over
The first Jean-Marc Fritsch knew about the collapse of hosting company KPNQwest was when he started receiving sales calls from competing suppliers. As head of IT at Manutan, one of Europe's leading distributors of office supplies, Fritsch was busy with a new e-commerce project and had not noticed what was happening to his hosting company.
"I'm responsible for global projects and one-off implementations, and I really didn't pay too much attention to what was going on in the telecoms industry," he confesses. "The first we knew was when KPNQwest announced the network would be shut down within two weeks."
The immediate task was to ensure that the company could continue to operate without the KPNQwest network. All regional IT managers were instructed to order ADSL lines to support vital applications, such as invoicing and order processing. However, the ADSL was a short-term solution - a leased line was the only way for remote offices to access the centralised enterprise resource planning application, which is located in Manutan's French head office.
The next step was to meet the chief executive and other directors to begin the process of approving a new supplier. Ordinarily this process would take about two months, but now it had to be completed in less than a week.
Major hosting companies including Equant and Deutsche Telekom were brought in to pitch for the business. However, the directors were concerned that, with the unstable telecoms market at the time, another supplier could go out of business, leaving Manutan in the same sticky situation.
"We needed a cheap connection fast, but we also didn't want to take the risk of having a single supplier in the future," he says. "We insisted on knowing exactly what partners and network access each potential supplier proposed using, and insisted that it would be written into any contract we signed."
Fritsch also met Vanco, a UK-based managed hosting company. "I was impressed by its chief information officer and also, because it used a virtual network, it could be up and running very quickly," he says. "We took a certain amount of risk because it was a new company and unknown outside the UK, but we had to balance that against the risk of not having a supplier in place before our deadline."
With the chief executive's approval, Manutan signed a contract with Vanco the same day. "We agreed that it would offer us access from a number of specified suppliers, but we opted to own all our hardware and software internally to mitigate the risk," says Fritsch.
Within two hours of the deal being signed, new hardware and software had been ordered and a team of project managers was put together to oversee the migration to the new Vanco network. "We brought in a consultant to work with Vanco to speed up the process. This was a worthwhile investment - in some offices the migration was completed within 48 hours," he says.
The complete migration eventually took 10 weeks, with the company's UK office the last to go online. "In the UK we were dependent on BT installing the last-mile access, and it was not particularly responsive," says Fritsch. "Because we were working with a small supplier, it was difficult to press BT to do things any faster."
Two years after the crisis, Fritsch is careful to keep an eye on the financial health of all of his suppliers and makes a point of reading the technology press. "You have to be aware of the technology choices, even if you are not planning to purchase anything. It is important because you never know when you will have to make a quick decision," he says.
Vital server crashes and the back-up plan does not work
For most IT executives in the UK, a server failure in Africa would not be a major problem. But for Martin Cooper, head of global operations at engineering group Arup, it was a catastrophe.
"We got a call in the early hours from the local IT manager, saying the server had failed, and they could not get it working again," he says.
With the server down, Arup's consultants across Africa had no access to e-mail, the corporate intranet or the design applications they use to help clients with major building projects, including vital infrastructure such as airports and bridges. "We had 120 people working in the region, and once you start to add up their hourly rate, the cost of downtime quickly becomes appalling," Cooper says.
Normally, local IT staff would completely reboot the server and restore any lost data using a back-up. However, it quickly emerged that the African IT department did not have another copy of the server operating system, and the documentation had been mislaid. "They were not even sure when the data had been last backed up, and their back-up had never been tested," says Cooper. "That is the kind of thing you really don't want to hear when you are thousands of miles away dealing with staff whose first language isn't English."
Cooper created a think-tank made up of IT staff from the UK-based corporate IT department and told it to find a solution to the problem that would not compromise data integrity. Then he left the team to it. "The IT director has a vital role in this kind of situation, but that role isn't to actually solve the problem," he says.
Instead, Cooper's priority was to explain to his bosses what was going on. Within a few hours of the failure, he met the company's directors to explain what had happened, and what was being done to solve the problem. "The best approach is to get in with a pre-emptive strike, before they notice the problem," he says. "Otherwise, they are running around screaming at the IT guys to do their jobs and get the system fixed. The worst thing you can do is leave them out of the loop."
Following the meeting with the directors, Cooper checked in with the IT task force, which had come up with a plan to restore the African system. The best advice was to boot the server to a point that enabled the local IT manager to install a second, new operating system. The machine could then be booted up and the configuration settings changed to help staff identify whether the problem was hardware- or software-related.
The process helped to prevent well-meaning IT staff from making the situation worse. "Bitter experience has taught me that if a server fails, someone invariably tries to fix it and makes it worse," he says.
The think-tank approach provides an opportunity for experts to share their expertise and, hopefully, avoid damaging action. "The point is someone might have an idea that works for hardware, but not the software, or someone is stronger on disaster recovery but doesn't know the operating system. That is where you really need to make the most of all your internal skills," says Cooper.
The successful installation of the second operating system revealed that the underlying problem was a faulty RAID disc controller, which was removed and replaced. The server was back up and running within 48 hours, although Arup let the server limp along for the first day, then performed a full system maintenance check during the night to make sure there were no other problems.
Servers will inevitably fail, but Arup's IT managers are now clearly instructed on how to prepare and test back-ups in case of failure. The global IT department has also reconfigured the network to remove all single points of failure. "Next time a server fails, it is a case of who cares?" says Cooper.
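The discipline Cooper now insists on - preparing back-ups and actually testing that they restore - can be scripted rather than left to memory. The sketch below is illustrative only, with hypothetical paths; it builds a small demonstration data set, archives it, then proves the archive restores cleanly by comparing the restored copy against the original. In production you would point the same restore-and-compare step at your real backup archive on a schedule.

```shell
#!/bin/sh
# Back-up restore test: prove the archive can be restored and matches the source.
# All paths here are hypothetical examples - substitute your own.
set -eu

SRC=/tmp/demo-data            # data being protected
BACKUP=/tmp/demo-backup.tar   # the backup archive
SCRATCH=/tmp/restore-test     # throwaway restore area

# --- demo set-up: create sample data and back it up ---
# (in a real environment the data and archive already exist)
rm -rf "$SRC" "$SCRATCH" "$BACKUP"
mkdir -p "$SRC"
echo "invoice 001" > "$SRC/invoices.txt"
tar -cf "$BACKUP" -C "$(dirname "$SRC")" "$(basename "$SRC")"

# --- the actual test: restore into scratch, then diff against the source ---
mkdir -p "$SCRATCH"
tar -xf "$BACKUP" -C "$SCRATCH"
if diff -r "$SRC" "$SCRATCH/$(basename "$SRC")" >/dev/null; then
    echo "backup restore test: OK"
else
    echo "backup restore test: FAILED" >&2
    exit 1
fi
```

The point of restoring into a scratch directory is that the test never touches live data, so it can run unattended - exactly the kind of routine check whose absence left Arup's African office without a usable back-up.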
Enterprise-wide software roll-out goes pear-shaped
When the Scottish Agricultural College first began rolling out SAP software in 1997, it hoped to streamline its supply chain management, make authorisation of payments faster and automate purchase orders and invoicing - and make substantial cost savings. Seven years later the project is almost complete, and the college has learned a great deal about implementing enterprise software. "For a while, we thought we had bought the golden chicken that didn't lay any eggs," says Parvase Majeed, business systems director at the college.
SAC's problems began when the college hired consultants to help to roll out SAP's materials management, HR and payroll modules. However, problems quickly became apparent. Majeed says, "We are a public sector organisation and the consultants were not experienced in working in our sector. Also we had not contracted with SAP for the level of support we needed."
With hindsight, the organisation deployed the software too quickly and should have spent more time planning the changes to underlying business processes. For example, SAC was using a number of finance management systems across six campus sites. It would have made more sense to streamline the accounting processes before deploying the new software. However, the college rolled out SAP to all of the sites first.
The team was also hit by a number of problems ahead of the go-live date in July 1998. It was discovered that the internal procurement system used by SAC was not compatible with SAP, and so needed further development. The servers running the SAP system were beginning to buckle under the pressure. And finally, most of the consultants working on the project left, leaving the college with very few staff with SAP skills. "There literally were not enough managers with the skills to keep the systems going," says Majeed.
At this point, Majeed sat down with the remaining IT staff to work out a rescue plan for the project. The team identified all the areas that needed attention, and drew up a recovery plan consisting of 170 application enhancements. Majeed also worked with the college's chief financial officer to renegotiate its support contract with SAP. The college appointed a new team of consultants, from Absoft, to help clean up the system and train the internal IT team to support SAP.
The key lesson that Majeed has learned is that organisations must take the time to analyse processes and existing systems before deploying anything new. "It's rather like an iceberg when you roll out something like this - 90% of the work takes place under water. It could be months before you even take the software out of the wrapper," he says.
Today, the SAC is using its experiences to help other colleges across the UK with SAP implementations. IT managers from the college have worked with the universities of Newcastle, Leeds and Warwick on SAP projects.
Long-term lessons from IT problems
When a supplier goes bust, your first concern is to maintain the service to your end-users by getting a new supplier in as quickly as possible. Despite the pressure to sign up a supplier, insist on knowing who it will be teaming up with and what they will be using. Put this information in the contract and keep an eye on the financial health of all your suppliers.
Something as mundane as a server failure can highlight major shortfalls in your recovery plans. It is only when the server has to be rebooted that you discover the back-up plans have been filed away in drawers, the system software discs lost and the data tapes blank. Issue clear instructions to all IT teams to prepare and test their back-ups, and if possible reconfigure the network to remove all single points of failure.
The lesson most organisations learn when implementing enterprise-wide application software is that any business process changes must be thoroughly planned before deployment.