Feature

Year 2000 contingency planning

With the Year 2000 approaching and a computer catastrophe a real possibility, many companies and organisations should consider a backup system as a practical part of their contingency plans

As awareness of the Year 2000 problem grows and companies commit increasing amounts of resources, it is becoming apparent that the confidence level in Year 2000 solutions is declining. A Meta Group study reveals the extent of the problem. Surveying more than 1,600 companies, Meta Group found that 78 per cent of companies are missing Year 2000 project milestones and 97 per cent have yet to do any contingency planning.

Confidence is further diminished when attention turns to the often-neglected PC side of the Year 2000 issue. Reviewing the myths and realities of the Year 2000 problem for PCs, Stephen Levin at GartnerGroup asserts: "The idea that PC-based systems (desktop and mobile) will not be affected by the arrival of the Year 2000 is a common but dangerous fallacy. The reason for this is that the PC is frequently used as the primary support tool for business planning and decision support processes. As a result, any errors resulting from incorrect data supplied by noncompliant systems may represent a significant risk to the business."

Problems on the desktop include inaccessible data due to system problems upstream, localised hardware problems and file-based issues. Levin, in another GartnerGroup analysis, estimates that, worldwide, 55 million to 550 million spreadsheet applications could have Year 2000 date-related errors, with five per cent of those representing a significant business risk. In fact, he projects that more than 50 per cent of user-developed applications that process dates will experience Year 2000 problems by the end of 1998.
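To see how such errors arise, consider the classic two-digit-year trap. The sketch below is a hypothetical illustration, not any specific vendor's code: an interval calculation built on two-digit years goes wrong at the century boundary, and a "windowing" repair puts it right.

```python
# Hypothetical illustration of the classic two-digit-year error.
# Many spreadsheets and user-developed applications store years as
# two digits, so "00" is read as 1900 rather than 2000.

def years_between(start_yy: int, end_yy: int) -> int:
    """Naive interval calculation using two-digit years."""
    return end_yy - start_yy

# A policy issued in 1999 ("99") and expiring in 2000 ("00"):
print(years_between(99, 0))   # -99, not the expected 1

# A common repair is "windowing": pivot two-digit years into the
# correct century before doing any arithmetic.
def expand_year(yy: int, pivot: int = 30) -> int:
    """Interpret years below the pivot as 20xx, the rest as 19xx."""
    return 2000 + yy if yy < pivot else 1900 + yy

print(expand_year(0) - expand_year(99))   # 1, as expected
```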

Levin notes that organisations that have begun to assess the scope of their Year 2000 risks have been shocked at the number of user-developed applications (spreadsheet macros, desktop database applications and basic scripts) and custom applications that have arisen within the business. GartnerGroup estimates that, on average, there are one to 10 user-developed applications per user. Most will pose little risk to the company, but those that do are difficult to identify, let alone repair, in a cost-effective way.

User-developed applications may not be the only risk for desktops and laptops. Microsoft earlier this year announced the Year 2000 status of its products: even popular and widespread products such as Windows 95, Excel 7.0 and Word 7.0 are listed as having at least minor Year 2000 issues. A recent cover story about the Year 2000 problem found that some PCs manufactured as recently as 1997 have a Year 2000-noncompliant BIOS. The software side is also disconcerting, says Levin, who claims: "PC software makers have sometimes misrepresented the true state of Year 2000 compliance in their products. They have simply written off a wide range of recent (if not current) products that are still in wide use."

To illustrate the breadth of the PC problem, the magazine lays out a typical set-up. A company with 500 PCs, six commercial applications and 35 internally developed applications running on 20 servers, with 12 switches, bridges and routers, would have "a very tough but manageable assessment-and-repair project." It also notes, however, that if each of those PCs produced an average of 10 spreadsheets and five databases, each with 50 date entries, "the fix spirals out of control".
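The arithmetic behind that warning is easy to reproduce. A minimal calculation, using only the figures quoted above:

```python
# Scale of the desktop problem, using the magazine's own figures.
pcs = 500
spreadsheets_per_pc = 10
databases_per_pc = 5
date_entries_per_file = 50

files = pcs * (spreadsheets_per_pc + databases_per_pc)
date_entries = files * date_entries_per_file

print(f"{files:,} user-developed files")          # 7,500 files
print(f"{date_entries:,} date entries to check")  # 375,000 date entries
```

Three-quarters of a million is beyond the reach of manual inspection, which is why the repair "spirals out of control".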

Contingency planning

So is this doomsday, or is there hope? The consensus solution is to put more effort and resources into, or simply to begin, contingency planning. A GartnerGroup research note is explicit: "Contingency planning is essential to Year 2000 risk management." William Ulrich, writing in Computerworld, reports that the percentage of companies performing contingency planning for Year 2000 increased from three per cent to 72 per cent in four months this year. "That means companies now realise their best efforts can't eliminate all Year 2000 problems," he writes.

It is clear from the reporting that even the most thorough Year 2000 evaluation and repair is expected to miss some things. Expectations are high that most of the software-based problems will be resolved by the deadline. The wildcard, though, is embedded systems. These are "the billions of programmed microprocessors hidden inside nearly every piece of sophisticated equipment in use today." These systems contain software code that usually is inaccessible, poorly documented and not inventoried. Many users will discover the existence of these microprocessors and their individual Year 2000 problems on January 1, 2000.

Additionally, nearly all Year 2000 solutions to date have focused on mainframes and servers, with little effort or resources devoted to the desktop. Yet many IT managers are beginning to realise that there may be business-critical data on desktops and laptops. GartnerGroup reports that spreadsheets "represent the highest risk area of user-developed applications."

Contingency planning obviously must be a component of any Year 2000 solution. In addition to reporting the growing realisation of the need for contingency planning, Ulrich also notes that "recent code audits uncovered dozens of fatal Year 2000 errors in systems that had already undergone remediation and testing." He states that those findings, combined with concerns about supply-chain continuity and embedded-system reliability, provide management with "more than enough justification for creating Year 2000 contingency plans."

Ignore the desktop at your own risk

Debate continues on whether it is practical to extend Year 2000 solutions to the desktop. The massive variation in configurations and software needs, and the colossal number of files requiring examination, make the task prohibitive. Rob Enderle, an analyst at Giga Information Group, says: "In many, many cases, you're talking about thousands of desktops, servers, Internet servers and applications. This will require migrations that haven't been planned. There isn't enough time anymore and there's a huge shortage of technical people to help companies get this work done. Now it's going to be far more expensive, a far higher risk of mistakes, and a far higher risk of not being completed in time."

The answer for many IT managers apparently is to let the desktop users deal with the problem on their own. With limited resources and undetermined risk in the organisation, the belief is that the company can weather the loss of data on these machines rather than draw valuable assets away from solving the mainframe and server issues. Given GartnerGroup's estimate of business-critical data on desktops, this may be a highly risky gamble.

There also remains the question of what to do with laptops. While desktop users may be required to store their data on the server, where it is protected by backup systems, this is not a practical option for the laptop user. Equally important is the status of the typical laptop user: most are sales staff or executives, each of whom maintains voluminous data on their machines that can be of crucial importance to the company. Even if the losses are not critical, there are great political risks in classifying data losses from laptops as acceptable.

The Year 2000 safety net tool

While Year 2000 contingency planning has many facets and tools, one often overlooked element is backup. Backup and recovery software obviously is useful for maintaining the integrity of data when crossing over from 1999 to 2000, but it also can be an important tool in testing a company's Year 2000 solution. Until recently, the only viable option in such software was at the server level, but a new product offers the hope of protecting those business-critical spreadsheets and databases on the desktops and laptops of the organisation.

The practical benefits of backup are many. An organisation can maintain a full system backup of all networked PCs, both desktop and laptop. Testing Year 2000 PC solutions, supporting triage and prioritisation, and providing a complete backup of the desktop are three of the most important ways backup solutions fit into an organisation's Year 2000 plans.

Testing the Year 2000 PC solutions

One of the most trying, difficult and important aspects of an organisation's Year 2000 solution is testing. The risk of catastrophic failure on January 1, 2000 requires extensive, in-depth testing of selected solutions. Yet the absence of historical data for such a unique event makes it difficult to predict possible problems. Any serious solution must be tested against real-life systems.
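One practical form of such testing is a boundary-date checklist. The sketch below is a minimal, hypothetical example that uses Python's own date arithmetic as a stand-in for the application under test; a real plan would drive the organisation's own systems through the same dates.

```python
from datetime import date, timedelta

# Boundary assertions every Year 2000 test plan should cover.
# A real test would exercise the organisation's date-handling code
# at each of these points, not the standard library.
assert date(2000, 1, 1) - date(1999, 12, 31) == timedelta(days=1)  # century rollover
assert date(2000, 3, 1) - date(2000, 2, 28) == timedelta(days=2)   # 2000 is a leap year
assert date(2001, 3, 1) - date(2001, 2, 28) == timedelta(days=1)   # 2001 is not
assert (date(1999, 9, 9) + timedelta(days=1)).year == 1999         # "9/9/99" is not a sentinel

print("boundary checks passed")
```

The leap-day check matters because 2000, unlike most century years, is a leap year (divisible by 400), a rule some date routines get wrong.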

Triage is a term gaining currency in discussions about the Year 2000 problem. Familiar as a medical term, triage refers to sorting patients and allocating treatment to maximise the number of survivors. With regard to Year 2000, the term refers to prioritising affected systems, applications and files based on their criticality to the business, as sketched below.
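In software terms the exercise is simple to express. A minimal sketch, with illustrative (hypothetical) assets and criticality scores, ranks affected items so that repair effort goes to the most business-critical first:

```python
# Hypothetical triage list: (asset, business criticality 1-10, affected?)
inventory = [
    ("payroll spreadsheet",      9, True),
    ("holiday rota database",    2, True),
    ("sales forecast model",     8, True),
    ("canteen menu template",    1, False),
    ("customer billing scripts", 10, True),
]

# Repair queue: affected assets only, most critical at the top.
queue = sorted(
    (item for item in inventory if item[2]),
    key=lambda item: item[1],
    reverse=True,
)

for name, criticality, _ in queue:
    print(f"{criticality:2d}  {name}")
```

The hard part, of course, is not the sorting but agreeing on the scores: which spreadsheets and databases the business genuinely cannot live without.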

Is such a process necessary? Evidence is mounting that not only is it necessary, but the continued survival of the organisation may depend on it.

A typical 20th century day for the IT department might find staff working on various projects and, of course, standing by for the inevitable problems that arise on client PCs. On a typical day, most IT departments respond to trouble tickets for three to eight per cent of the organisation's PCs. While bothersome, such a rate is well within the capacity of most departments.

January 1, 2000 is certain to be much more burdensome. IT departments can expect crises large and small to compete for their limited resources. Some departments may experience a trouble-ticket rate from client PCs as high as 40 per cent. Even the best-staffed and best-qualified IT department in the world will be unable to resolve all the troubles quickly. Yet, if the organisation is to function through the days (and maybe weeks or months) it will take to fix these problems, some solution is necessary.
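The difference in load is stark. A minimal calculation, reusing the 500-PC organisation from the earlier example and the ticket rates quoted above:

```python
pcs = 500

# Typical day: three to eight per cent of PCs raise a trouble ticket.
normal_low, normal_high = int(pcs * 0.03), int(pcs * 0.08)

# January 1, 2000: some departments may see rates as high as 40 per cent.
y2k_peak = int(pcs * 0.40)

print(f"normal day: {normal_low}-{normal_high} tickets")  # 15-40 tickets
print(f"Y2K peak:   {y2k_peak} tickets")                  # 200 tickets
```

A department sized for a few dozen daily incidents cannot clear two hundred at once; the backlog has to be managed, which is where triage comes in.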

GartnerGroup makes the most forceful argument for triage as part of Year 2000 planning. In the war against the Year 2000 problem, most business processes will be injured. The extent of their injuries must be determined immediately and the most critically wounded must be treated first. Failure to triage and prioritise Year 2000 efforts, and to perform emergency surgery where necessary, may result in significant long-term injury to the business itself.

Using backup and disaster recovery software like Replica NDM can make the process of prioritisation simpler. Low-administration recovery features allow users to recover uncorrupted versions of their files easily, freeing IT administrators to focus their energies on more serious cases. Also, once an offending application is repaired or replaced, the user can restore any files that were corrupted, again minimising the demands on the certain-to-be-overburdened IT staff.
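The recovery pattern itself is straightforward. The sketch below is a generic, hypothetical illustration of version-based recovery, not Replica NDM's actual interface: given a set of dated backup versions of a file, restore the most recent one saved before the corruption is believed to have begun.

```python
from datetime import date

# Hypothetical version history for one user's file: (backup date, path).
versions = [
    (date(1999, 12, 20), "backups/forecast.xls.v1"),
    (date(1999, 12, 28), "backups/forecast.xls.v2"),
    (date(2000, 1, 2),   "backups/forecast.xls.v3"),  # written after the rollover
]

def last_good_version(versions, corrupted_since):
    """Most recent backup taken before corruption is believed to have begun."""
    good = [v for v in versions if v[0] < corrupted_since]
    return max(good)[1] if good else None

# The user suspects anything written on or after January 1, 2000.
print(last_good_version(versions, date(2000, 1, 1)))
# backups/forecast.xls.v2
```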

If an important user's system suffers a complete failure, the network administrator can keep that user operational by providing a new machine known to be Year 2000 compliant. Using Replica NDM, the administrator can restore all the user's files to the new machine, allowing minimal disruption of business. Likewise, as other users' systems are repaired or replaced, complete copies of their files can be restored.

Doomsday protection

In the worst-case scenario, an organisation may suffer complete and catastrophic failure across the business. Limited service can be restored by using functioning machines and retrieving needed files from the data vault, while intensive work continues on restoring the IT infrastructure. In addition, once the system is restored, IT staff are assured that they have a complete, uncorrupted set of files and applications for every user.

Conclusion

It is no wonder that so many organisations are turning to contingency planning to cover them on the PC front. Essential to the success of their efforts is software that provides a safety net: one that allows uninhibited testing of Year 2000 solutions, that affords a means of maintaining basic business operations through extensive but isolated failures and, most importantly, that reassures IT staff that a widespread system failure will not take with it the data vital to the organisation.

With Replica NDM, Year 2000 teams have their safety-net tool. A full backup of networked PCs protects an organisation's crucial data. Versioning maintains the integrity of data even in the face of creeping problems, allowing the user to return to data backed up before the event that caused the difficulty. With negligible network traffic and administration, neither system performance nor IT staff time need be sacrificed to rig such a robust safety net.

Compiled by Paul Phillips

© Dataquest



This was first published in June 1999