Java and Perl applications written just three years ago have acquired "legacy" attributes: the original authors are gone, the applications are poorly documented and poorly understood, and although the business relies on them, it fears replacing them because of the unknown effects any change may cause.
Have the first legacy Java applications really arrived? Perhaps, but a deeper analysis of the complaints reveals the truth: the wholesale loss of application knowledge creates most of the maintenance issues, whether the applications are written in Java, Perl, or Cobol. IT organisations should not dismiss these applications as useless legacy artifacts destined for rip-and-replace; rather, they should redeem the value locked inside them.
The term "legacy" has historically been reserved for Cobol applications running on mainframe and midrange platforms such as the IBM iSeries/AS400. People tend to apply the term "legacy" pejoratively, regardless of the value that legacy applications represent and the work they perform. However, recently developed Java and Perl applications now share many legacy attributes:
- The people who know the applications are no longer available. The people who initially wrote the applications have moved on to other assignments within or outside the organisation.
- The remaining staff do not understand how the applications work. The applications are poorly documented, and none of the current staff understand why they were written as they were or why things operate as they do.
- The business relies on the applications but does not dare change them. Some portion of the business relies on the application functionality. Business needs dictate that the systems change, but people hesitate because a change to one area of a system may create unintended and adverse effects in another.
- No one wants to work on them. Because of all the issues they present, programmers dread working on these applications.
Although it may be tempting to dismiss applications with these attributes and develop a plan to retire them, the truth is that every application is destined for the same fate unless the maintenance cycle changes.
The true problem is that two to three years after source code hits production, much of the knowledge about it has dispersed. People move on or forget, and the systems change while the documentation does not. In the time crush to pump out more new features, much of the care that originally went into assembling the application falls away.
As the availability of experienced people, accurate documentation, and application knowledge wanes, the cost to maintain the application escalates. Lost application knowledge accounts for tremendous resource waste. In fact, much of the noise about the dire shortage of workers with legacy skills is in reality describing an application knowledge shortage.
People with esoteric mainframe skills, such as performance tuning, VSAM cluster tuning, assembler language and 4GL knowledge, are in short supply. However, Cobol programmers are in far better supply than the pundits would have you believe.
Lost application knowledge is costly. Research published by Capers Jones of Software Productivity Research suggests that a lack of application knowledge can have a seriously adverse impact on programmer productivity.
According to Jones' research, just a few months of experience working with an application conveys enough application knowledge to make programmers 34% more productive than average, while programmers with no experience are on average 40% less productive. Clearly, a move from little or no knowledge to some knowledge represents a large productivity swing.
The answer to the chronic problem of lost application knowledge in large IT organisations with hundreds of applications lies in the newly available application portfolio management (APM) tools. Although application mining tools can provide knowledge about a single application, they present snapshot-in-time views, and they have no long-term strategic or analytic value of their own.
In contrast, APM tools reclaim lost knowledge by reading all of the source code in all applications and then presenting the structure of applications graphically. The tools provide impact analysis and drill-down capabilities across application boundaries, all the way down to the source code level. Widespread use of such a tool largely mitigates the loss of application knowledge and institutionalises its storage so that the loss does not recur.
Building on the knowledge store, the tools also calculate comparative application complexity using industry standard measures, and open the door to other metrics collections that can form the backbone of an overall measurement and improvement programme.
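The best-known of these industry standard measures is McCabe cyclomatic complexity, which counts the independent paths through a piece of code. As a rough illustration of the kind of per-module figure an APM tool might report, the sketch below approximates it by counting decision points in source text; the keyword list and regular expressions are simplifying assumptions, not a production parser.

```python
import re

# Tokens that typically add a decision point in C-family languages.
# This list is an illustrative assumption, not an exhaustive one.
DECISION_KEYWORDS = ["if", "for", "while", "case", "catch", "&&", "||"]

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: decision points + 1."""
    count = 0
    for kw in DECISION_KEYWORDS:
        if kw.isalpha():
            # Whole-word match so "iffy" does not count as "if".
            count += len(re.findall(r"\b" + kw + r"\b", source))
        else:
            count += source.count(kw)
    return count + 1

snippet = """
if (a > 0 && b > 0) {
    for (int i = 0; i < n; i++) {
        process(i);
    }
}
"""
print(cyclomatic_complexity(snippet))  # 4: "if", "&&", "for", plus 1
```

A tool would compute this per module across the whole portfolio, which is what makes the figures comparative: a module scoring far above its peers is a candidate for closer inspection.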
With annual savings reported in the range of 10% to 30% of the IT budget, these tools are worth serious consideration by any firm with a large percentage of custom-written source code.
Phil Murphy is principal analyst at Forrester Research