Your Shout! On new rules to force disclosure of IT failures

On new rules to force disclosure of IT failures

In response to news that governance regulations could force companies to reveal the details of failed IT projects (computerweekly.com, 28 May)

It is a shame that recent articles have taken a negative view of the new Operating and Financial Review standards (OFRs), which will require organisations to be more open about how funds are spent, including IT investments.

If you take a "glass-half-empty" approach, then, yes, the OFRs could be used as a stick with which to beat IT departments. However, the viewpoint I prefer is that IT departments have to turn this increased transparency into an opportunity.

Although it is a double-edged sword, organisations should use the increased transparency created by the new rules as a platform to demonstrate the business value being created from IT investments in service and cost efficiency improvements. This could also move IT up the boardroom agenda.

Complying with regulations such as Basel 2 and the OFRs may mean additional work for organisations. To ensure the success of any IT project, and therefore the satisfaction of all the key stakeholders, it is essential to demand more of IT suppliers so that compliance becomes a painless process. A simple place to start is ensuring that the supplier fully understands the business and its objectives.

We in the IT industry worry that the broader business environment does not fully comprehend the business role played by IT. Non-compliance is not an option, and these new measures give IT the chance to step out of the shadows and seek commendation.

Steve Norton
Managing director, financial services division, Fujitsu Services


On complying with the Freedom of Info Act

In response to news that many public sector systems will not be ready to meet the requirements of the Freedom of Information Act (Computer Weekly, 1 June)

I read with interest your discussion about the Freedom of Information Act and document management. All too often information and document management is overlooked by the media. What is even more disconcerting is that this neglect of document management is a reflection of the attitude of many IT managers/directors.

As your reporters rightly pointed out, many public sector organisations have thus far failed to implement the technology required to meet their legal obligations under the Freedom of Information Act. Organisations in the public sector often have an extremely low awareness of what needs to be done.

Many IT directors do not have a complete grasp of what information and documentation management systems they currently have or need, let alone how they measure up to the implications of the Freedom of Information Act.

And unfortunately the public sector is not alone. Research by Macro 4 shows that eight out of 10 IT directors do not regard printing and document management as an important strategic concern. Put simply, it is not seen as "sexy" enough to warrant attention, and is consequently sidelined. Even in the financial services industry, where banks are writhing under the pressures of the FSA and Basel 2, document management is way down the priority list.

As a result, the failure to adopt effective document management systems is costing organisations millions of pounds a year - estimated to be in the region of £400m for a FTSE 100 company. It is frightening to think how much public sector organisations could be wasting.

It is time to wake up to the importance of document management, or the potential damage to reputation, combined with the cost of ineffective systems, could make for some salacious media coverage.

Mike Wenham
Director of operations, Macro 4

System administrators must share worm blame

In your article on the military report into the Ministry of Defence's Lovgate worm infection in February 2003, blame is clearly laid at the feet of a single "careless" user (Computer Weekly, 8 June). Although the user may have been careless, he or she surely does not carry all the blame.

My own organisation was not hit by this worm, nor were thousands of other companies. My home PC was not affected either. In fact, I have yet to suffer a virus infection either at home or at work. The last infection at work did not affect my PC, although I still suffered the consequences of lost time because our network was trashed for two days.

The reason my PC is safe from attack is not down to luck. It is simply a commonsense approach to unexpected mail combined with up-to-date anti-virus software. At the time of our last company infection it was not policy to give users the permissions to apply their own patch and anti-virus updates. I had additional permissions and used them to keep my PC safe.

For the majority of users, all updates were managed centrally with weekly anti-virus updates and occasional patch updates. In my view, this was completely inadequate to afford protection to a network. I am pleased to note that this policy has now been changed.

It is extremely rare for a mass worm or virus attack to occur with no prior warning. Patches and updates are usually posted and can be applied in time to prevent mass infection. The blame must therefore be shared by the system administrators for building vulnerable systems. It would be good if this were pointed out when reporting events such as the Lovgate attack on the MoD.

Richard Wilkinson
Systems analyst


Chinook illustrates failings of blame culture

Your well-researched coverage of the Chinook controversy (Computer Weekly, 1 June) draws attention not only to an enduring miscarriage of justice but also to the general issue of critical software authority and validation.

For aircraft, this chimes well with the work by the Civil Aviation Authority, with which I was personally associated, on so-called "pilot error". It was a basic tenet of that work that our professional pilots - both civil and military - are among the most rigorously trained, checked and supervised operators in any of the disciplines in which lives are routinely at risk. We have therefore to accept full responsibility for the tasks we ask them to perform.

We cannot go on recording 70% of accidents as attributable to pilot error (let alone negligence) without admitting that most other human beings, ourselves included, would have made the same errors, if indeed errors were made.

The key question is: what induced the error? Not: who can we blame? And most members of the public know that, to answer it, we need data and voice recorders to be fitted.

Ron Holley
Former director of helicopter projects, procurement executive, Ministry of Defence


CV writing course will pay off in interviews

Following the recent discussion on CVs and recruitment in your letters column, I would like to point out that there are steps you can take to ensure that a recruitment consultant will make a connection between your CV and a client's requirements.

First you should assume that unless a computer program scores your CV highly, it will not be considered any further. These programs are actually quite crude; I found with my employer's internal resource finder that I had to list my primary computer language many times to be selected as a candidate.
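
By way of illustration only, the crude matching described above often amounts to little more than keyword counting. The sketch below is a hypothetical Python example, not the internal resource finder the letter refers to; the function name, skill list and sample CVs are invented.

# Naive keyword-frequency CV scorer: a hypothetical sketch, not any
# real recruitment system's actual logic.
import re

def score_cv(cv_text, required_skills):
    """Count how often each required skill appears in the CV text."""
    words = re.findall(r"[a-z0-9+#]+", cv_text.lower())
    return sum(words.count(skill.lower()) for skill in required_skills)

# A candidate who names a language once scores lower than one who
# repeats it in every job entry, even if their experience is identical.
cv_once = "Senior developer. Primary language: Java."
cv_many = "Java developer. Built Java services for Java-based clients."
print(score_cv(cv_once, ["java"]))  # 1
print(score_cv(cv_many, ["java"]))  # 3

A frequency count like this has no notion of context, which is why mentioning a skill once may satisfy a human reader but not the machine.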

To maximise your chances with both machine and human, you could take a course in CV writing, which will be cheaper and more effective than any technical certification. My wife also found that she was offered many more interviews this way.

John Mason

Disasters should not take us by surprise

It is alarming that analysts have to issue warnings about business continuity preparations ahead of potential petrol strikes (Computer Weekly, 8 June).

This month alone we have seen the Nats system crash and the threat of rail and tube strikes. But despite the frequency of these events, how many IT departments are prepared? And of those that do have business continuity plans, how many are up-to-date and fully tested?

At a time when political and economic events are driving risk and continuity strategies, it is vital that headlines are watched closely, so that plans to keep operations up and running can be made accordingly. But it seems we have yet to learn from the past in order to prepare for the future.

Surveys carried out after similar events in the past confirm that lessons have not been learned. How many more serious interruptions or disasters must we see before UK business concedes that a comprehensive and up-to-date business continuity plan is a must-have? The first few years of the 21st century have shown that continuity processes need to be part of a company's DNA.

Dennis Thomas
Director of business continuity, Synstar


Nats crash highlights need for proper testing

National Air Traffic Services' computer failure (Computer Weekly, 8 June) has highlighted the need for proper testing to ensure IT systems run smoothly.

Software upgrades on an operational system are a fact of life and, as Nats has discovered, they increase the risk of a system crashing.

Testing on a simulated environment mitigates some of the risk of operational upgrades, but ensuring your operational system is ready to continue delivering a live service once testing is complete is paramount.

Companies of every size and across all sectors must research the most suitable options as soon as they realise the need to test operational changes. But crucially, they must assess each stage of a test and rigorously analyse every outcome, so that no weaknesses remain unchecked.
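
As an illustration of the kind of check that might sit at the end of such a process (a hypothetical Python sketch only, with invented URLs and timeouts, and in no way Nats' actual procedure), a minimal post-upgrade smoke test that records and reports every outcome could look like this:

# Minimal post-upgrade smoke test: a hypothetical sketch.
# The endpoints and the five-second timeout are illustrative assumptions.
import time
import urllib.request

CHECKS = [
    ("service responds", "http://localhost:8080/health"),
    ("data feed alive", "http://localhost:8080/feed/status"),
]

def run_smoke_tests(checks, timeout=5):
    """Run each check and record its outcome and elapsed time."""
    results = []
    for name, url in checks:
        start = time.time()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                passed = response.status == 200
        except OSError:
            passed = False
        results.append((name, passed, time.time() - start))
    return results

if __name__ == "__main__":
    outcomes = run_smoke_tests(CHECKS)
    for name, passed, elapsed in outcomes:
        print(f"{name}: {'PASS' if passed else 'FAIL'} ({elapsed:.2f}s)")
    # Only return the upgraded system to live service if every check passed.
    print("ready for live service" if all(p for _, p, _ in outcomes) else "roll back")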

James Alevizos
Chief quality officer, Vizuri

This was last published in June 2004
