Following the news that the NHS National Programme for IT has been dropped, I have been posting some of the views recently provided to me for an unrelated feature I am working on.
The feature, which will appear in two parts on Computerweekly.com, asks the question: why do large IT projects fail? Here is the latest instalment.
Here are the other parts already published: part 1 Brian Randell, part 2 Anthony Finkelstein, part 3 Yann L’Huillier, part 4 James Martin, part 5 Philip Virgo, part 6 Tony Collins, part 7 Ilan Oshri, part 8 Robert Morgan, part 9 Sam Kingston, part 10 Peter Brudenal, part 11 Mark Lewis, part 12 John Worthy and part 13 Stuart Drew.
Today we have the view of Milan Gupta, chief architect at Barclays bank.
He says: “Large IT projects fail for the simple reason that business moves a lot faster than large projects ever can. It’s a competitive world out there and to be on the cutting edge, agility is everything.
Gone are the five-year priority plans; today, businesses are living in the now, adapting to market dynamics and ecosystem changes daily. This means that, by definition, large IT projects are doomed to failure, because by the time they reach completion the business has changed.
You can mitigate this risk by delivering projects in smaller chunks – typically 90 days from user story specification to production. Other core principles that accelerate project execution include: balanced business and technical leadership; small, co-located teams of top talent; fewer layers of “translators” between the end customer and the developer; continuous integration; and test-driven development. It is fatal to allow a development team to disappear for a year before they put their software into production.”
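To make the test-driven development principle Gupta mentions concrete, here is a minimal sketch in Python. The scenario, function names and figures are purely illustrative (not from the interview): the test is written first to capture a user story, and only then is the smallest implementation written that makes it pass.

```python
import unittest

# Hypothetical user story: "orders over 100 get free shipping, otherwise
# a flat rate applies". In TDD the tests below are written before the
# function, and the implementation is kept as small as the tests demand.

def shipping_cost(order_total: float) -> float:
    """Return shipping cost: free for orders over 100, else a flat 4.99."""
    return 0.0 if order_total > 100 else 4.99

class TestShippingCost(unittest.TestCase):
    def test_free_shipping_over_threshold(self):
        self.assertEqual(shipping_cost(150.0), 0.0)

    def test_flat_rate_at_or_below_threshold(self):
        self.assertEqual(shipping_cost(100.0), 4.99)
        self.assertEqual(shipping_cost(20.0), 4.99)

if __name__ == "__main__":
    unittest.main()
```

Because such tests run on every commit, they pair naturally with the continuous integration Gupta also recommends: the build server executes the suite automatically, so a team cannot "disappear for a year" without the feedback loop catching regressions.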