Giving developers the time to improve software quality at the development stage is more cost-effective than fixing bugs during the testing phase
Week in, week out we hear stories about problems with new IT applications. Only recently there were news reports about testing delaying NHS IT projects.
Obviously, testing work has to be carried out to ensure that applications are only rolled out when they can provide the level of performance and reliability needed. However, if major problems are continually being found when applications are being tested, one has to ask whether there is a problem with the way applications are being developed in the first place.
As companies look to new applications to improve organisational efficiency or competitive advantage, developers are under constant pressure to build applications quickly and cost effectively. So, as you would expect, developers have been focused on speed in relation to writing code and building functionality that works.
What is the problem with that, you might ask? On the face of it, nothing. However, because developers are being pressurised to deliver, and deliver quickly, quality is not necessarily something that will be at the forefront of their minds. We all know that doing something quickly does not always equal doing something well.
The problem is that developers are typically trained to construct code and functionality whereas testing professionals are trained to deconstruct something that a developer has built.
Most developers will not be able to test for failure conditions such as the network going down, or consider how changing one line of code might affect other parts of the application. Nor will they be thinking about how their code handles errors.
They will test what they write and build to make sure it works, but very few developers have time to think about circumstances that could lead to their code or functionality failing.
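To make the distinction concrete, here is a hypothetical illustration in Python. The `parse_amount` function and its failure modes are invented for this sketch; the point is that the first test is the "it works" check most developers write, while the remaining two exercise the error paths that tend to be skipped under deadline pressure.

```python
import unittest

def parse_amount(text):
    """Hypothetical helper: parse a monetary amount, rejecting bad input."""
    if text is None or not text.strip():
        raise ValueError("empty amount")
    return round(float(text), 2)  # float("abc") raises ValueError for us

class ParseAmountTests(unittest.TestCase):
    # The "happy path" check that usually does get written:
    def test_valid_amount(self):
        self.assertEqual(parse_amount("19.99"), 19.99)

    # The failure cases that are easy to leave untested:
    def test_empty_input_raises(self):
        with self.assertRaises(ValueError):
            parse_amount("")

    def test_non_numeric_raises(self):
        with self.assertRaises(ValueError):
            parse_amount("abc")

if __name__ == "__main__":
    unittest.main(exit=False)
```

Writing the two failure-case tests takes minutes, but it is exactly the kind of deconstructive thinking the article argues is normally left to the testing professionals.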
Developers cannot be blamed for this; they typically have not been trained or given enough incentive to develop with quality in mind.
Today's competitive landscape, where companies are constantly striving to become leaner and more aggressive in strategy and delivery, has resulted in developers being put under pressure to develop faster than Michael Schumacher can get round a Formula One track.
The result is that developers are judged and measured against how well they keep to project timelines and, ultimately, how quickly they can deliver the code and functionality they have been tasked with. Quality becomes a poor relation to the project deliverables.
This needs to change. However, the change will only happen if management, IT directors in particular, first recognise there is a problem and second start to understand that it is worth encouraging developers to focus on improving the quality of their code.
IT directors need to take a long-term view and remember that making quality improvements at the development stage is much more cost-effective than reacting to problems the quality assurance team picks up, or indeed fixing a bug once the application has been deployed. They need to foster a cultural change so that developers recognise that they will be measured not only on speed, but also on the consistency and quality of their code.
Essentially, quality targets need to be put in place for developers so that the quality assurance team is only handed code once it is proven to be of a certain standard. So how can this work in practice?
Quality targets need to be established and quality "gates" need to be put in place. A quality gate is a process through which a deliverable must pass before it is accepted by developers as ready for systems testing.
Developers need to assess the code they are delivering against quality targets to see if it can pass through the quality gate. If the code does not meet the targets, it goes back to the developers for further work; otherwise it can be passed on for systems testing. But how can quality targets be met without compromising on project deadlines?
This is where structured processes and tools come in. Developers need to use these to make quality assessments and provide reports on code coverage and test results. These tools and processes act as a virtual adviser to the developer to ensure they develop with quality in mind, with minimal impact to the project schedule.
Outputs from such tools give an objective opinion on the overall quality of the application, ensuring that consistency is applied to software quality and there is tangible evidence that a quality target has been met.
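In practice, a gate of this kind can be as simple as a scripted check that compares a tool's measurements against the agreed targets. The sketch below assumes two invented targets, a statement-coverage threshold of 80 per cent and zero known open defects; real teams would choose their own metrics and figures.

```python
# Minimal sketch of a quality gate: a deliverable only proceeds to
# systems testing when measured quality meets the agreed targets.
# The metrics and thresholds below are invented for illustration.

COVERAGE_TARGET = 80.0   # minimum % of statements exercised by unit tests
MAX_OPEN_DEFECTS = 0     # known defects allowed at hand-over to QA

def passes_quality_gate(coverage_percent, open_defects):
    """Return True if the deliverable may be handed to systems testing."""
    return (coverage_percent >= COVERAGE_TARGET
            and open_defects <= MAX_OPEN_DEFECTS)

# A module with 85% coverage and no open defects passes the gate;
# one with 60% coverage goes back to the developer for further work.
print(passes_quality_gate(85.0, 0))  # True
print(passes_quality_gate(60.0, 0))  # False
```

The value of encoding the gate this way is that the pass/fail decision is objective and repeatable, which is exactly the tangible evidence of a quality target being met that the article describes.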
By arming developers with these processes and tools, organisations should see a decrease in development projects failing or running over budget and time. If developers can carry out thorough, timely testing on the units they produce, system testers can focus solely on testing business-critical processes rather than on low-level code.
Inevitably, this will improve application quality and ultimately project success, as software testers will have more time to test and assure the quality of the high-risk parts of the application.
IT directors must make these changes so that everyone involved in a development project is thinking about quality and to enable quality assurance teams to expose any risks in the applications rather than spend time debugging them.
Quality is not something that should be thought about once an application is developed; it should be considered from the start. If you were building a house, you would make sure you used good quality bricks and mortar. Likewise, when organisations are building applications they need to make sure they have good quality code.
Sarah Saltzman is technology manager at Compuware