In the land of software testing, static analysis reigns as king


This is a guest post by Rutul Dave of Coverity, a company that builds tools and technology to equip developers with resources, techniques and practices to help maximise the integrity of software.

I recently started a discussion on the 'Static Analysis' LinkedIn forum on the topic of "what can static analysis find that other forms of testing cannot" -- which led to a healthy and informative discourse among the participants. I was so impressed with the debate that I wanted to share some of the most thought-provoking ideas from the forum.

Among the 40 comments I received, here are the major points that stuck with me:

#1 -- It's not simply a question of which metric or metrics static analysis improves compared with other forms of testing. There are various ways to look at how static analysis provides a cost-effective and usually easy-to-use way to improve the quality of code as it is developed. I always focus on the development cycle to evaluate the value this method of testing brings.

Developers are already aware that the cost to repair a defect increases the longer it is allowed to persist in the software development life cycle. For this reason, there is value in addressing defects as soon as they are detected, when they cost less time and effort to repair. By lowering defect counts during development, organisations can lower the cost of development, delivering higher-integrity code to testers that requires less test case generation for effective coverage.

#2 -- Defects in code that is not executed during normal operation, such as error-handling routines, are usually close to impossible to exercise with most other forms of testing. Static analysis really helps here. A good example is its ability to spot what I call an "invisible defect," such as memory corruption or a leaked system resource.

By the time such a defect manifests as a visible error, execution is usually far past the place in the program where the defect lives. Hunting for these defects with traditional testing is like trying to identify the fatal disease by figuring out the time of death. By looking for the defect itself, such as a null pointer dereference, rather than the visible error it eventually causes, such as a program crash, static analysis tests code for invisible defects.
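To make this concrete, here is a minimal sketch of my own (not from the forum) of the kind of code a static analysis tool would flag. Both defects sit on an error-handling path that ordinary testing rarely exercises:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical example: two "invisible" defects, flagged at the
     * point they occur rather than where they cause a visible failure. */
    char *read_first_line(const char *path)
    {
        FILE *f = fopen(path, "r");   /* may return NULL */
        char *line = malloc(256);

        /* Defect 1: 'f' is dereferenced without a NULL check. If the
         * file is missing, the crash surfaces here, far from whichever
         * test forgot to cover the missing-file case. */
        if (fgets(line, 256, f) == NULL) {
            free(line);
            return NULL;              /* Defect 2: this error path leaks
                                         'f'; the descriptor leak only
                                         becomes visible after many calls. */
        }

        fclose(f);
        return line;
    }

A functional test that always supplies a valid, non-empty file never executes the error path, so neither defect ever surfaces; static analysis reasons about the path without having to run it.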

#3 -- Static analysis can help show the absence of bugs, while other testing methods usually focus on showing their presence. For example, while functionally testing a mobile phone application, a QA tester will be looking for the visible bugs: missing functionality and unexpected behaviour in the context of what he or she is testing. But what about the hidden resource leak or uninitialised variable that will result in an error only after a certain number of iterations, or in another part of the software?
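Again as an illustrative sketch of my own, here is the sort of uninitialised-variable defect functional testing can easily miss:

    /* Hypothetical example: 'total' is never initialised, so the first
     * loop iteration adds to whatever garbage was on the stack.
     * Functional tests often pass by luck when that stack slot happens
     * to be zero; static analysis flags the uninitialised read on
     * every path. */
    int sum(const int *values, int count)
    {
        int total;                    /* defect: not initialised to 0 */
        for (int i = 0; i < count; i++)
            total += values[i];       /* reads uninitialised 'total' */
        return total;                 /* garbage when count == 0, too */
    }

Because the wrong value may only corrupt a result consumed much later, or in another part of the software, the symptom appears far from the cause, which is exactly the kind of defect the forum participants had in mind.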

#4 -- There are hidden benefits when static analysis finds defects in difficult-to-understand code, even in situations where the findings turn out to be false positives rather than true defects. The main advantage is that it forces the developer to look at the code again and make improvements that reduce code complexity and help in maintaining the code going forward. (I should stress, however, that static analysis is generally known for its low rate of false alarms, and the warnings it issues often correlate very well with real defects. So if an issue is flagged but isn't necessarily a problem, it's still worth looking into. Otherwise, why would it be singled out? There is probably room for improvement.)


The forum was a good reminder of how topical the issue of software testing has become for businesses. I'd say it's probably one of the most time-consuming and frustrating processes for a developer, especially one without access to the right tools. The sheer complexity and size of the software being developed these days makes developer testing a challenge that won't go away any time soon, nor one that can be ignored.

Consider these statistics:

  • For every thousand lines of code that commercial software developers produce, there may be as many as 20 to 30 bugs on average.
  • Defects become exponentially more expensive to fix the further they progress through the development cycle; fixing software in the field is at least 30 times more costly than fixing it during development.


For some fascinating data on software testing, take a look at the Software Integrity Risk Report, a commissioned study conducted by Forrester Consulting on behalf of Coverity.

I learned a lot from this forum and I am glad I walked away with the key points I've mentioned. But most importantly, I have been able to reconfirm what I've known for a while now: static analysis is definitely "King" in the world of software testing. There is simply no substitute.
