I’m on a personal mission this month to learn as much as I can about software defects, software integrity and code defects right down to the kernel level — or at least as close as I can get.
The issue at hand is that software code analysis firms typically talk at a high level about code defects, but they rarely sit back and lay down a succinct definition of what constitutes a defect and how we measure it.
Put simply, defect density is:

Defect Density = Total Number of Known Defects ÷ Total Size of Application Code
Or in other words, defect density is the ratio of the number of defects to the size of the software program, with size typically expressed in lines of code (LOC) or function points (FP).
Defect density can be used to analyse individual software components prior to roll out, or it can be applied between release cycles to gauge how well a code base is improving (or not!) over time.
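To make the arithmetic concrete, here is a minimal sketch of the calculation and of a release-over-release comparison. The function name and the defect counts and code sizes are hypothetical examples, not taken from any particular tool.

```python
# Minimal sketch of the defect density calculation described above.
# All figures below are hypothetical, for illustration only.

def defect_density(known_defects: int, size_kloc: float) -> float:
    """Return defects per thousand lines of code (KLOC)."""
    if size_kloc <= 0:
        raise ValueError("code size must be positive")
    return known_defects / size_kloc

# Example: a 15 KLOC component with 30 known defects
release_1 = defect_density(30, 15.0)   # 2.0 defects per KLOC

# Same component a release later: grown to 18 KLOC, 27 known defects
release_2 = defect_density(27, 18.0)   # 1.5 defects per KLOC

# A falling density between releases suggests the code base is improving
print(release_1, release_2)
```

Normalising by size is what makes the comparison fair: the raw defect count alone would penalise a component simply for growing.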
Vocal companies in this field include Coverity and Microsoft; among its MSDN resources, Microsoft features a white paper on this subject authored by Nachiappan Nagappan and Thomas Ball, who say: “Software systems evolve over time due to changes in requirements, optimisation of code, fixes for security and reliability bugs etc. Code churn, which measures the changes made to a component over a period of time, quantifies the extent of this change. We present a technique for early prediction of system defect density using a set of relative code churn measures that relate the amount of churn to other variables such as component size and the temporal extent of churn.”
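The paper’s actual churn measures are more elaborate, but the core idea of a *relative* measure can be sketched as normalising churned lines by component size, as a rough illustration only; the function and figures below are my own assumptions, not the authors’ formulation.

```python
# Rough illustration (NOT the Nagappan & Ball formulation) of a
# relative code churn measure: churned lines normalised by the
# component's total size, so large components aren't unfairly flagged.

def relative_churn(churned_loc: int, total_loc: int) -> float:
    """Fraction of a component's lines changed over some period."""
    if total_loc <= 0:
        raise ValueError("component size must be positive")
    return churned_loc / total_loc

# Hypothetical example: 500 of a component's 10,000 lines changed
print(relative_churn(500, 10_000))  # 0.05, i.e. 5% of the component churned
```

The intuition the quote captures is that a component where a large fraction of the code has recently changed is a better candidate for elevated defect density than one that is merely large.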