The entire biosphere of software application development is speeding up.
Users download, consume and (of course) delete applications faster than ever before, and disposable mobile usage patterns only exacerbate this trend. As a direct consequence, programmers have increasingly adopted agile development principles to match the speed of the market.
Under an agile doctrine, software development is focused on the need to release early and often – continuously even.
The agile programming team positively welcomes changes to user requirements that divert or alter the direction of code already in motion. Even late changes are welcomed, as they lead to greater competitive advantage for the customer, who is ultimately (in theory) served with a better application.
Surely, then, it is tougher to control software quality in this vortex of dynamism? The Agile Manifesto demands "continuous attention to technical excellence and good design" throughout, but we know that no developer and no team is perfect. Other agile principles insist on business-developer collaboration and face-to-face conversation, but is any of this enough to ensure the health of your software in the real world?
The agile purists' view vs. the applied view
Purists will argue that the agile process involves continuous testing for software quality, and thus that standalone testing tools are not needed. This assertion is a little overemphatic for most reasonably prudent software architects. It is the reason we have both “pure” and “applied” mathematics – one works on paper, the other in the real world.
Software quality itself is something of a moveable feast. For some it means straight quality control to examine whether a product is broken or not, but it can also relate to quality assurance to examine whether a product meets the needs and expectations of its users. We can even form subdivisions between software “functionality” quality and software aspects classed as non-functional, that is, more structurally focused on strength and maintainability.
Given this brief sketch of the agile software quality envelope, we can perhaps see why so many suppliers have developed products to serve this space. Sometimes shrouded in a wider application lifecycle management (ALM) suite of tools, software quality systems range in shape and focus from code testing to penetration testing to stress testing to usability and performance testing.
Size (of code) matters
A seasoned player in this space is Compuware with its application performance management (APM) software. The firm’s eponymously named Compuware APM encompasses some language-specific (or at least language nuance-aware) intelligence to get to the heart of where performance bottlenecks or errors may lie.
Compuware explains that, with modern high-efficiency programming languages such as Scala, quality testing needs to be architected with an awareness that developers can write code that is much shorter than traditional code. While this helps programmers create new software and bring new services to market much faster, it also makes it markedly more difficult to see into the application and identify the root cause of any performance issues during the testing stage.
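The point can be illustrated with a short sketch (in Python for brevity, though the argument applies equally to concise Scala): a terse one-liner performs several distinct pieces of work, so a profiler that reports "this line is slow" cannot say which step is actually at fault. The data and function names here are hypothetical.

```python
# Illustrative only: the same pipeline written tersely and then expanded.

def load():
    return ["b", "a", "a", "c"] * 1000   # stand-in for real data loading

def normalise(s):
    return s.upper()                      # pretend this is the expensive step

# Terse form: load, transform, de-duplicate and sort in one expression.
# A line-level profiler sees only a single slow line here.
terse = sorted(set(map(normalise, load())))

# Expanded form: identical behaviour, but each step is now separately
# attributable when timing or profiling the code.
raw = load()
normalised = [normalise(s) for s in raw]
unique = set(normalised)
expanded = sorted(unique)

assert terse == expanded == ["A", "B", "C"]
```

Short code is not worse code, but during testing the expanded form gives the performance analyst a step-by-step trail that the one-liner hides.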
"In the agile world, development, test and operations teams can no longer work in silos. They need to work collaboratively to reduce development time without sacrificing quality – or users will be just as agile in switching to a competitor," says Michael Allen, director of APM at Compuware.
"All this demands a continuous quality assurance process across the delivery chain, combining production environment tests with real world feedback and cloud-based testing. Automating this process will ensure problems are discovered early on with sufficient detail and context to eliminate any guesswork."
We started off talking conceptually about "application biospheres". At a higher level, Compuware insists on an appreciation of the other environmental factors at work inside a given agile application's own microclimate. Only by appreciating these factors can thorough quality testing be performed.
Agile moves fast, testing must move faster
In the real world, testing must be multifaceted and multifarious. An application could be optimised for one localised market with wildly different web connection speeds from another. An application inside an e-commerce platform relying on a multitude of other services all working in harmony needs testing that embraces the entire service delivery chain. An application needs external testing with real data to assess its suitability for cloud-based deployment. Agile moves fast, so testing needs to move just as fast – if not faster.
One way to make testing faster is to focus it on the areas of the code that pose the greatest risk, such as new code, or legacy code affected by change.
“Too many companies chase ‘code coverage’ numbers. They may reach a point where 50% of their code is covered by an automated test, but how do they know if they have done ‘enough’ testing? What about the 50% that isn’t covered by a test? With code coverage, each line of code is treated as if it is of equal value, including code for a critical feature as well as dead code and debugging code,” says Kristin Brennan, senior director of product marketing at code testing and analysis company Coverity.
Brennan advocates an approach in tight agile cycles where development teams are able to focus their testing efforts on the most critical code and ensure that the most serious defects are being found as the code is created.
Garbage in, garbage out
Mark Warren, European marketing director at version management specialist Perforce, warns of the old “garbage in, garbage out” axiom. He says that if quality assurance is only performed towards the end of a project (or even as part of a continuous integration process), feedback to the developer on build failures, test failures or poor performance always comes late in the day, and therefore needs more effort to correct than if the code had been right in the first place. Much better, and cheaper, to have good quality from the start.
“One approach that many agile teams adopt when moving into continuous delivery is a mainline model for source control. This is not a new concept; Perforce founder and CEO Christopher Seiwald was writing about it in the last century. The proposition is that a single mainline should contain an always buildable and complete version of the project source code. This means all developers have visibility into all changes, which should avoid the ‘integration hell’ that typically happens when trying to merge parallel branches of development,” says Warren.
Shelve, peer review, pre-integration
But keeping this clean and essentially buildable mainline requires some thought and process. This means that changes intended for the mainline must be validated before being committed. “That validation may include ‘shelving’ the changes so they are stored securely while peer review and pre-integration testing can be performed,” says Warren.
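The shelve-then-validate flow can be modelled in a few lines. This is an illustrative sketch of the workflow's logic, not Perforce's actual API: a change is parked on a shelf until review and pre-integration tests pass, so the mainline stays buildable at all times.

```python
# Illustrative model of validate-before-commit (not a real Perforce API).

mainline = ["revision-1"]   # the always-buildable shared line of development
shelf = {}                  # changes parked pending review and testing

def shelve(change_id, passes_tests, reviewed):
    """Store a change securely with its validation status, without committing it."""
    shelf[change_id] = {"passes_tests": passes_tests, "reviewed": reviewed}

def try_commit(change_id):
    """Promote a shelved change to the mainline only if fully validated."""
    change = shelf[change_id]
    if change["passes_tests"] and change["reviewed"]:
        mainline.append(change_id)
        del shelf[change_id]
        return True
    return False

shelve("fix-login", passes_tests=True, reviewed=True)
shelve("wip-report", passes_tests=False, reviewed=False)

assert try_commit("fix-login") is True    # validated: lands on the mainline
assert try_commit("wip-report") is False  # stays shelved; mainline stays clean
```

The unvalidated change is not lost – it sits on the shelf, visible to reviewers, until it earns its place on the mainline.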
The company provides code review and collaboration tools such as Perforce Swarm, which promises to automate much of this process and capture discussion around the changes for future audit purposes or just to try to work out what was going on.
Olivier Bonsignour, European vice-president of product development at software analysis and measurement company Cast, offers a departmental manager's perspective, asserting that while agile has shown success in small, greenfield projects, most organisations have found it difficult to scale agile to mission-critical systems that involve legacy components.
“Many organisations struggle to run the continuous integration environment at meaningful scale. They find it difficult to follow all the tenets of Scrum to produce high-quality output. Code quality is frequently compromised when agile is not followed thoroughly. This creates risk for the business, which may get stuck with a software architecture that does not work. Teams may use methodologies such as Scrum, but take out certain aspects such as doing daily builds, which reduces the effectiveness of the methodology,” says Bonsignour.
Cast’s latest release of its flagship application intelligence platform includes features designed specifically to address technical vulnerabilities as agile techniques are increasingly adopted across ever more complex, multi-tiered, multi-technology applications.
“Agile has driven automation. We need to automate software build processes and testing, because there is little time for humans to do commoditised work. Part of the automation missing is an intelligent source configuration management capability across all components of critical systems – not just the most recent ones. This industrialisation of software development is essential for IT-intensive businesses to survive, lest the IT organisation is left to drown in maintenance issues, technical glitches and security breaches,” says Bonsignour.
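The automation Bonsignour describes is, at its core, a pipeline of quality gates that a change must clear before it is accepted. A minimal sketch, with hypothetical step names standing in for real build, test and analysis commands:

```python
# Illustrative automated quality gate (hypothetical steps, not a real CI tool).

def run_pipeline(steps):
    """Run named steps in order; stop at the first failure."""
    for name, step in steps:
        if not step():              # each step is a callable returning pass/fail
            return f"failed at: {name}"
    return "all checks passed"

# Stand-ins for a real compile, unit-test and static-analysis step.
good_change = [("build", lambda: True),
               ("tests", lambda: True),
               ("analysis", lambda: True)]
bad_change = [("build", lambda: True),
              ("tests", lambda: False)]

print(run_pipeline(good_change))  # all checks passed
print(run_pipeline(bad_change))   # failed at: tests
```

Because the gate runs on every change with no human in the loop, it turns the "commoditised work" of build-and-test into something the team never has to find time for.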
Agile testing without automation is worth zero?
So it would seem, at least from the testing suppliers' point of view, that agile testing without automation is worth zero. Continuous agile delivery across structural and architectural software quality factors needs to incorporate everything from robustness to penetration and security, right through to performance under stress and even the transferability or changeability of the application code in hand.
Software today needs to change fast, and agile software changes faster than most. Pulling off competent analysis and refactoring of application code in a high wind is no mean feat, and the agile software testing suppliers know this. These tools were perhaps not quite envisioned at the inception of the Agile Manifesto itself back in 2001, but agile proponents have not voiced major objections to them, so they may just be useful in some cases.