This Computer Weekly Developer Network series is devoted to examining the leading trends that go towards defining the shape of modern software application development.
As we have initially discussed here, with so many new platform-level changes now playing out across the technology landscape, how should we think about the cloud-native, open-compliant, mobile-first, Agile-enriched, AI-fuelled, bot-filled world of coding and how do these forces now come together to create the new world of modern programming?
This contribution comes from Dr. Gerald Pfeifer in his capacity as CTO at SUSE, a company known as a provider of enterprise-grade Linux that also specialises in an accompanying set of open source software-defined infrastructure technologies and application delivery tools.
Pfeifer writes as follows…
Hold on now, this isn’t going to hurt. But in case your chest just contracted a bit – relax. I am not intending to use you as guinea pigs fed semi-random code du jour. That being said… I am going to test, test, test your ability to embrace testing, because testing is never overrated. In fact, you can hardly test too much.
The question is “when” and “how” and “what” you test.
It’s long been a common understanding that the earlier you catch an issue, the less effort is involved in addressing it… and the cheaper it is to fix. Not to mention the fact that you will upset fewer colleagues (or indeed users!) whom you then have to calm down and reassure that the code has been remediated. So, in general, ‘the earlier, the better’ should be our mantra here.
CI/CD/CD (or Continuous Integration/Continuous Delivery/Continuous Deployment) are common buzzwords these days, alongside DevOps. They describe a tight, virtuous cycle of developing in small, frequent increments, integrating those changes, making the result available (Delivery) and rolling it out (Deployment).
While technically not a strict requirement, the success or failure of a CI/CD approach closely hinges on testing being part of the integration phase, beyond merely building the software.
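The gating idea described here – a change is only integrated once both the build and the tests succeed – can be sketched in a few lines. This is a hypothetical illustration, not any project's actual tooling; the `ci_gate` function and its arguments are invented for the example:

```python
# Minimal sketch of a CI integration gate (hypothetical, for illustration):
# a change is accepted only if the build succeeds AND every test passes.

def ci_gate(build, test_suite):
    """Run the build, then the whole test suite; accept only on full success."""
    if not build():
        return False                      # a broken build never reaches testing
    return all(test() for test in test_suite)

# A change whose build works and whose tests all pass is integrated...
accepted = ci_gate(lambda: True, [lambda: True, lambda: True])

# ...while one that builds but regresses a single test is rejected.
rejected = ci_gate(lambda: True, [lambda: True, lambda: False])
```

In a real pipeline the callables would invoke compilers and test runners rather than lambdas, but the shape is the same: testing sits inside the integration step, not after it.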
Tighten up testing
Successful open source projects such as GCC, the GNU Compiler Collection, which builds large parts of a Linux distribution, have for decades required running a test suite before submitting a patch for inclusion. Others, like LibreOffice, leverage tools like Gerrit to track changes and tightly integrate those with automation tools like Jenkins that conduct builds and tests.
For such integration testing to be effective long term, we need to take it seriously and neither skip it (“My change is obviously beneficial”) nor ignore the regressions it introduces (“There already was some breakage before, adding a bit now isn’t too bad, and we can fix that later”).
Extensive automation is a key success factor, as is enforcing policies, whether via a toolset that gates the inclusion of changes on test results or by other means.
If your workflow involves code reviews and approvals, doing automated testing before the review process even starts is a good approach.
Much is contingent on the extent and coverage of test scenarios. Admittedly, creating individual tests for something like a compiler is comparatively easy; yet, as LibreOffice demonstrates, even graphical applications, where visual appearance is a key element, can be tested automatically.
Both LibreOffice and GCC share an important trait with other successful projects focusing on quality: whenever a bug is fixed, a new test is added to their ever growing and evolving regression test suites. That ensures the net being cast becomes ever better at catching, and hence preventing, not only old issues creeping back in but also new ones slipping through. The same should apply to new features, though when the same developers contribute both the new code and its test coverage, there is a risk of a blind spot.
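The "fix a bug, add a test" discipline can be made concrete with a small sketch. The function and test names below are invented for illustration; imagine `word_count` once miscounted input with trailing whitespace, and the fix landed together with a test that pins that exact case down:

```python
# Hypothetical illustration of growing a regression suite alongside fixes.

def word_count(text):
    # str.split() with no argument collapses repeated and trailing
    # whitespace, which is the behaviour the (imagined) fix restored.
    return len(text.split())

def test_word_count_basic():
    assert word_count("free and open source") == 4

def test_word_count_trailing_whitespace_regression():
    # Added when the bug was fixed; keeps the old issue from creeping back
    # on every future run of the suite.
    assert word_count("hello world  ") == 2

test_word_count_basic()
test_word_count_trailing_whitespace_regression()
```

Each fixed bug leaves a permanent tripwire behind, which is exactly why such suites keep getting better at catching old issues.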
And, yes, all the above is applicable to pretty much any kind of software: command-line, graphical, web, or even a full-fledged operating system.
The openSUSE project and SUSE’s Enterprise Linux team, for example, have created openQA, which we use to test even an entire operating system – from the boot process through starting a graphical user interface, logging in, and invoking a web browser and verifying its display. To give you an idea, for SUSE Linux Enterprise Server 15 SP2 alone there have been 107,361 such runs, of which 34,311 (roughly a third) involved a full installation. That’s a good example of testing early, often, automated and comprehensively. As is openSUSE Tumbleweed, a rolling Linux distribution that integrates a constant flow of updates to its thousands of components, yet remains surprisingly stable by virtue of openQA.
Last, but not least, definitely leverage users as testers – if they are aware and do so voluntarily! openSUSE Tumbleweed, Firefox Nightly, Chrome Canary, and LibreOffice nightly builds are nice examples. All of these ship to users after, and in addition to, their automated tests.
Look Ma, no more bugs?