Modern development - Hazelcast: a brief history of testing (and our chaos theory future)

This series is devoted to examining the leading trends that are defining the shape of modern software application development.

As we initially discussed here, with so many new platform-level changes now playing out across the technology landscape, how should we think about the cloud-native, open-compliant, mobile-first, Agile-enriched, AI-fuelled, bot-filled world of coding, and how do these forces now come together to create the new world of modern programming?

This piece’s full title is: Testing — characteristics of modern software development practices, by Nicolas Frankel, developer advocate at in-memory computing platform company Hazelcast.

Frankel writes as follows…

The software development industry might be considered to be 50 or 60 years old… and it still hasn’t reached maturity… and perhaps never will.

Though we are still using nearly the same building blocks, i.e. processors and memory, the underlying hardware technologies evolve very rapidly. The pace of evolution is even faster for the software that runs on top of them: operating systems, languages, frameworks, libraries and, last but not least, the applications that we craft.

It’s not my intent to single out a language, but most readers are probably aware of the term ‘JavaScript fatigue’. It describes what happens to developers when the latest hyped framework has an average lifespan of months before being replaced by a new trend. In that context, what is the common factor across all these stacks that defines ‘modern’ (or at least contemporary) software development practices?

Testing, as a trend

I’ve been working in IT for two decades and I think that the most important practice for a software delivery team is… testing. Testing? I admit this doesn’t sound very appealing, but I stand by it: testing is the common practice across all language stacks that characterises the modernity of a development team.

When I started my career, testing (as a practice) was already in place and had become a ‘thing’. However, it had a completely different meaning. Testing meant manual testing. If you worked on a project with some quality procedures in place, it also meant filling in a form with the details of the component tested and your signature. That way, we could “guarantee” that the delivered software was working as expected. Of course, that approach had a lot of loopholes. For example, what if the tested component uses code that is shared across multiple other components? And should we follow the same quality process during maintenance?

JUnit, created around 1997 but popularised a bit later, as well as its xUnit counterparts in other tech stacks, addresses the second question. These are frameworks that express the testing steps as code. Once tests are automated, it’s easy to integrate them into a build tool and run them whenever necessary, such as after code has changed. This makes it much easier to catch regressions: changes that introduce bugs into previously working software.
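As a hypothetical illustration (the PriceCalculator class and its discount rule are invented purely for the example), a minimal JUnit 5 test might look like this; once the expected behaviour is written down as code, a build tool can re-run it after every change and flag regressions automatically.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical component under test: a trivial discount calculator.
class PriceCalculator {
    double priceWithDiscount(double price) {
        // Orders of 100 or more get a 10% discount.
        return price >= 100.0 ? price * 0.9 : price;
    }
}

// The expected behaviour is encoded as code, so it can be re-run
// automatically instead of being ticked off on a paper form.
class PriceCalculatorTest {

    @Test
    void appliesTenPercentDiscountToLargeOrders() {
        assertEquals(180.0, new PriceCalculator().priceWithDiscount(200.0), 0.001);
    }

    @Test
    void leavesSmallOrdersUntouched() {
        assertEquals(50.0, new PriceCalculator().priceWithDiscount(50.0), 0.001);
    }
}
```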

Once tests can be run through a command-line instruction, it’s a no-brainer to integrate them not only into a build tool, but into a continuously running build environment. Those are now known as Continuous Integration (CI) servers. From that point on, we were able to automatically check for regressions in existing software. But all of this was still based on a flawed assumption: that successfully testing individual components means successfully testing the complete assembled software. It’s akin to testing every nut and bolt of a car… and concluding that the car has been tested.

The birth of end-to-end

To actually test the software with its dependencies, including databases, email servers and so on, developers started increasing the scope of what was tested: from individual components, through the interactions between components (integration testing), to the whole software (end-to-end testing). The debate is still ongoing over what constitutes unit testing versus integration testing, and whether one is superior to the other, but teams are now at least familiar with both concepts and can decide what works best in their context.

The last few years saw rapid adoption of containers. This approach copes with dependency version mismatches across environments by packaging the software and its dependencies into the same artifact, called an image. In the Java ecosystem, a library called Testcontainers provides a programmatic API to fetch and launch containers during testing, making integration testing even easier.
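As a rough sketch of what that looks like (assuming the Testcontainers JUnit 5 and PostgreSQL modules, the PostgreSQL JDBC driver and a local Docker daemon are available; the image tag and query are arbitrary choices for the example), an integration test can start a throwaway database container and run real queries against it:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

// Integration test against a real, throwaway PostgreSQL instance rather
// than a shared database or an in-memory stand-in.
@Testcontainers
class DatabaseIntegrationTest {

    // Testcontainers starts this container before the tests run and
    // removes it afterwards.
    @Container
    private static final PostgreSQLContainer<?> postgres =
            new PostgreSQLContainer<>("postgres:15-alpine");

    @Test
    void canQueryTheContainerisedDatabase() throws Exception {
        try (Connection connection = DriverManager.getConnection(
                postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword());
             ResultSet result = connection.createStatement().executeQuery("SELECT 1")) {
            assertTrue(result.next());
        }
    }
}
```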

I haven’t yet mentioned testing in production: the foundation of this approach is that testing in other environments won’t find all bugs, so one should continue testing once the software has been delivered to production. I’d also need to mention performance testing and its different flavours (endurance testing, load testing and stress testing), penetration testing, etc. There are so many different kinds of testing that the subject could fill a whole book, and one has likely already been published.

Yet despite all this testing, engineering is ultimately about optimising the management of finite resources. Engineers focus on the key aspects of the product and invest more testing time in those areas. For example, at Hazelcast, we’re known for the performance and fault-tolerance of our in-memory computing platform. To test performance, we spend a lot of time running various performance tests targeting high throughput and low latency, followed by a detailed investigation of every single aspect of the system, such as garbage collection, network and hardware utilisation.

Absurd crashes & chaos theory

In order to make sure that the system is reliable under the worst conditions, we developed a proprietary tool that constantly tries to make it crash in absurd scenarios. Besides the basic random killing of servers and the introduction of artificial network partitions, we also:

  • Introduce non-symmetric network partitions: one side sees the connection as broken, but the other thinks everything works.
  • Freeze processes: the machine itself works properly, but the process freezes.
  • Introduce artificial network latency on some of the connections.
  • And much more (than I can mention here)…

We also combine those issues to create even more unlikely scenarios, all of this under heavy load and sometimes during a version upgrade. In system testing, this is known as chaos testing; we simply adapted the practice to our context. A small sketch of one such fault appears below.
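Our tool itself is proprietary, but the idea behind one of the faults above, artificial network latency, can be sketched in plain Java: a tiny TCP proxy that sits between a client and a service and sleeps before forwarding each chunk of bytes. The ports, target and delay below are arbitrary values chosen for the illustration, not part of any real product.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// A minimal latency-injecting TCP proxy: test clients connect to LISTEN_PORT
// and their traffic is forwarded to the real service with an artificial
// delay added in each direction.
public class LatencyProxy {

    static final int LISTEN_PORT = 6000;        // where test clients connect
    static final String TARGET_HOST = "localhost";
    static final int TARGET_PORT = 5701;        // the service under test
    static final long DELAY_MS = 200;           // injected one-way latency

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(LISTEN_PORT)) {
            while (true) {
                Socket client = server.accept();
                Socket target = new Socket(TARGET_HOST, TARGET_PORT);
                pump(client, target);   // client -> service, delayed
                pump(target, client);   // service -> client, delayed
            }
        }
    }

    // Copies bytes from one socket to the other on a background thread,
    // sleeping before each write to simulate a slow network link.
    static void pump(Socket from, Socket to) {
        new Thread(() -> {
            try (InputStream in = from.getInputStream();
                 OutputStream out = to.getOutputStream()) {
                byte[] buffer = new byte[8192];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    Thread.sleep(DELAY_MS);
                    out.write(buffer, 0, read);
                    out.flush();
                }
            } catch (IOException | InterruptedException ignored) {
                // Connection closed or test torn down; nothing to do.
            }
        }).start();
    }
}
```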

When developers think about modern software development practices, they rarely consider testing, “because everybody tests!” today. While that’s true, the domain of testing is so rich that a team’s testing practices cannot be summed up in a sentence. Testing itself is not a new idea: it is born of the idea that software should be as bug-free, performant and fault-tolerant as possible. The way a team approaches testing – and its different varieties – says a lot about its maturity in software development and how much it values this reliability.

For that reason, I’m a firm believer that testing can (and should) be counted among the top modern software development practices.

