The environmental impact of common architecture patterns

In this guest post, Chris Darvill, vice president of solutions engineering covering Europe, Middle East and Africa (EMEA) at cloud-native API platform provider Kong, talks about the environmental benefits of moving from monolith to microservices-based application architectures. 

The most popular transformation companies are making is the shift from monolith to microservices. With many business-critical processes still powered by systems built in the 1970s or earlier, it’s a transformation that won’t let up any time soon, and one that can drive the next wave of sustainability.

It’s easy to think that replacing systems built in the 1970s with modern technology delivers an immediate efficiency gain. Microservices are independently scalable and can be individually configured, resulting in less wasteful usage of resources.

However, as we decompose the monolith into microservices, we go from a handful of in-app connections to an exponentially increasing number of microservices all talking to each other over various networks – creating a considerable increase in network traffic.

We need to ensure this increase does not translate into a net increase in resource consumption. To prevent this, we should use the most appropriate transfer protocol for the traffic. Consider implementing services in gRPC rather than REST: tests have shown gRPC to be seven to 10 times faster, thanks to its use of HTTP/2, streaming, and the highly compressed Protobuf message format. Additionally, think about compressing large payloads before sending them over the wire.
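As an illustration of that last point, here is a minimal sketch using Python’s standard-library gzip support to compress a JSON payload before it goes over the wire. The payload shape and function names are invented for illustration; a repetitive payload like this compresses dramatically.

```python
import gzip
import json

def compress_payload(payload: dict) -> bytes:
    """Serialise a payload to JSON and gzip it before sending over the wire."""
    return gzip.compress(json.dumps(payload).encode("utf-8"))

def decompress_payload(data: bytes) -> dict:
    """Reverse of compress_payload, as the receiving side would run it."""
    return json.loads(gzip.decompress(data).decode("utf-8"))

# A large, repetitive payload compresses dramatically.
payload = {"records": [{"id": i, "status": "active"} for i in range(1000)]}
raw_size = len(json.dumps(payload).encode("utf-8"))
compressed = compress_payload(payload)
print(f"raw: {raw_size} bytes, compressed: {len(compressed)} bytes")
```

Fewer bytes on the wire means less work for every router, proxy and NIC along the path, at the cost of a little CPU at each end.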

Service mesh: a network necessity

With an increasing amount of network traffic, it becomes imperative to manage that traffic: eradicating unnecessary requests, shortening the distance travelled, and optimising the way messages are routed. This can be achieved with a service mesh. By managing all inbound and outbound traffic on behalf of every microservice – implementing load balancing, circuit breaking and reliability functions – a service mesh can minimise unnecessary requests and provide visibility into the requests that do take place.
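A service mesh typically applies circuit breaking in a sidecar proxy. As a rough illustration of the idea only – not any particular mesh’s implementation – a minimal circuit breaker might look like this: after a few consecutive failures, requests are rejected locally instead of being sent across the network to a service that is already struggling.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker sketch: after max_failures consecutive
    failures, reject calls locally for reset_after seconds instead of
    sending more traffic to a failing downstream service."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # timestamp when the circuit opened

    def call(self, fn):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                # Circuit is open: fail fast, no network traffic generated.
                raise RuntimeError("circuit open: request not sent")
            # Cooldown elapsed: allow a trial request ("half-open" state).
            self.opened_at = None
            self.failures = 0
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result
```

Every request the open circuit rejects is a request that never crosses the network, which is exactly the kind of waste reduction the mesh provides for free.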

In our digital world, consumers expect a real-time response after an interaction. This has seen a shift from batch processing to real-time processing over the last several years, to deliver the enhanced capabilities that people expect.

Consider what real-time means: the way a system immediately reacts to something that has happened; an event, essentially.

The way this is implemented with RESTful APIs is through polling. A client calls the API every X seconds to check whether something has happened. If nothing has, it waits and polls again X seconds later. If something has happened, it takes that data and triggers downstream processing (for example, updating a customer’s direct debit details on their profile). However, 98.5% of API polls return no new information, which means that most polls are a waste of energy.
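To see where the waste comes from, here is a minimal simulation of the polling pattern. The one-event-in-60 rate and the function names are invented for illustration; the point is the ratio of empty polls to useful ones.

```python
import itertools

def poll_for_events(check, poll_count):
    """Naive polling loop: call check() every cycle and count the
    calls that return no new data."""
    events, wasted = [], 0
    for _ in range(poll_count):
        result = check()
        if result is None:
            wasted += 1  # an API call that returned nothing new
        else:
            events.append(result)
    return events, wasted

# Simulate a source that produces an event only once every 60 polls.
counter = itertools.count()
def check():
    n = next(counter)
    return f"event-{n}" if n % 60 == 0 else None

events, wasted = poll_for_events(check, 600)
print(f"{len(events)} events, {wasted} wasted polls")  # 10 events, 590 wasted polls
```

Roughly 98% of the calls here did nothing but burn cycles and network hops – in line with the figure above.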

Event-driven architectures (EDA) only act when there is something that needs to be done – consuming energy when it’s actually needed. When an “event” occurs, such as a payment details update, downstream services can be invoked to do the relevant updating.
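As a sketch of the contrasting pattern, here is a minimal in-process event bus – a stand-in for a real message broker, with the event name and payload invented for illustration. The handler runs only when an event is actually published; no cycles are spent checking for nothing.

```python
from collections import defaultdict

class EventBus:
    """Minimal in-process publish/subscribe bus: handlers run only
    when an event actually occurs."""

    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self.handlers[event_type]:
            handler(payload)

bus = EventBus()
updates = []
# Downstream service reacts only when payment details actually change.
bus.subscribe("payment_details_updated", lambda payload: updates.append(payload))
bus.publish("payment_details_updated", {"customer": "c-123"})
```

Compared with the polling loop above, work is proportional to the number of real events rather than the number of checks.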

Reusable APIs vs. point-to-point integration

A key principle of green software engineering is to use fewer resources at higher utilisation, reducing the amount of energy wasted by resources sitting in an idle state. This correlates to integration patterns: the more we reuse APIs, the less time they’re idle and therefore the less energy they waste (assuming every API call is necessary).

By contrast, in a point-to-point approach, code is built for one specific purpose: to connect A to B. It cannot be reused to connect B to A, sending data in the opposite direction, nor to connect A to C.

Assuming the average company integrates over 400 data sources, a point-to-point approach equates to an unmanageable 159,600 single-use connections – 400 × 399, one for each ordered pair of systems. That’s 159,600 individual services, all deployed on infrastructure running somewhere, using energy from somewhere to power them to sit idle the vast majority of the time. What a waste.
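The arithmetic behind these figures is simple: with n systems, every ordered pair needs its own link, whereas an API-first approach needs only one reusable API per system. A quick sketch:

```python
def point_to_point_connections(n: int) -> int:
    """Directed point-to-point links among n systems: each system needs
    its own single-purpose connection to each of the other n - 1."""
    return n * (n - 1)

def api_first_connections(n: int) -> int:
    """API-first: each system exposes one reusable API instead."""
    return n

print(point_to_point_connections(400))  # 159600 single-use connections
print(api_first_connections(400))       # 400 reusable APIs
```

The point-to-point count grows quadratically with the number of systems; the API-first count grows linearly.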

With this many connections, the overall architecture is complex. Pathways between systems are convoluted and unexpected, resulting in “spaghetti code.” Messages take long, indirect routes rather than the shortest path from A to B, and wasted traffic means wasted energy consumption.

On the other hand, an API-first approach leads to much simpler architectures and highly reused services, particularly those sitting around back-end systems. This means more efficient message routing and load balancing, simpler code, and higher utilisation of deployed code.

Playing our part

Whether motivated by conscience or by the fact that more efficiency means higher profits for the business, we need to acknowledge and accept there is a problem. Not just that global warming exists but that IT is a large and growing part of that problem. We know we need to fly less – why don’t we think about how we build and use technology with the same guilt?

We can make a difference. Consider following green engineering principles when building or versioning an API. When breaking down a monolith into microservices, minimise the microservice traffic. Remove unnecessary network hops.

We need to work together to figure this out and right now we’re just at the beginning.
