Green coding - MinIO: An unlikely problem in 'modern' software environments

This is a guest post for the Computer Weekly Developer Network written by Keith Pijanowski in his role as subject matter expert (SME) for Artificial Intelligence & Machine Learning (AI/ML) at MinIO.

MinIO is known for its high-performance, Kubernetes-native object storage. The company’s open source, software-defined, Amazon S3-compatible object storage system is optimised for the private cloud.

MinIO offers enterprise features including inline erasure coding, bit-rot detection, state-of-the-art encryption, active-active replication, object locking, lifecycle management and identity + access management.

Pijanowski writes in full as follows…

When I first started my programming career, I was responsible for a bug that crashed a VAX 6000 series from Digital Equipment Corporation. 

This particular VAX was the development environment for the entire company. The bug was an infinite loop that occurred under conditions I did not account for. Additionally, it was more than a CPU loop – I was requesting memory and disk space with each iteration. The VAX did the best it could, but eventually, it could give no more and fell over. 
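To make the pattern concrete, here is a minimal sketch in Go (not the original VAX-era code – the function, its names and the inputs are purely illustrative) of a loop whose exit condition is never met for an unanticipated input, allocating memory and writing to disk on every pass:

```go
package main

import (
	"log"
	"os"
)

// processUntilDone sketches the bug: the loop assumes `remaining` counts
// down to zero, but a negative input (the unanticipated condition) means
// it never does. Each iteration allocates another megabyte and appends
// to a scratch file, so CPU, memory and disk are all consumed.
func processUntilDone(remaining int) {
	scratch, err := os.CreateTemp("", "scratch-*.dat")
	if err != nil {
		log.Fatal(err)
	}
	defer scratch.Close()

	var buffers [][]byte
	for remaining != 0 { // bug: should have been `remaining > 0`
		buffers = append(buffers, make([]byte, 1<<20)) // memory grows
		scratch.WriteString("iteration state\n")       // disk grows
		remaining--
	}
	_ = buffers // retained for the life of the loop
}

func main() {
	processUntilDone(-1) // never terminates; eventually exhausts the host
}
```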

This problem was discovered quite quickly: when an entire development environment is down, engineers are unproductive, and that is a problem that gets immediate attention.

Cloud-native realities

What is interesting about this story is that if the same code were deployed into today’s cloud-native clusters, with their elastic scale, there is a good chance the problem would go undetected for a very long time, wasting precious resources.

Today, most code gets packaged as a microservice and deployed as multiple instances, each running in its own container. If a particular instance gets into an inefficient state, the other instances make up for it. If all the instances cannot keep up with their workload, the underlying cluster can detect this condition and add more instances. Additionally, individual instances may detect that they are in a bad state and request a restart.

The bottom line is that inefficient code can continue to operate because modern clusters (think Kubernetes), which run in cloud environments and on-premises, are capable of automatically scaling out services and restarting them when the underlying code makes less than optimal use of resources.
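To make the self-restart mechanism concrete, here is a sketch in Go, assuming a Kubernetes liveness probe (the endpoint path and the health check itself are illustrative) of how an instance can signal that it is in a bad state so the kubelet restarts its container:

```go
package main

import (
	"log"
	"net/http"
	"sync/atomic"
)

// healthy is flipped to 0 when the service decides it is in a bad state;
// a real check might watch queue depth, goroutine counts or memory use.
var healthy atomic.Int32

func main() {
	healthy.Store(1)

	// Kubernetes polls this endpoint via a liveness probe; repeated
	// non-200 responses cause the kubelet to restart the container.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		if healthy.Load() == 1 {
			w.WriteHeader(http.StatusOK)
			return
		}
		w.WriteHeader(http.StatusServiceUnavailable)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Note that nothing in this handler fixes the underlying inefficiency – the cluster simply swaps out the ailing instance, which is exactly how wasteful code survives unnoticed.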

This is certainly ironic, because those modern clusters are among the software industry’s biggest achievements in delivering scalable, available and reliable services.

Unfortunately, they are also friendly to not-so-green code. 

Let’s look at a few tools, procedures and best practices that can help detect ungreen or low-quality code before it even gets to a modern environment.

The pull request workflow

The pull request (PR) workflow is a process used by most development teams to manage changes to a codebase collaboratively. The most important step in the PR workflow is the code review. Source code management tools like GitHub and GitLab have capabilities for facilitating code reviews. For example, modified code is highlighted and feedback can be sent to the original engineer.

Teams that consistently produce quality code take the PR workflow seriously and make time for code reviews. Unfortunately, some teams, when they get busy, ‘rubber stamp’ PRs. In other words, the reviewer accepts the changes without really understanding the logic of the new code.

Static code analysis

MinIO’s Pijanowski: Modern software environment ‘efficiency’ functions can lead to not-so-green inefficiencies.

Static code analysis tools scan the source code of a service without executing it. The primary goal of static code analysis is to identify potential issues, vulnerabilities, bugs or violations of coding standards in the codebase. Static code analysis tools should be run before a code review because they can enforce style guidelines – making code more readable for a reviewer.

They also look for ‘dead code’ – code that is never executed – as well as duplicated code.

Finally, static code analysis can identify overly complex functions, which should be broken up into simpler (and easier to review) functions. The analysis provides insights into the quality of the code.
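As a brief illustration of what such a tool catches, the hypothetical Go function below contains an unreachable (‘dead’) statement; `go vet`, which ships with the Go toolchain, reports it from the source alone, without ever running the code:

```go
package orders

import "fmt"

// ProcessOrder validates an order ID. The log line after the early
// return can never execute; go vet's unreachable check flags it
// during a scan of the source, without executing anything.
func ProcessOrder(id int) error {
	if id <= 0 {
		return fmt.Errorf("invalid order id: %d", id)
		fmt.Println("rejected order", id) // dead code: go vet reports "unreachable code"
	}
	return nil
}
```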

Unit tests & end-to-end tests

Unit testing is a type of testing where individual functions, methods and classes are tested in isolation to ensure that they function correctly. A common metric associated with an application’s unit tests is code coverage. Teams serious about quality code should have unit test coverage well over 90%.
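For illustration, here is what a minimal Go unit test might look like; the `Add` function and the `calc` package are hypothetical, but the pattern uses Go’s standard `testing` package, and running `go test -cover` reports the coverage figure mentioned above:

```go
package calc

import "testing"

// Add is the hypothetical unit under test.
func Add(a, b int) int {
	return a + b
}

// TestAdd exercises Add in isolation, the defining trait of a unit test.
func TestAdd(t *testing.T) {
	cases := []struct {
		name       string
		a, b, want int
	}{
		{"positives", 2, 3, 5},
		{"negatives", -2, -3, -5},
		{"zero", 0, 0, 0},
	}
	for _, c := range cases {
		if got := Add(c.a, c.b); got != c.want {
			t.Errorf("%s: Add(%d, %d) = %d, want %d", c.name, c.a, c.b, got, c.want)
		}
	}
}
```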

Then we have end-to-end (E2E) tests, which evaluate the entire flow of an application from start to finish. Unlike unit tests, which focus on testing individual components in isolation, E2E tests examine the interactions and behaviour of the entire application, including all integrated components, subsystems and external dependencies.
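A minimal E2E sketch in Go might drive a deployed instance over the network rather than a function in isolation; here `BASE_URL` is an assumed environment variable pointing at the environment under test and `/health` an assumed endpoint:

```go
package e2e

import (
	"net/http"
	"os"
	"testing"
)

// TestHealthEndpoint calls a running deployment over the network,
// exercising the full stack rather than a single component.
func TestHealthEndpoint(t *testing.T) {
	base := os.Getenv("BASE_URL")
	if base == "" {
		t.Skip("BASE_URL not set; skipping end-to-end test")
	}
	resp, err := http.Get(base + "/health")
	if err != nil {
		t.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		t.Fatalf("expected 200 OK, got %d", resp.StatusCode)
	}
}
```

Skipping when `BASE_URL` is unset keeps the test safe to run where no deployment exists.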

Both unit tests and E2E tests can be incorporated into an organisation’s CI/CD pipeline so that all tests run automatically prior to deployments and PR merges. If properly monitored, these tests can detect inefficient use of resources.

Life in modern environments

Modern software environments, with their elastic scale and access to near-infinite resources, can cover up problematic code that is using resources inefficiently.

Fortunately, tools and techniques like the PR workflow, static code analysis, unit tests and E2E tests can help detect ungreen code before it even reaches an environment.
