This is a guest post for the Computer Weekly Developer Network written by Yves Junqueira in his position as CEO at YourBase – a software testing acceleration platform that automates the test selection process for large, complex codebases.
Junqueira writes as follows…
Composable IT, as I understand it, is about breaking the infrastructure stack down into simpler, self-contained components that are easier to replace. When an organisation adopts a mindset of composability, then whenever it sets a goal to change something, it can simply swap pieces of the stack for components that deliver the intended outcome: faster, better, cheaper or more environmentally friendly. In short, composability has substantial benefits.
Composable software delivery
Organisations want to deliver software quickly so they can serve the fast-changing needs of their customers. But organisations with monolithic software development stacks, which include build tools, testing infrastructure, CI and deployment systems, will have the most trouble adapting to this new world of on-demand software delivery.
One reason is that it is harder to speed up software development when teams already have an established codebase. Every successful software project goes through growing pains: the team's productivity drops as more and more code is added to the project, eventually leaving a hard-to-change monolithic system.
The second reason is that software development stacks built in-house can be the hardest to change. A build, test and CI stack assembled by the company's own talent, integrating various open-source components (e.g., Jenkins, Docker, pytest, JUnit), often becomes an inflexible monolith in itself. It achieved its original goal of delivering software, but changing the stack is very difficult because it was not built with APIs or composable interfaces. So now that the world has changed and customers want software delivered on-demand, the development team is stuck with a hard-to-change build and test stack and the entire organisation suffers as a result.
Accelerating software delivery
As with composable IT in general, organisations can start breaking down monolithic development stacks: first by understanding their dependencies and finding out which pieces are preventing them from achieving business goals, and second by replacing those pieces.
One significant challenge here is the complexity of software systems. It can be extremely difficult to find the seams that can be used to break the module, application or tool into smaller pieces, because there are many different moving pieces and it is hard to find the smallest isolated unit that can be separated as an individual component.
Automation can be used to identify the real inputs and outputs of each component and how components are connected. We call these dependency graphs. Function call tracing can be used to create a dependency graph to indicate which user-visible feature ends up using each part of the code.
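As a toy illustration (every function name here is hypothetical), Python's built-in trace hook can record which functions call which, yielding a small dependency graph of the kind described above:

```python
import sys
from collections import defaultdict

call_graph = defaultdict(set)  # caller name -> set of callee names
stack = []                     # names of frames currently on the call stack

def tracer(frame, event, arg):
    # 'call' fires when a new Python frame is entered; record the edge
    # from the current top of our stack to the new function.
    if event == "call":
        name = frame.f_code.co_name
        if stack:
            call_graph[stack[-1]].add(name)
        stack.append(name)
    elif event == "return":
        if stack:
            stack.pop()
    return tracer  # keep tracing inside this frame too

# Hypothetical "feature" code we want to map.
def parse(data):
    return clean(data)

def clean(data):
    return data.strip()

sys.settrace(tracer)
parse("  hello ")
sys.settrace(None)

print(sorted(call_graph["parse"]))  # which helpers 'parse' depends on
```

Real tracing tools work across processes and languages, but the principle is the same: observe execution, record edges, and the dependency graph falls out.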
API tracing can identify the different elements in a service-oriented architecture. Network monitoring can be used to identify which pieces of the infrastructure are calling into an old and poorly-maintained server host. A dependency graph can identify all callers of all servers. It can map how everything fits together.
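The same idea can be sketched at the service level. Assuming hypothetical service and host names, a handful of (caller, callee) trace records, as an API gateway or network monitor might emit, is enough to build a graph and answer "who is calling this host?":

```python
from collections import defaultdict

# Hypothetical trace records: (caller service, callee host).
traces = [
    ("web-frontend", "orders-api"),
    ("orders-api", "legacy-db-01"),
    ("billing-api", "legacy-db-01"),
    ("web-frontend", "billing-api"),
]

graph = defaultdict(set)
for caller, callee in traces:
    graph[caller].add(callee)

def callers_of(host):
    """All services observed calling into the given host."""
    return sorted(c for c, callees in graph.items() if host in callees)

# Find every service still depending on an old, poorly-maintained host.
print(callers_of("legacy-db-01"))
```

With the full graph in hand, the team knows exactly which callers must be migrated before the old host can be retired.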
The testing conundrum
Our customers all faced the same challenge: evolving their existing software development stacks towards on-demand software delivery. Our approach so far has been to help them understand and optimise one specific piece of the software delivery lifecycle: software testing.
Software testing has been stuck with inefficient, slow, hard-to-use tools. We’ve created technology to automatically decompose the team’s software testing process and help them understand their current system. The dependency graph created by the tool can map all parts of the development stack and identify what is slow, inefficient or unreliable. Practitioners can replace those components with faster, cheaper, better systems.
For example, by replacing just the software frameworks used to run tests, our customers have dramatically improved their software testing times. They would not have been able to make such a surgical optimisation if they did not have a complete map of their software development process and components, including their inputs, outputs and time spent in each piece.
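A simplified sketch (not YourBase's actual implementation; the test and file names are invented) of how a test-to-code dependency map enables that kind of surgical optimisation: only tests whose dependencies intersect the change set need to run.

```python
# Dependency graph mapping each test to the source files it exercises,
# as might be produced by tracing a full test run.
test_deps = {
    "test_checkout": {"cart.py", "payments.py"},
    "test_login": {"auth.py"},
    "test_search": {"search.py", "index.py"},
}

def affected_tests(changed_files):
    """Select only the tests whose dependencies overlap the change set."""
    return sorted(t for t, deps in test_deps.items()
                  if deps & set(changed_files))

# A change to payments.py re-runs one test instead of the whole suite.
print(affected_tests({"payments.py"}))
```

The saving compounds with suite size: the larger the codebase, the smaller the fraction of tests any single change actually touches.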
Decomposing the delivery process
Teams can follow this approach for everything in IT. They can use specialised tools to understand a monolithic piece deeply enough to decompose it into self-contained parts. It is a tangible, viable path towards making existing infrastructure more composable, everywhere in the IT stack.
Scanning tools can identify individual services provided by a server. Firewall rules can identify code that is making unexpected calls to a database. Kernel tracing using eBPF can identify processes that are writing to a sensitive file.
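As a minimal, hypothetical sketch of the first of these techniques: probing a host's TCP ports to find listening services. Real scanners do far more, such as fingerprinting the protocol behind each port, but the core check is just a connection attempt.

```python
import socket

def open_ports(host, ports, timeout=0.5):
    """Return the subset of ports that accept a TCP connection."""
    found = []
    for port in ports:
        with socket.socket() as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connected
                found.append(port)
    return found

# Demo against a throwaway local listener on an OS-assigned port.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
result = open_ports("127.0.0.1", [port])
listener.close()
print(result)  # the listener's port is reported open
```

Run continuously, even a simple probe like this keeps the inventory of live services from drifting out of date.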
These approaches aren’t new, but we encourage practitioners to use them continuously to build living maps of their IT stack and reduce friction for future analysis.
By making it easier for teams to study, analyse and ultimately replace underperforming components, an organisation can achieve its goals of making IT components faster, better, cheaper or more environmentally friendly.