Unlike digital-first organisations, traditional businesses have a wealth of enterprise applications built up over decades, many of which continue to run core business processes.
In this series of articles we investigate how organisations are approaching the modernisation, replatforming and migration of legacy applications and related data services.
We look at the tools and technologies available encompassing aspects of change management and the use of APIs and containerisation (and more) to make legacy functionality and data available to cloud-native applications.
Hughes writes as follows…
Once upon a time, not so very long ago (perhaps as recently as the end of the last millennium), enterprise organisations looking to implement new platforms, tools and solutions tended to hold a single, settled view of the route they should take when looking for a new set of solutions.
That view, for the most part, concentrated on engineering and architecting for what we normally refer to as monolithic applications.
In terms of form and function, monolithic applications have an inherent strength that stems from the fact that all their layers are tightly coupled and bound together. In-process communication, shared database transactions and session data – things that become much harder when considering distributed systems – are relatively simple to implement. The trade-off here is that, particularly as these systems grow, release cycles become much slower (it takes longer to turn a big ship around). Overall system complexity is increased, leading to rigidity, and in many cases, fear of changing the beast.
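The simplicity of in-process communication and shared transactions can be sketched in a few lines. This is a minimal illustration with hypothetical names, not a real system: two "services" living in one process, where a call is just a function call and consistency falls out of shared state.

```python
# Sketch: in a monolith, "services" are just modules sharing one process
# and one data store. All names here are hypothetical.

class InventoryService:
    def __init__(self, stock):
        self.stock = stock

    def reserve(self, item, qty):
        # Direct access to shared state; no serialisation, no network.
        if self.stock.get(item, 0) < qty:
            return False
        self.stock[item] -= qty
        return True

class OrderService:
    def __init__(self, inventory):
        self.inventory = inventory  # a plain object reference

    def place_order(self, item, qty):
        # One in-process unit of work: a failed reserve simply raises,
        # and nothing partial has been sent over a wire.
        if not self.inventory.reserve(item, qty):
            raise RuntimeError("out of stock")
        return {"item": item, "qty": qty, "status": "placed"}

inventory = InventoryService({"widget": 5})
orders = OrderService(inventory)
result = orders.place_order("widget", 2)  # an ordinary function call
```

In a distributed version, that one function call becomes a network request, with all the failure modes and consistency questions that follow.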
Moving to microservices
But times have moved on… and while monolithic applications still have their place in the enterprise (some legacy software exists because, fundamentally, it still works), we also have the opportunity to think about the benefits of building applications using more loosely coupled systems. While early service-oriented architecture approaches failed to live up to their promises, a fast-growing evolution has taken root, namely the combination of microservices and containers.
This dynamic duo offers an improved level of modularity; it is intrinsic to their nature. This, in turn, offers a plethora of other benefits. Teams become more autonomous when managing the evolution of their products. It creates technological independence, since each microservice can be implemented using a different stack. It improves flexibility, allowing independent evolution and deployment of different pieces of an application. Fear of breaking the monolith is no more, and teams can organise in a way that aligns more closely with how the business operates. Finally, teams can think in terms of products instead of projects.
But it’s important to realise that there is no such thing as a free lunch and that distributed systems come with a price. An organisation looking to adopt containers and microservices may not have the team skill set in place, so training or hiring is required. Teams are now deploying to production at an ever faster pace: can the stack itself, and the business, cope with this new dynamism? When teams don’t all use the same tech stack, the incentive is to experiment and use the right tool for the job; that’s great, but any level of fragmentation needs management.
These aren’t the only challenges on the road to IT modernisation through microservices and containers. Other factors come into play that are inherent to distributed systems – building for fault tolerance, handling communication errors between individual components of the stack, service consistency, debugging… the list goes on. A holistic, platform-level solution needs to be brought to bear so that these live-wire elements can be corralled, connected and coordinated.
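To make the fault-tolerance point concrete, here is one common pattern in miniature: retrying a flaky inter-service call with exponential backoff. This is a hedged sketch with hypothetical names, not a recommendation of any particular library.

```python
# Sketch of one fault-tolerance pattern: retry a failing downstream
# call with exponential backoff. Names are hypothetical.
import time

def call_with_retry(fn, attempts=3, base_delay=0.01):
    """Call fn(); on ConnectionError, wait and retry, doubling the delay."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # give up after the final attempt
            time.sleep(base_delay * (2 ** attempt))

# Simulate a downstream service that fails twice, then recovers.
calls = {"n": 0}
def flaky_service():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("temporary network fault")
    return "ok"

response = call_with_retry(flaky_service)
```

In production this sits alongside timeouts, circuit breakers and idempotency guarantees; retrying alone is not enough, which is precisely why a platform-level answer is attractive.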
A palliative alternative
A single stack solution based upon low-code efficiencies can provide immediate consistency in how critical areas like CI/CD, security, debugging, monitoring and logging are approached.
Further, the more advanced low-code development platforms provide mechanisms that allow teams to choose the degree of coupling within their applications. The visual nature of these platforms levels the playing field, so that practically any IT team can be productive right away. The complexities of microservices and containers are abstracted away, and developers can instead focus on building what the business needs.
This is not to say that a solid understanding of architecture is no longer important. At OutSystems, we recommend that teams breaking down monoliths or building large mission-critical applications adopt a domain-driven design approach. Simply put, this means using a tightly coupled approach for functionality within a specific domain. Then, by exposing domain APIs, other teams working in different domains can interact using a loosely coupled approach.
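The domain-API idea above can be sketched as follows. This is an illustrative example with invented names: inside a "billing" domain the pieces are tightly coupled to each other, while other domains see only a small public API.

```python
# Sketch of a domain API: internals are tightly coupled within the
# domain; only BillingAPI is visible outside it. Names are hypothetical.

class _TaxCalculator:            # internal to the billing domain
    RATE = 0.2
    def tax(self, amount):
        return round(amount * self.RATE, 2)

class _InvoiceStore:             # internal: freely coupled to its siblings
    def __init__(self):
        self.invoices = []
    def add(self, invoice):
        self.invoices.append(invoice)

class BillingAPI:
    """The only surface other domains are meant to call."""
    def __init__(self):
        self._tax = _TaxCalculator()
        self._store = _InvoiceStore()

    def create_invoice(self, customer, amount):
        total = amount + self._tax.tax(amount)
        invoice = {"customer": customer, "total": total}
        self._store.add(invoice)
        return invoice

billing = BillingAPI()
invoice = billing.create_invoice("acme", 100.0)
```

Refactoring `_TaxCalculator` or `_InvoiceStore` never ripples outside the domain, because other teams depend only on the `BillingAPI` contract.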
When considering service granularity, be sure to look for low-code platforms that track and understand the relationships between all these services. Without effective impact analysis you will be faced with what’s affectionately known as the microservices Death Star.
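At its core, the impact analysis described above is a graph traversal: given a map of which service calls which, find everything affected (directly or transitively) by changing one service. This is a minimal sketch with a hypothetical call graph, not any platform's actual implementation.

```python
# Sketch of impact analysis over a service dependency graph.
# callers[x] lists the services that call x. The graph is hypothetical.

def impacted_by(changed, callers):
    """Return every service that transitively depends on `changed`."""
    affected, stack = set(), [changed]
    while stack:
        svc = stack.pop()
        for caller in callers.get(svc, []):
            if caller not in affected:
                affected.add(caller)
                stack.append(caller)
    return affected

callers = {
    "pricing":  ["checkout", "search"],
    "checkout": ["web-frontend"],
    "search":   ["web-frontend"],
}
blast_radius = sorted(impacted_by("pricing", callers))
# changing "pricing" affects checkout, search and web-frontend
```

Without this kind of automated tracking, every change means manually reasoning about the blast radius – which is how the Death Star takes shape.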
The way forward
Low-code single platform computing represents a key channel on the journey to application modernisation. With enterprise-grade robustness delivered across the stack, developers can take advantage of proven components and bring them to bear inside live operational systems faster than ever.
Yes, the monoliths will still be around for a while, but a low-code approach ultimately provides a more efficient route to the core business software that enterprise organisations will need in the immediate future. Those who simply relegate low-code platforms to building toy apps are missing the bigger opportunity.