This is a guest post for the Computer Weekly Developer Network blog ‘penned’ by Olivier Bonsignour in his role as executive vice president of product development at Cast — the company is a software measurement and analysis provider focusing on source code parsing, debugging, impact analysis, real-time map building and evolution management for client/server SQL-based applications.
The shift to digital
While the shift towards digital continues, the main struggle for IT-intensive organisations as they push toward transformation is the layers of software complexity amassed over years of doing business. Particularly in industries like insurance, healthcare and financial services, core business units run on technologies that are as old as the companies themselves. Enter microservices.
As an approach, microservices has become more widely adopted in recent years because it enables large IT organisations to add new features quickly. This is especially true for front-end apps, where digitalisation is actually occurring, while leveraging decades-old backend applications that would take months to rewrite. Breaking applications down into individual components – or “bricks” – gives dev teams the ability to swap out bricks as needed with minimal disruption.
Sounds good, right?
What many teams fail to consider is that modernised bricks on the front end do not equate to a sound, secure application structure overall. For example, if the new microservices-based front end isn’t properly leveraging the backend and is triggering tens of CICS transactions per minute, you can bet that the hardware bill at the end of the month will jump sky-high.
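The mismatch above can be made concrete with a small sketch. This is a hypothetical illustration (all names – `FakeBackend`, `fetch_account`, `fetch_accounts_batch` – are invented for the example): a naive front end issues one backend transaction per account, while a batched version issues one for the whole lot, and the difference shows up directly on the hardware bill.

```python
# Hypothetical sketch of the N+1 call problem described above.
# All names here are illustrative, not from any real system.

class FakeBackend:
    """Stand-in backend that counts how many transactions it serves."""
    def __init__(self):
        self.transactions = 0

    def fetch_account(self, account_id):
        self.transactions += 1  # one (billable) transaction per call
        return {"id": account_id}

    def fetch_accounts_batch(self, account_ids):
        self.transactions += 1  # one transaction for the whole batch
        return [{"id": i} for i in account_ids]


def render_dashboard_naive(account_ids, backend):
    # Triggers a separate backend transaction for every account.
    return [backend.fetch_account(i) for i in account_ids]


def render_dashboard_batched(account_ids, backend):
    # Amortises the cost across all accounts in a single request.
    return backend.fetch_accounts_batch(account_ids)
```

With a thousand accounts on the dashboard, the naive version fires a thousand backend transactions where the batched one fires a single call – the front-end “brick” is fine in isolation, but the way it leans on the backend is not.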
Put another way: the bricks can be perfectly formed and their edges straight, but if the mortar between them is faulty, or the bricks are laid out of alignment, the resulting wall won’t be sound and solid. The same is true for applications.
It’s nice to talk about what microservices has the potential to deliver, but at the end of the day, many beautifully packaged components do not, on their own, deliver on the “fast and easy” promise of microservices.
It is essential to consider the value of microservices against the introduction of software complexity and risk to your IT organisation.
Organisations considering large-scale projects like cloud migration or digital transformation are looking to microservices to help reduce risk and ensure that any software changes made don’t completely take down operations.
However, as mentioned above, IT complexity presents significant risk to the business when large modernisation efforts are underway, because there is little visibility into the inner workings of the systems being changed.
With microservices, you’re breaking applications down into components – or bricks in this case – and developing those components independently. Small, nimble teams work on each component, making it perfect, and then each piece is combined to make the whole.
The danger with this model is that no developer or architect takes whole-system functionality into account. They are only thinking about the individual bricks themselves, not how the bricks join together to create the “wall”.
Have you ever noticed on Facebook that your photos sometimes go missing, or your events malfunction? This is likely due to a flaw in a microservice.
With microservices there is no behind-the-scenes calculation or verification of the data call. The microservices model assumes each component is fully functioning, so the system blindly relies on the number or value a component returns.
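One way to counter this blind trust is to validate data at the service boundary rather than assume it is well-formed. The sketch below is hypothetical (the `balance` field is invented for illustration), showing a consumer that fails fast on malformed input instead of propagating it downstream:

```python
# Hypothetical sketch: validating a response at the service boundary
# rather than blindly trusting it. The 'balance' field is illustrative.

def parse_balance(payload):
    # Fail fast on malformed data instead of passing it downstream.
    if "balance" not in payload:
        raise ValueError("missing 'balance' field")
    value = payload["balance"]
    # Reject bools explicitly: in Python, bool is a subclass of int.
    if isinstance(value, bool) or not isinstance(value, (int, float)):
        raise ValueError("'balance' must be numeric")
    return float(value)
```

A check like this turns a silent downstream corruption into a loud, local failure that the owning team can catch in testing.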
Because you are building smaller pieces that are supposed to be independent, the assumption is that you also reduce the risk of integrating them – but this is not true, particularly if the integration happens through the API alone. By definition, an API contract should not change between builds. Yet when tapping into an API, it is possible to accidentally trigger source code in ways the developer is unaware of, and your microservice then becomes coupled to that underlying function because of the way you’re using the API.
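A common mitigation here is a consumer-side contract check: each consumer pins down exactly the fields of the API response it relies on, so a backend build that drops or renames one is caught before release rather than in production. The sketch below is hypothetical (field names are invented for illustration):

```python
# Hypothetical sketch of a consumer-side contract check.
# The consumer declares the fields it depends on; extra fields are
# tolerated, missing expected fields are not. Names are illustrative.

EXPECTED_FIELDS = {"id", "status", "amount"}

def satisfies_contract(sample_response):
    # True only if every field this consumer relies on is present.
    return EXPECTED_FIELDS.issubset(sample_response.keys())
```

Run against a sample response from each new backend build, a check like this makes the implicit coupling between consumer and API explicit and testable.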
If you’re Facebook and just handling people’s photos or chats, it might be an issue that can be fixed quickly. But if you’re a global 100 bank or financial institution, integration issues between components and APIs can spell disaster for your customers and for your business. Don’t let microservices give you a false sense of security.
Microservices & DevOps
Microservices is a close friend of DevOps. It would be impossible to keep up with the fast sprints of DevOps without smaller, componentised builds. Integrating microservices with DevOps also reduces the need to retest entire applications, saving significant time and cost.
The trick to marrying speed with low-risk development is to ensure that the microservices are secure both independently and as a whole. If you are quickly introducing many small errors into your applications, vulnerabilities will creep in that put your organisation at risk. Testing is typically done at the unit level, but as applications are built from microservices, it becomes increasingly difficult to test the whole system. Replicating all the runtime components is prohibitively expensive, causing developers to resort to service virtualisation, or simply to issue canary releases. In all cases, with DevOps the testing is being pushed onto the user. Again, this is acceptable in an application like Facebook. Less so in a critical application at a commercial or institutional bank.
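Service virtualisation, as mentioned above, can be sketched in a few lines: a lightweight stub stands in for a downstream service so the calling service’s logic can be exercised without replicating the full runtime environment. This is a hypothetical illustration (`PaymentServiceStub`, `authorise` and `checkout` are invented names):

```python
# Hypothetical sketch of service virtualisation: a stub stands in for
# a downstream payment service during testing. Names are illustrative.

class PaymentServiceStub:
    """Returns canned responses mimicking the real payment service."""
    def authorise(self, amount):
        if amount <= 0:
            return {"status": "rejected"}
        return {"status": "approved", "amount": amount}


def checkout(amount, payment_service):
    # The calling service's logic under test, wired to the stub
    # instead of the real downstream dependency.
    result = payment_service.authorise(amount)
    return result["status"] == "approved"
```

The trade-off is exactly the one described above: the stub lets you test cheaply and quickly, but only verifies behaviour against your assumptions about the downstream service, not against the service itself.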
This makes it all the more important to address security and risk at a structural level – how sturdy is the wall – before releasing the functionality to the user. Structural software analysis can look across services to see whether the whole transaction taken together may experience unexpected flaws or security holes.
Microservices on its own is not foolproof. When paired with something like system-level analysis, microservices can make transformation much more tangible by illuminating the interconnectivity within each individual component, and then across all the components together within the system.
Not only can microservices improve scalability, but big gains can also be felt in simplicity and cost. Each microservice can be thoroughly tested independently, without the need for a complex testing environment, which removes some complexity from the process. Additionally, because maintaining each microservice is faster and easier, thanks to the automation of system operations, less support and fewer resources are needed, which in turn reduces cost and boosts efficiency.
The ability to replace, add, amend and re-deploy microservices can prove a life-saver for organisations trying to fix large-scale problems, especially since these services can be deployed without compromising the entire application.
Organisations that use a microservices approach, alongside system-level analysis, can make their IT systems much more nimble and responsive, and ensure they exploit the latest technology development trends. With system failure on the line, it’s time to not only focus on the integrity of software bricks, but on the soundness of the walls we build with them.