
Mitigating the risks of modern application development

Organisations need to have visibility over their software supply chain, secure and monitor interfaces to legacy systems and adopt zero trust to mitigate the risks of modern application development

The modern economy means that new business ideas must be expressed in some kind of digital form to realise success. This digital transformation has led to a huge growth in custom development within organisations.

The complexity of modern systems, along with the need for fast time-to-market, means that in most cases software development does not mean writing programs from scratch. Instead, applications are assembled from a wide range of containers: modular components, libraries and services sourced from many places, including shared code from the internet.

Ideally, these containers are stored in an organisation’s secure repository and subjected to scans for known vulnerabilities before being incorporated into applications, but often a significant question remains: “Do you know the provenance of these components?” In other words: Who created and maintains them? What are their dependencies? What are the support implications if something goes wrong somewhere along the software supply chain? How will you respond to a zero-day event if a component’s upstream source is compromised or accidentally broken?
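The provenance questions above can be made concrete as an admission check: before a container enters the secure repository, its component inventory is compared against known advisories. The sketch below is a deliberately minimal illustration of that gate, assuming a toy software bill of materials (SBOM) as a list of dictionaries; the component names, versions and advisory entries are hypothetical, not real CVE data.

```python
# Hypothetical illustration: cross-checking a minimal component inventory
# (a toy SBOM) against known-vulnerable versions before a container is
# admitted to the secure repository. Entries are illustrative only.

KNOWN_VULNERABLE = {
    ("openssl", "1.1.1a"),     # made-up advisory entries for the sketch
    ("log4j-core", "2.14.1"),
}

def audit_components(sbom):
    """Return the components whose (name, version) match a known advisory."""
    return [c for c in sbom if (c["name"], c["version"]) in KNOWN_VULNERABLE]

sbom = [
    {"name": "openssl", "version": "1.1.1a", "supplier": "example.org"},
    {"name": "zlib", "version": "1.2.13", "supplier": "example.org"},
]

for component in audit_components(sbom):
    print(f"BLOCK: {component['name']} {component['version']}")
```

In practice this role is filled by dedicated scanners fed by live vulnerability databases; the point of the sketch is that the check is only as good as the provenance data behind the inventory.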

Without good answers to these questions, significant risk can be introduced into an organisation’s environment, resulting in downtime, security breaches, and loss of reputation.

A starting question to determine how well an organisation understands the foundations it relies on is: “What is the base container image of this component?” It’s very common for inspection of a container to stop at the main service or library it supplies, but the foundation of each container is an operating system (OS) of some kind – currently almost always Linux – so a lot of additional foundational components may be included under the covers. This is where questions of provenance become important, as different OS versions have very different levels of security, support and reliability.
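Answering the base-image question can start with something as simple as reading the FROM lines of a Dockerfile. The sketch below shows this, assuming a multi-stage build where the last FROM defines the image that actually ships; the example image references (a Go toolchain image and a SUSE BCI base image) are illustrative.

```python
# A minimal sketch of answering "what is the base container image?" by
# reading the FROM instructions of a Dockerfile. Multi-stage builds have
# several FROM lines; the last one defines the image that actually ships.

def base_images(dockerfile_text):
    """Return the images named in FROM instructions, in order."""
    images = []
    for line in dockerfile_text.splitlines():
        line = line.strip()
        if line.upper().startswith("FROM "):
            # FROM <image> [AS <stage>] -- keep only the image reference
            images.append(line.split()[1])
    return images

dockerfile = """\
FROM golang:1.21 AS build
COPY . /src
RUN go build -o /app /src

FROM registry.suse.com/bci/bci-base:15.5
COPY --from=build /app /usr/local/bin/app
"""

print(base_images(dockerfile))      # every stage's base
print(base_images(dockerfile)[-1])  # the base of the shipped image
```

Real image layers can also be inspected directly with registry tooling, but even this textual check surfaces the foundational OS that the rest of the article is concerned with.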

For example, SUSE’s Base Container Image (BCI) inherits the software supply chain security and certifications of SUSE Linux Enterprise, with the level of reliability and security that is expected when running mission-critical applications.

That means it’s the only BCI based on a major OS that has a current Common Criteria EAL4+ security certification, and was the first OS to attain and exceed the top level of assurance for Supply-chain Levels for Software Artifacts (SLSA). The BCI also inherits a range of other certifications, such as FIPS (Federal Information Processing Standards) compliance.

Perhaps more importantly, SUSE’s Linux distribution is built with enterprise support top of mind – meaning that each component has a known provenance and a well-understood path for support if something goes wrong, or if something needs to be proactively fixed. SUSE’s more than 30 years with open-source software have given it deep experience in providing stable, reliable and supportable foundations for development.

Once an organisation has a well-understood foundation for their development components and undertakes best practices to securely store and scan container images for known vulnerabilities, the dynamic ecosystem and modular approach to development can still lead to unexpected issues. In particular, the rapidly deployed accumulation of infrastructure and application components can – layer by layer – create a significant attack surface.

For example, the Optus breach in 2022 was allegedly enabled by the exposure of test APIs (application programming interfaces) to a public network, and access from those APIs into production data. Among other issues, the complexity of the layers of software making up Optus’ environment made it possible to find and exploit a pathway into the company’s customer data.

On top of this concern, zero-day bugs that may be introduced anywhere in a software component’s supply chain may result in flaws that can be exploited before they’ve been identified and fixed. There’s no way to scan in advance for these vulnerabilities as there’s nothing to check against – which is why it’s so important to have real-time monitoring of applications as they operate.

Nevertheless, organisations have embraced the ability to develop applications quickly and implement ideas with a fast time to market, and there’s an increasing awareness of the need to do this securely and reliably as well. Across different markets in Asia-Pacific, we can see the difference in how this is being addressed in mature economies versus emerging economies.

For example, in mature markets such as Australia and Singapore, organisations must consider how to shift from and integrate with legacy technologies and monolithic software, even as they look to take advantage of the new and faster capabilities of cloud-native development and microservices.

This “brownfield” environment can uncover significant complexities and introduce significant risk, especially if there’s integration with systems that weren’t designed to be cloud-ready. In many cases, core data and services run in traditional mainframe or similar environments over which the organisation has a high degree of control. Accessing these services from more agile and dynamic environments means a lot of thought needs to go into securing and monitoring the interfaces.

In emerging economies without a similar level of technical debt, we see many more “greenfield” environments, and an arguably “cleaner” landscape that can be rapidly evolved. Developers in those environments can take advantage of the clean slate to implement modern development practices, yet the need for careful attention to the security and observation of the system remains, especially as software components may be sourced from many places.

In both cases, a zero-trust approach is necessary. Microservices make compartmentalising applications possible, and the interactions between microservices can be audited to ensure the application is working as expected. Blocking unexpected transactions between components (whether they are internet sources or gateways to legacy systems) means that if a problem arises, an organisation can be alerted in real-time to a possible zero-day attack.
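The zero-trust behaviour described above amounts to a default-deny policy on service-to-service traffic. A minimal sketch, assuming hypothetical service names and an in-memory alert list rather than any real product API:

```python
# A hedged sketch of zero trust between microservices: only explicitly
# allowed flows pass; anything else is denied and recorded as an alert
# that could signal a zero-day attack in progress. Service names and the
# alerting mechanism are illustrative, not a real product's API.

ALLOWED_FLOWS = {
    ("web-frontend", "order-service"),
    ("order-service", "payments-gateway"),
    ("order-service", "legacy-mainframe-bridge"),
}

alerts = []

def authorise(source, destination):
    """Default-deny: permit a call only if the flow is on the allowlist."""
    if (source, destination) in ALLOWED_FLOWS:
        return True
    alerts.append(f"DENIED: {source} -> {destination}")
    return False

authorise("web-frontend", "order-service")     # expected traffic passes
authorise("web-frontend", "payments-gateway")  # unexpected: blocked, alerted
print(alerts)
```

The design point is that the allowlist doubles as documentation of intended behaviour: any denied flow is, by definition, something the application was never meant to do, which is exactly the real-time signal the article describes.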

SUSE’s ongoing focus is on how to trust, secure and manage one’s infrastructure, and on how the new hybrid cloud landscape must be approached differently from when organisations ran their own infrastructure or had only a single service provider.

Our heritage in open source provides a unique perspective and longevity over several iterations of software infrastructure development, providing yet another level of reliability for our customers.

Peter Lees is SUSE’s head of solution architecture in Asia-Pacific
