A lot has changed in IT over the past 40 years. We’ve gone from a world dominated by mainframes, through client-server architectures and virtual machines (VMs) running inside datacentres, to today’s environments where applications run in the public cloud on infrastructure-as-code, in containers and on modern architectures such as microservices.
One thing that has been constant throughout this evolution is the need to understand what is happening inside the applications and services we provide. That used to be achievable because much of what was built was permanent, or at least existed for weeks, months or years.
But in today’s environment, how do you measure and monitor something that is fleeting, ephemeral and barely there? Something that only exists at the time of execution? What makes containers great, for example, also makes them hard to manage: many have a very short lifespan of minutes, sometimes seconds.
This creates great challenges for IT operations leaders, who struggle to migrate existing monitoring infrastructure to modern architectures and to observe how the applications and services that run on top actually perform. The result is large gaps in visibility and frustrated end users. The way we think about monitoring these systems needs to be radically redesigned.
To add to the complexity, enterprises are also expecting more from monitoring and observability tools. Frequently, the demand is not only for identifying, isolating and resolving issues, but also for optimising systems in production to maximise throughput or minimise cost.
Cloud-native vs third-party tools
As organisations increasingly adopt cloud first for the development of new applications and workloads, they are looking at using the tools native to the cloud ecosystem, rather than taking their existing monitoring and observability tools with them to the cloud.
According to Gartner, 80% of organisations will prefer cloud-native services from public cloud providers for monitoring and observability over third-party solutions by 2025, up from less than 20% in 2021.
Only a few years ago, these tools were very much a poor relation to enterprise products, focusing on basic infrastructure metrics such as CPU utilisation. Third-party monitoring solutions provided much more granular information, enabling quicker problem identification and resolution.
Originally considered an optional extra focused on infrastructure metrics, monitoring and observability tools from public cloud providers have improved in scope and variety to encompass much of modern application performance management. Today, all major public cloud suppliers have solutions for not just metric monitoring, but also logs, events and traces. Many also have offerings for digital experience monitoring and are expanding into new technologies.
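As a rough illustration of those four signal types, the hypothetical sketch below (plain Python, no vendor SDK) shows the kind of structured records a monitoring agent might emit for a single request. The field names and record layout are illustrative assumptions, not any cloud provider’s actual schema.

```python
import json
import uuid

def emit(record: dict) -> str:
    """Serialise a telemetry record as one JSON line, as a log shipper might."""
    return json.dumps(record, sort_keys=True)

# A shared identifier lets the platform correlate all signals for one request
trace_id = uuid.uuid4().hex

# Metric: a numeric measurement sampled over time
metric = emit({"signal": "metric", "name": "http.server.duration_ms",
               "value": 42.7, "trace_id": trace_id})

# Log: a timestamped, structured message from the application
log = emit({"signal": "log", "severity": "INFO",
            "message": "order accepted", "trace_id": trace_id})

# Event: a discrete state change in the environment
event = emit({"signal": "event", "name": "deployment.completed",
              "version": "v1.4.2", "trace_id": trace_id})

# Trace span: one timed unit of work within a distributed request
span = emit({"signal": "trace", "span": "process-order",
             "duration_ms": 42.7, "trace_id": trace_id})

for line in (metric, log, event, span):
    print(line)
```

The point of the shared `trace_id` is the correlation that modern platforms sell: being able to pivot from a slow metric to the exact logs and trace spans of the affected request.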
Using the tools built into the cloud provider’s portfolio, businesses can expect to achieve cost savings as they reduce their traditional infrastructure use by rationalising the number of tools needed, while increasing operator efficiency. By centralising monitoring data, visibility across multiple applications is improved.
Cloud suppliers are also incorporating open standards, such as OpenTelemetry, into their products, increasing pressure on third-party vendors to differentiate their offering based on analysis rather than generation of telemetry.
Currently, though, cloud service provider tools focus only on their own platform, limiting their ability to extend to other clouds. To monitor across multiple environments, third-party solutions retain an advantage and are still the primary way to ensure visibility of key applications across architectures.
How to get started
- For each cloud supplier you use, evaluate the monitoring and observability capabilities available within its ecosystem.
- When building new applications in the cloud, evaluate cloud service provider monitoring tools alongside third-party suppliers for your monitoring and observability needs.
- For multicloud environments or hybrid applications, consider using cloud-native tools that feed into a suitable third-party platform, such as a software-as-a-service (SaaS) monitoring supplier or an artificial intelligence for IT operations (AIOps) platform, for cross-platform visibility.
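A minimal sketch of that multicloud feed pattern: normalise each cloud’s native metric payload into a common schema before forwarding it to a central platform. The payload shapes, field mappings and the `forward` stand-in below are all hypothetical assumptions for illustration, not any vendor’s API.

```python
# Hypothetical per-cloud metric payloads, roughly as native monitoring
# APIs might return them (field names are assumptions, not real schemas)
aws_sample = {"MetricName": "CPUUtilization", "Value": 61.0, "Unit": "Percent"}
azure_sample = {"name": "Percentage CPU", "average": 58.5}

def normalise_aws(m: dict) -> dict:
    """Map an AWS-style payload onto a common, vendor-neutral schema."""
    return {"metric": "cpu.utilisation", "value": m["Value"], "source": "aws"}

def normalise_azure(m: dict) -> dict:
    """Map an Azure-style payload onto the same common schema."""
    return {"metric": "cpu.utilisation", "value": m["average"], "source": "azure"}

def forward(records: list) -> list:
    """Stand-in for shipping records to a SaaS monitoring or AIOps platform.

    In practice this would POST to the third-party platform's ingest
    endpoint; here it simply returns the unified records.
    """
    return records

unified = forward([normalise_aws(aws_sample), normalise_azure(azure_sample)])
```

Once every source speaks the same schema, the third-party platform can correlate and alert across clouds without caring where each metric originated, which is the cross-platform visibility the advice above is aiming for.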
Padraig Byrne is senior director analyst at Gartner
Read more about observability in APAC
- Tokopedia has consolidated its observability capabilities on a single cloud-based platform, enabling it to improve customer service and identify infrastructure issues.
- Organisations in Southeast Asia are grappling with a hodgepodge of observability tools and have some way to go before they can achieve full visibility over their technology stacks.
- Splunk has been doubling down on its investments in the cloud in recent years and pushing more capabilities into its observability platform.
- With software moving to cloud architectures and no longer monolithic, organisations will need to establish intelligent observability so they can make better decisions.