API series - Cisco ThousandEyes: Go deep, granular & adaptive on API testing

This is a guest post for the Computer Weekly Developer Network written by Joe Dougherty in his role as product solutions architect at Cisco ThousandEyes

ThousandEyes monitors network infrastructure, troubleshoots application delivery and maps everything from ‘switch to SaaS’ – and every layer in between – to help deliver digital experiences.

Dougherty writes as follows…

The rise of API-first application development, along with the proliferation of public, third-party APIs, has empowered many developers to build richer, more connected digital experiences. 

But there’s a challenge: that richness comes at the cost of increased application complexity. With APIs residing in multiple, separate IT environments, including external environments such as the cloud and the Internet, every application experience powered by those APIs is subject to disruption whenever there is a connectivity glitch, starting at the network level.

So, how is developing and delivering reliable application experiences even possible under such circumstances?

Traditionally, DevOps and NetOps have operated as entirely separate teams with separate responsibilities. But the seismic shift to APIs, cloud architectures and distributed applications has rendered that bifurcated approach to application experience toothless.

When it comes to optimising digital experiences across every level and layer of the digital supply chain, APIs play a critical role in the delivery infrastructure. That role is ushering in new approaches to monitoring technology, as well as a new mindset in which the lines between DevOps and NetOps are increasingly blurred.

No ‘steady state’ in the API cloud

Maintaining the reliability of modular applications has always been hard but, with the explosion of cloud APIs, backend integrations and third-party services, it is more challenging than ever. While it makes applications more intelligent, this increase in interconnectivity brings a new level of reliance on the Internet, where the service delivery paths between apps, datacentres and end users fluctuate constantly.

Take a retailer, for example. In such a competitive industry, the customer’s application experience needs to perform flawlessly at all times; a delay of as little as two seconds can drive bounce rates up by 103%. Real-time inventory updates, multiple payment options and instant shipping notifications are all built on an ever-growing library of different APIs that hold the experience together. So, to deliver reliable digital experiences, the integrations between these application functions have to be measured and tested to ensure that critical workflows and backend interactions with external API endpoints are working as expected.
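To make that concrete, the sketch below shows what a chained synthetic check across a hypothetical retail workflow might look like. The endpoints, payloads and latency budget are illustrative placeholders only – not any particular retailer’s systems or a specific vendor’s product.

```python
"""A minimal sketch of a chained synthetic check for a hypothetical retail workflow.
All endpoints and thresholds below are illustrative placeholders."""
import time
import requests

# Hypothetical external API dependencies behind a single user journey.
WORKFLOW_STEPS = [
    ("inventory", "https://api.example-retailer.com/v1/inventory/sku-123"),
    ("payment",   "https://api.example-payments.com/v1/health"),
    ("shipping",  "https://api.example-shipping.com/v1/rates?dest=GB"),
]

LATENCY_BUDGET_S = 2.0  # the roughly two-second threshold cited above


def run_workflow_check() -> list[dict]:
    """Call each backend dependency in order, recording status and latency."""
    results = []
    for name, url in WORKFLOW_STEPS:
        start = time.perf_counter()
        try:
            resp = requests.get(url, timeout=5)
            ok = resp.status_code == 200
        except requests.RequestException:
            ok = False
        elapsed = time.perf_counter() - start
        results.append({
            "step": name,
            "ok": ok,
            "latency_s": round(elapsed, 3),
            "within_budget": elapsed <= LATENCY_BUDGET_S,
        })
    return results


if __name__ == "__main__":
    for result in run_workflow_check():
        print(result)
```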

To test both application and network environments, both DevOps and NetOps have to work together.

Ensuring application performance means ensuring API performance… and that starts with having visibility into which APIs you rely on and where they reside.


Nearly three-quarters (74%) of businesses, however, say they don’t have a full inventory of all the APIs in their systems. When one of these connected services fails, it is impossible, without end-to-end visibility, to quickly pinpoint where in the long list of internal and external environments the issue is occurring.

Expanded view, smarter workflows

So, how can the latest monitoring tools help?

Some organisations will naturally turn to browser synthetic monitoring tools. Whilst these are a powerful way to continuously test key user workflows within the application, many browser-driven user requests rely on multiple backend API interactions that are simply not visible from the user’s perspective.

Businesses must be able to test external APIs at a granular level from within the context of their core application, instead of only through a front-end interaction. In addition, they must be able to understand the impact of the underlying network transport, usually an ISP or cloud provider network.
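As a rough illustration of separating network transport from application behaviour, the sketch below times DNS resolution, TCP connect, TLS handshake and time to first byte for a single hypothetical API call. A monitoring platform would gather these per-leg measurements (and the path beyond them) for you; this only shows the principle, with a placeholder host and path.

```python
"""A rough sketch of breaking one HTTPS API call into network-level timings.
The host and path are hypothetical placeholders."""
import socket
import ssl
import time

HOST = "api.example.com"   # hypothetical API host
PATH = "/v1/status"        # hypothetical endpoint
PORT = 443


def timed_api_call() -> dict:
    timings = {}

    # DNS resolution time.
    t0 = time.perf_counter()
    ip = socket.getaddrinfo(HOST, PORT, proto=socket.IPPROTO_TCP)[0][4][0]
    timings["dns_s"] = time.perf_counter() - t0

    # TCP connect time to the resolved address.
    t1 = time.perf_counter()
    raw_sock = socket.create_connection((ip, PORT), timeout=5)
    timings["tcp_connect_s"] = time.perf_counter() - t1

    # TLS handshake time.
    t2 = time.perf_counter()
    ctx = ssl.create_default_context()
    tls_sock = ctx.wrap_socket(raw_sock, server_hostname=HOST)
    timings["tls_handshake_s"] = time.perf_counter() - t2

    # Time to first byte of the HTTP response.
    t3 = time.perf_counter()
    request = f"GET {PATH} HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
    tls_sock.sendall(request.encode("ascii"))
    first_chunk = tls_sock.recv(4096)
    timings["ttfb_s"] = time.perf_counter() - t3

    timings["status_line"] = first_chunk.split(b"\r\n", 1)[0].decode(errors="replace")
    tls_sock.close()
    return timings


if __name__ == "__main__":
    for metric, value in timed_api_call().items():
        print(metric, value)
```

Splitting the measurement this way makes it clear whether slowness comes from the network path – the ISP or cloud provider leg – or from the API itself.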

Enter adaptive API monitoring – a new approach that offers a dynamic synthetic testing framework that emulates backend application interactions with remote API endpoints. Testing APIs on the backend means that DevOps teams can validate API responses and raise an alert if the check fails, before users are affected.

This level of deeper API testing allows application owners to measure and test critical workflows and backend interactions with external API endpoints, which in turn grants them an expanded view of the individual elements of the application workflow – both within the application and across the cloud and Internet.
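Purely as an illustration of the kind of backend check described above – with a hypothetical endpoint, an assumed payload shape and a placeholder alerting hook – a minimal version might look like this:

```python
"""A minimal sketch of a backend API check: call a remote dependency, validate
the response, and raise an alert when the check fails. The endpoint, expected
fields and alerting hook are hypothetical placeholders."""
import requests

API_URL = "https://api.example-partner.com/v1/orders/health"  # hypothetical dependency


def send_alert(message: str) -> None:
    # Placeholder: in practice this would page on-call or post to a webhook.
    print(f"ALERT: {message}")


def check_api() -> bool:
    """Return True if the dependency responds correctly."""
    try:
        resp = requests.get(API_URL, timeout=3)
    except requests.RequestException as exc:
        send_alert(f"request to {API_URL} failed: {exc}")
        return False

    if resp.status_code != 200:
        send_alert(f"unexpected status {resp.status_code} from {API_URL}")
        return False

    body = resp.json()
    # Validate the payload the application actually depends on, not just "200 OK".
    if body.get("status") != "ok":
        send_alert(f"dependency reports degraded status: {body.get('status')!r}")
        return False

    return True


if __name__ == "__main__":
    check_api()
```

Run on a schedule from the locations that matter, a check like this surfaces a broken dependency before it shows up as a failed checkout or an abandoned basket.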

Bridging DevOps & NetOps

Ensuring optimised application experiences requires more than monitoring; it requires a change in mindset, too. What used to be a narrow focus on ensuring the application logic works is shifting to measuring overall application performance the way a user would experience it.

Understanding the user-perceived performance of API-centric applications means understanding how the digital supply chain traverses everything from the network, the Internet and the cloud to the server, the code and the application itself – and, because each piece of application functionality can trigger several of those round trips, every hop in between counts, again and again.

Breaking down silos is easier said than done and no single API monitoring solution will clear every hurdle between application owners and the holy grail of an always-on application experience.

Building the bridge brick by brick, though, by providing DevOps and NetOps with an API monitoring solution that lets them share a common language, will allow teams to quickly pinpoint where along the digital supply chain the application, network, or API dependency is breaking down.

With the speed and complexity of change in today’s digital economy, the importance of an organisation’s ability to understand and monitor the performance of its APIs – especially those run by third-party providers – cannot be overstated.
