This is a guest post for the Computer Weekly Developer Network written by Chuck Herrin in his capacity as CTO of Wib, a full lifecycle API security platform that helps companies secure their APIs from code to testing to production.
Herrin writes as follows…
Let’s ask: how do we ‘scale’ API management in the webscale world of cloud computing, given the expansive road ahead?
I believe we need automation, continuous discovery and as much visibility as you can get into where these interfaces are being published, how they change and how those changes impact enterprise risk. There’s no way to keep up with this manually, and many APIs are either poorly documented or not documented at all.
How do we catalogue and manage APIs on a day-to-day basis? With as much visibility as possible, viewed through multiple lenses, aggregating the population of APIs and the telemetry about them into a single repository. This kind of visibility, and the continuously updated inventory it produces, is foundational to every other aspect of API management.
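To make the idea of multiple lenses feeding one repository concrete, here is a minimal, hypothetical sketch in Python. The lens names, endpoints and merge logic are all illustrative assumptions, not Wib's implementation: the point is simply that unioning per-source inventories reveals "shadow" endpoints that one lens sees but the system of record does not.

```python
# Illustrative sketch: merging API inventories discovered through several
# "lenses" (gateway config, code scanning, live traffic) into one catalogue.
# All endpoint names and lens labels here are hypothetical.
from collections import defaultdict

def merge_inventories(*sources):
    """Union per-lens endpoint lists, recording which lens saw each one."""
    catalogue = defaultdict(set)
    for lens, endpoints in sources:
        for endpoint in endpoints:
            catalogue[endpoint].add(lens)
    return dict(catalogue)

gateway = ("gateway", ["GET /v2/accounts", "POST /v2/payments"])
code_scan = ("code-scan", ["GET /v2/accounts", "GET /v1/accounts"])
traffic = ("traffic", ["GET /v1/accounts", "POST /v2/payments"])

catalogue = merge_inventories(gateway, code_scan, traffic)

# Endpoints observed in code or traffic but absent from the gateway are
# candidate shadow or undocumented APIs worth investigating.
shadow = [ep for ep, lenses in catalogue.items() if "gateway" not in lenses]
```

Here the discrepancy itself is the output: an endpoint like `GET /v1/accounts` that only code scanning and live traffic can see is exactly the kind of undocumented interface the article warns about.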
Open standards are increasingly important and we are working with industry groups to further this objective. Standardisation leads to a better understanding of the security and threat models facing critical APIs, such as the Financial Grade API (FAPI) for open banking, and to a more robust understanding of the security vulnerabilities and defences that entities need to put in place and validate for proper function.
Without some standardisation, each API will remain its own unique population of zero-day weaknesses, making attack surface and vulnerability management ever more difficult and complex.
So how do we achieve unified monitoring of APIs from a security perspective?
It all starts with visibility and an understanding of the vulnerabilities and risks these interfaces expose to the outside world, to partners and to other internal systems.
Decommissioning old APIs
With that understanding of their actual attack surface, organisations can move quickly to shrink it by removing or decommissioning old versions and APIs that provide duplicative functionality, then hardening and remediating the now smaller attack surface.
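The decommissioning step above can be sketched in a few lines. This is a hypothetical heuristic, assuming a path-based versioning convention (`/v1/...`, `/v2/...`); real APIs version in many other ways (headers, hostnames), so treat this only as an illustration of flagging superseded versions for review.

```python
# Hypothetical sketch: flag older API versions that a newer version supersedes,
# as candidates for decommissioning. Assumes /vN/path-style versioning.
import re

def superseded(endpoints):
    """Group endpoints by their path minus version; flag all but the newest."""
    groups = {}
    for ep in endpoints:
        m = re.match(r"/v(\d+)(/.*)", ep)
        if not m:
            continue  # unversioned paths are left alone in this sketch
        version, path = int(m.group(1)), m.group(2)
        groups.setdefault(path, []).append((version, ep))
    flagged = []
    for path, versions in groups.items():
        versions.sort()
        flagged.extend(ep for _, ep in versions[:-1])  # keep only the latest
    return flagged

old = superseded(["/v1/accounts", "/v2/accounts", "/v1/payments"])
# /v1/accounts is superseded by /v2/accounts; /v1/payments has no replacement,
# so it stays off the list until a newer version exists.
```

A human still decides whether a flagged version can actually be retired, since consumers may depend on it, but automating the candidate list keeps the attack surface review continuous rather than occasional.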
As this is taking place, firms should use a solution that creates an ongoing, continuously updated baseline of API traffic – one that can discover information about the API structures, data and backend systems being exposed. Once that baseline is established, it should automatically raise alerts on deviations from the norm. As the attack surface, the population of vulnerabilities and normal traffic patterns come to be understood, automatic blocking of common API attacks and unusual traffic patterns can protect APIs and sensitive backend systems. But it all starts with complete and automated visibility – there’s no magic or silver bullet that protects assets or interfaces you can’t see.
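The baseline-and-deviation idea reduces to simple statistics. Below is a minimal sketch, assuming per-endpoint request counts collected at fixed intervals; the window size, sigma threshold and class shape are illustrative choices, not a description of any particular product.

```python
# Minimal sketch of a traffic baseline with deviation alerts. Assumes a
# feed of per-endpoint request counts per interval; thresholds are made up.
import statistics

class Baseline:
    def __init__(self, window=20, sigma=3.0):
        self.window = window  # samples retained per endpoint
        self.sigma = sigma    # alert threshold in standard deviations
        self.history = {}

    def observe(self, endpoint, count):
        """Record a count; return True if it deviates from the baseline."""
        samples = self.history.setdefault(endpoint, [])
        alert = False
        if len(samples) >= 5:  # need a few samples before judging deviation
            mean = statistics.mean(samples)
            stdev = statistics.pstdev(samples) or 1.0  # avoid zero stdev
            alert = abs(count - mean) > self.sigma * stdev
        samples.append(count)
        del samples[:-self.window]  # keep a rolling window
        return alert

b = Baseline()
# Five quiet intervals build the baseline; a sudden spike then deviates.
alerts = [b.observe("/v2/payments", c) for c in (100, 102, 98, 101, 99, 1000)]
```

Production systems baseline far more than volume (parameters, payload shapes, callers), but the principle is the same: learn normal first, then alert – and eventually block – on departures from it.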
So then, is a governance control plane needed to keep APIs where they are supposed to be doing the job they are supposed to be doing? It depends on the architecture and environment, but almost always the answer is yes. There are a lot of good options now for how to accomplish this.
Testing, testing, API
API testing is critical and should first and foremost align with business goals and risk.
Ideally, testing should occur as early in the development lifecycle as possible, but for environments with high security and compliance needs it is also important to test with simulated attacks in production.
The only way you actually know what’s exposed to the outside world is to simulate attacks from the outside world, but that should be considered the last piece of testing, not the first. In addition, for firms with high availability requirements or multiple third parties in the mix, it may be appropriate to integrate continuous synthetic testing from the outside world, such as that performed by APIMetrics.
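The outside-in check described above amounts to comparing intended exposure with observed reachability. Here is a hedged sketch with the external probe results stubbed as data; in practice those statuses would come from real requests made from outside the network, and the endpoint names are invented for illustration.

```python
# Hedged sketch: audit what an external probe can actually reach against
# what the organisation intends to expose. Probe results are stubbed here;
# a real run would issue HTTP requests from outside the perimeter.

def audit_exposure(expected_public, probe_results):
    """Return externally reachable endpoints that were not meant to be public.

    probe_results maps endpoint -> HTTP status observed by an external probe.
    Anything answering below 400 is treated as reachable in this sketch.
    """
    reachable = {ep for ep, status in probe_results.items() if status < 400}
    return sorted(reachable - set(expected_public))

results = {
    "/v2/payments": 200,     # intended to be public
    "/internal/admin": 200,  # answering externally when it should not
    "/v1/accounts": 404,     # decommissioned and correctly gone
}
unexpected = audit_exposure(["/v2/payments"], results)
```

Any entry in `unexpected` is exactly the gap between belief and reality that outside-in testing exists to find – which is why it belongs at the end of the testing chain, validating everything the earlier, shift-left stages assumed.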