Open source at its heart and essentially a web server technology, Nginx (pronounced 'engine X') is the company that would like to have its name capitalised in the media but can't, because it's not an acronym.
Branding police histrionics and so-called ‘marketing guidelines’ notwithstanding, Nginx does have some arguably interesting work going on in the microservices space.
The firm is now working to bolster its application platform technology with enhancements focused on API gateways, Kubernetes Ingress controllers and service meshes.
So let’s define those technologies first.
An API gateway is responsible for request routing, composition and protocol translation — it provides each of an application’s clients with a custom API.
According to an Nginx blog detailing the use of API gateways, “When you choose to build your application as a set of microservices, you need to decide how your application’s clients will interact with the microservices. With a monolithic application there is just one set of (typically replicated, load‑balanced) endpoints. In a microservices architecture, however, each microservice exposes a set of what are typically fine‑grained endpoints.”
An API gateway is a server that is the single entry point into the system — it encapsulates the internal system architecture and provides an API that is tailored to each client.
It should be noted that an API gateway may also take on other responsibilities, such as authentication, monitoring, load balancing, caching, request shaping and management, as well as static response handling.
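For concreteness, request routing at an API gateway can be sketched in a few lines of Nginx configuration. This is an illustrative fragment, not drawn from the Nginx announcement: the upstream names, addresses and paths are hypothetical.

```nginx
# Hypothetical backend services; names and addresses are assumptions
upstream inventory_service { server 10.0.0.11:8080; }
upstream orders_service    { server 10.0.0.12:8080; }

server {
    listen 80;
    server_name api.example.com;   # single entry point into the system

    # Route fine-grained microservice endpoints behind one gateway
    location /inventory/ { proxy_pass http://inventory_service/; }
    location /orders/    { proxy_pass http://orders_service/; }

    # A gateway-level concern: static response handling
    location /healthz { return 200 "ok\n"; }
}
```

The clients see one host and one API surface; the gateway decides which internal service each request actually reaches.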
Ingress & service meshes
Looking briefly at Kubernetes Ingress controllers, we will note that Ingress can provide load balancing, SSL termination and name-based virtual hosting. Also noted in the Nginx news above are service meshes. As defined by William Morgan, a service mesh is "a dedicated infrastructure layer for making service-to-service communication safe, fast and reliable. If you're building a cloud native application, you need a service mesh."
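To make those Ingress capabilities concrete, here is a minimal sketch of a Kubernetes Ingress resource (using the current networking.k8s.io/v1 API) that combines name-based virtual hosting with TLS termination; the host, secret and service names are hypothetical.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress          # hypothetical resource name
spec:
  tls:
  - hosts: [shop.example.com]
    secretName: shop-tls         # SSL terminated using this certificate secret
  rules:
  - host: shop.example.com       # name-based virtual hosting
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: shop-frontend  # traffic load balanced across this service's pods
            port:
              number: 80
```

An Ingress controller (such as the Nginx one the announcement concerns) watches resources like this and configures the underlying proxy accordingly.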
Nginx CEO Gus Robertson claims that his firm’s technologies span the data and control planes and are infrastructure-independent, working across bare metal, VMs, containers and any public cloud.
The firm’s application platform collapses ten disparate functions into a single tool: web server, application server, load balancer, reverse proxy, CDN, web application firewall, API gateway, ingress controller, sidecar proxy and service mesh controller.
“Nginx eliminates the operational complexity and management frameworks needed to orchestrate these technologies, accelerating microservices adoption for enterprises that lack the skills and resources to manage point solutions,” said the company, in a press statement.
As part of this news, Nginx is tabling a new control plane to manage both legacy and microservices apps. New data-plane features aim to simplify the microservices application stack — and, finally, a new app server aims to improve performance for modern microservices code.
As we get used to the notion of microservices and the 'composable' nature of more granular application architectures, we need to start understanding the internal mechanics of microservices themselves, so that we can precision-engineer their connection points and manage them effectively. With its higher-level control plane approach, Nginx is attempting to take some of that complexity away, and is (arguably) doing so admirably. But surely the smart money is still on making sure that we understand what is going on at the API gateway and service mesh coalface.