Is Azure ready to go cloud-native?

We look at the suitability of Microsoft’s Azure platform for deploying cloud-native applications

It is one thing to move to the cloud. The consensus today, though, is that for full business agility, cloud-native applications yield the best rewards. But cloud-native is associated with Linux applications and container technology. How suitable is Microsoft’s Azure platform for cloud-native deployment? 

The term “cloud-native” is imprecise. It arose from the need to distinguish the lift-and-shift migration of existing applications to cloud platforms from the re-architecting of applications for optimum business value in a cloud environment. The supplier-neutral Cloud Native Computing Foundation (CNCF) says this as part of its definition: “Cloud-native technologies empower organisations to build and run scalable applications in modern, dynamic environments such as public, private and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure and declarative APIs [application programming interfaces] exemplify this approach. These techniques enable loosely coupled systems that are resilient, manageable and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.”

Key concepts

Packed into this definition are several key concepts. Perhaps the foremost is microservices, the idea of composing applications from multiple services, each of which is small and therefore easily managed and updated. This has several advantages. One is agility, as modifying a small module is quicker and safer than modifying a large and complex body of code. Another is scalability, as each service can be monitored and scaled separately, enabling resources to be added precisely where they are needed. Third, by building fault tolerance into each service, resilient applications can be created.

But there is a trade-off, because distributed applications also introduce complexity, and the overhead of managing many component instances.

Decisions about the right architecture for an application are influenced by many factors, including how business-critical it is and the anticipated load. The good news is that modern cloud platforms allow even small applications to have a level of resilience and scalability that comes almost for free, presuming you use application services such as Microsoft’s Azure App Service or Amazon’s Elastic Beanstalk, rather than depending on virtual machines (VMs) to host your application directly.

How do you deploy a microservice? In many cases, containers are the ideal unit of deployment. They are much lighter than a VM, relatively well isolated, and amenable to automation. A container provides a predictable environment, defined by the developer and specified in code. This enables “infrastructure as code” and, with it, the immutable infrastructure referenced by the CNCF: infrastructure is immutable when you never change it in place, only replace it.

Container technology was first developed for Linux. In Windows Server 2016, Microsoft introduced Windows Server Containers. There are two types of container on Windows: standard containers, which maximise efficiency, and Hyper-V containers, which improve isolation and security at the cost of less resource sharing. Deploying a container is easy, thanks to Docker, a tool that knows how to instantiate and run a container defined in a configuration file called a Dockerfile. That said, if you have an application composed of perhaps hundreds or thousands of container instances, with requirements for resilience, scalability and efficient use of resources, then deploying those containers manually is out of the question. Rather, you need a tool to orchestrate the containers, and a discovery service to tell microservices how to find one another.
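As a sketch of how this works in practice, a Dockerfile defines the container environment in a few declarative lines, and Docker builds and runs it. The Node.js stack, image name and port here are illustrative assumptions, not taken from the article:

```shell
# Write a minimal, illustrative Dockerfile (stack, app name and port are assumptions)
cat > Dockerfile <<'EOF'
# Start from a small base image
FROM node:18-alpine
WORKDIR /app
# Copy the application code into the image and install dependencies
COPY . .
RUN npm install
# Document the port the service listens on, and define the start command
EXPOSE 8080
CMD ["node", "server.js"]
EOF

# Build an image from the Dockerfile, then run it as a container
# (requires the Docker daemon, so shown commented out):
# docker build -t myapp:1.0 .
# docker run -d -p 8080:8080 myapp:1.0
```

Because the environment is defined entirely in code, replacing a container rather than patching it in place becomes the normal workflow, which is the immutable infrastructure the CNCF describes.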

There are a number of such tools, including Kubernetes, Docker Swarm and Mesosphere DC/OS. Of these, Kubernetes, which originated at Google and is now an open source project managed by CNCF, has become the tool of choice in most, but not all, cases.  As you would expect, the major public cloud platforms have hastened to provide first-class support both for containers and for Kubernetes. Microsoft is no exception, and its Azure public cloud now offers strong support for containers and microservices. 
Microsoft has multiple solutions for cloud-native applications on Azure. One is home-grown and is called Service Fabric. The other, Azure Container Service, supported Kubernetes, Docker Swarm and Mesosphere DC/OS; with the ascent of Kubernetes, it has been superseded by the managed Azure Kubernetes Service (AKS).
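With AKS, creating a cluster is a managed operation. The following Azure CLI sketch shows the typical steps; the resource names are invented, and it assumes an Azure subscription and the az CLI:

```shell
# Create a resource group to hold the cluster (names are placeholders)
az group create --name demo-rg --location westeurope

# Create a three-node managed Kubernetes cluster
az aks create \
  --resource-group demo-rg \
  --name demo-aks \
  --node-count 3 \
  --generate-ssh-keys

# Merge the cluster's credentials into the local kubectl config
az aks get-credentials --resource-group demo-rg --name demo-aks

# Verify the agent nodes are up
kubectl get nodes
```

Azure manages the Kubernetes control plane in AKS; you provision and pay for the agent nodes.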

Another option is Pivotal Cloud Foundry, available via a partnership between Microsoft and Pivotal, and with support for Linux and Windows applications. Cloud Foundry is transitioning from its own container orchestrator, called Diego, to Kubernetes, so this is another route to microservices and Kubernetes on Azure.

A service used in common by all Azure container platforms is the Azure Container Registry (ACR). Container images in ACR can be deployed using Kubernetes, Service Fabric, Docker Swarm, DC/OS, Azure App Service and more. ACR supports the Docker Registry HTTP API V2, so it works with standard Docker command-line tools.
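A typical workflow builds an image locally and pushes it to ACR with standard Docker tooling. A sketch, with invented registry and image names:

```shell
# Create a registry (the name must be globally unique; this one is a placeholder)
az acr create --resource-group demo-rg --name demoregistry --sku Basic

# Authenticate the local Docker client against the registry
az acr login --name demoregistry

# Tag the image with the registry's login server, then push it
docker tag myapp:1.0 demoregistry.azurecr.io/myapp:1.0
docker push demoregistry.azurecr.io/myapp:1.0
```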

A key feature of ACR is Tasks, which lets you trigger automatic container image builds in response to events such as committing code to a GitHub repository, or the update of a base image on which other containers depend. Now in preview are multi-step tasks, which let you add steps such as testing and validating images before deployment.
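As a sketch of the first kind of trigger, a task can be registered so that every commit to a repository rebuilds the image. The repository URL and access token here are placeholders:

```shell
# Register a task that rebuilds the image on each commit to the repo;
# {{.Run.ID}} tags each build with the identifier of the run that produced it
az acr task create \
  --registry demoregistry \
  --name build-on-commit \
  --image myapp:{{.Run.ID}} \
  --context https://github.com/example/myapp.git \
  --file Dockerfile \
  --git-access-token <personal-access-token>
```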

ACR also now has preview support for the Open Container Initiative (OCI), a set of cross-industry standards for container images.

Both Service Fabric and AKS have a strong future. Service Fabric is deeply baked into the Azure infrastructure. It is used by Azure Active Directory, the directory service for Office 365, and by other applications including Cosmos DB, Microsoft’s multi-model NoSQL database. Equally, Kubernetes is the de facto industry standard and its use will only grow.

Microsoft describes Azure Service Fabric as a “distributed systems platform that makes it easy to package, deploy and manage scalable and reliable microservices and containers”. This includes an orchestrator (also called Service Fabric) and services for container-to-container discovery.


Now in preview is Service Fabric Mesh, a fully managed service for microservice applications. Microservice platforms use clusters on which to deploy containers. Service Fabric Mesh abstracts the cluster management so that you need only specify the resources, resource limits and availability requirements. Service Fabric Mesh supports both Windows and Linux containers. 
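At the time of writing, Mesh is driven from a preview Azure CLI extension: deployment amounts to pointing it at a template that declares the services, their resources, limits and replica counts. A sketch, with placeholder names:

```shell
# Install the preview Mesh extension for the Azure CLI
az extension add --name mesh

# Deploy an application described declaratively in a template file;
# the template (mesh_app.json here, a placeholder) lists the services,
# their container images, CPU/memory limits and replica counts
az mesh deployment create \
  --resource-group demo-rg \
  --template-file mesh_app.json
```

There is no cluster to create or manage first, which is the point of the abstraction.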

At its Ignite conference in September, Microsoft announced several improvements to Mesh, including the ability to mount Azure Files (Azure-hosted file storage) and highly resilient storage called Service Fabric Volume Disk. 

Managing Kubernetes applications

In the Kubernetes world, Helm is a package manager for Kubernetes applications. It uses Charts, a packaging format that describes the Kubernetes resources making up an application; Charts make it easier to deploy, version and roll back applications.
At Ignite, Microsoft announced support for Helm repositories, now in preview. Helm Charts can now be stored in ACR.  
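A sketch of the preview workflow, with invented chart and registry names, assuming the Helm 2 client and the az CLI:

```shell
# Scaffold and package a chart
helm create myapp-chart
helm package myapp-chart        # produces myapp-chart-0.1.0.tgz

# Add the ACR-backed chart repository and push the chart to the registry
# (az acr helm is the preview command group for this)
az acr helm repo add --name demoregistry
az acr helm push --name demoregistry myapp-chart-0.1.0.tgz

# Install the chart into the current Kubernetes cluster
helm install demoregistry/myapp-chart --name myapp
```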

The company also announced Kubernetes support on Azure Stack, Microsoft’s packaging of a subset of Azure services for on-premises installation.

So, if you have decided on Azure, should you deploy your cloud-native application to Service Fabric or Kubernetes? 
Sometimes this decision is made for you by your available skills. Kubernetes is an industry standard, while Service Fabric is unique to Azure, so Kubernetes is more likely to be well understood.

The application model is another factor. Kubernetes may support Windows containers, but it is Linux-oriented. If an application is written for Java and Linux, say, then Kubernetes will be a better fit than if it is built with .Net Framework and Windows. 
Service Fabric is, in a sense, Azure’s native platform. It is a natural home for Windows Server containers, although it also supports Linux. If your application is written in .Net and makes use of many Azure services, then Service Fabric is the obvious fit. 

Abstract away the complexity

Service Fabric Mesh is not yet production-ready, but is interesting as an attempt to abstract away the complexity of managing high-scale microservice-based applications, while retaining their benefits. Note, though, that the Kubernetes community is also engaged in efforts to reduce the burden of management and configuration and get closer to a “just run my code” ideal.  
CNCF executive director Dan Kohn told Computer Weekly: “One of the consistent themes of the leaders in the community is that the current way of deploying an application on Kubernetes – that you build this container and then you write some YAML – is not the correct abstraction layer most developers should be dealing with. But there is no consensus on what the better one is.” 

This is Service Fabric Mesh territory, and shows that while Kubernetes will remain the industry standard, there can still be good reasons to use other approaches. 
