The ephemeral composable stack - Hadean: Alignment to a common abstraction

This is a guest post for the Computer Weekly Developer Network written by Aidan Hobson Sayers, technical director at Hadean.

Hadean was founded in 2015 with a mission to democratise supercomputing resources and empower developers, data scientists and decision-makers to solve the world’s most critical issues – issues that the company says cannot be solved using a 40-year-old technology stack.

Hobson Sayers writes as follows…

Composability is a cornerstone that all software is built on – the ability to take software libraries and compile them into your applications is probably the most widely understood example. But when it comes to infrastructure, the benefits of composability have only relatively recently started to become accessible.

Alignment to a common abstraction

Composability implies alignment on a common abstraction that everybody builds on top of.

Historically, the difficulty with composable IT has been this lack of alignment – at one point the abstraction might have been ‘containers’ (and before that, ‘virtual machines’), but there was a pressing need for a higher level of abstraction to permit application and service standardisation across large IT estates. Out of this need arose a number of orchestration and management tools, with Kubernetes ultimately emerging as the number one player.

Rallying around Kubernetes has notable advantages in the simplicity and composability it brings to its ‘happy path’ of application creation, specifically running microservices within a datacenter.

Standard mechanisms for service discovery, resilience and deployment all pay huge dividends (both at large enterprise scale and, more generally, in the availability of tech skills). The Cloud Native Computing Foundation (CNCF) landscape [https://landscape.cncf.io/] shows a breathtaking ecosystem of tools that, while not entirely oriented around Kubernetes, will often have it front of mind as the deployment target of choice.
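To make that happy path concrete, here is a minimal sketch using the official Kubernetes Python client to declare a small, replicated web service. The names (‘web’, ‘nginx’) and the assumption of a kubeconfig pointing at a test cluster are illustrative only, but the pattern – declare desired state, let the cluster reconcile it – is exactly the standardised, composable mechanism described above.

from kubernetes import client, config

# Illustrative only: assumes a kubeconfig pointing at a test cluster.
config.load_kube_config()

container = client.V1Container(
    name="web",
    image="nginx:1.25",
    ports=[client.V1ContainerPort(container_port=80)],
)
template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "web"}),
    spec=client.V1PodSpec(containers=[container]),
)
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=template,
    ),
)
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1ServiceSpec(
        selector={"app": "web"},
        ports=[client.V1ServicePort(port=80)],
    ),
)

# Desired state is declared; the cluster handles scheduling, restarts and
# service discovery – the 'standard mechanisms' that compose so well.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)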

The ephemerality of multi-region clusters

But there are unanswered questions around the edges of ‘out-of-the-box’ Kubernetes on topics like ‘ephemerality of clusters’ and ‘multi-region deployment’.

Cloud provider-hosted Kubernetes offerings can offer some answers, but they come with their own trade-offs: lock-in to a vendor-flavoured Kubernetes and a bias towards keeping workloads within that provider’s infrastructure, rather than permitting an easy shift between different cloud providers or to on-premise datacenters.

Hobson Sayers (foreground): Align to the cloud’s core common abstraction.

Alternatively, the CNCF landscape references a few efforts around these (and similar) questions, but these are often nascent, competing solutions, and it is unclear whether they adhere to the composability promise that provides much of the draw of Kubernetes in the first place – whether an ephemeral cluster management solution plays well with an edge-aware deployment solution may only be discoverable through experimentation.

These hazy areas off the happy path of Kubernetes go from a curiosity to a pressing problem when building more powerful applications that lean into the capabilities of disaggregated technology infrastructures and need an understanding of the resources on which they sit. This is the kind of application that pushes compute closer to the user, adaptively provisions GPUs, or bursts on-premise compute tasks to the cheapest cloud provider. Indeed, applications like this might reasonably be said to be more ‘cloud native’ than existing types of workload, embracing the dynamic nature of infrastructure and adapting their own behaviour based on a holistic view of patterns of use and the availability of resources.
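As a purely hypothetical illustration of the kind of decision such an application makes for itself, the sketch below picks the cheapest place to burst a GPU workload. The provider names and prices are invented placeholders, not real quotes, and this is not Hadean’s implementation – it only shows the shape of a resource-aware choice that sits awkwardly inside a single, static cluster.

from dataclasses import dataclass

@dataclass
class CapacityQuote:
    provider: str           # hypothetical provider label
    gpu_hourly_cost: float  # price per GPU-hour, invented for illustration
    gpus_available: int

def pick_burst_target(quotes, gpus_needed):
    """Return the cheapest quote that can satisfy the GPU requirement, if any."""
    viable = [q for q in quotes if q.gpus_available >= gpus_needed]
    return min(viable, key=lambda q: q.gpu_hourly_cost, default=None)

quotes = [
    CapacityQuote("on-prem", 0.0, 8),       # already-owned local capacity
    CapacityQuote("provider-a", 2.40, 64),
    CapacityQuote("provider-b", 1.95, 16),
]

target = pick_burst_target(quotes, gpus_needed=32)
print(target.provider if target else "no capacity available")  # -> provider-a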

Artificial decomposition

The higher-level, simplifying abstractions of Kubernetes, a force multiplier for microservice-based applications, act as constraints that hobble these more capable applications. The upshot is either an artificial decomposition of the application (both its code and its infrastructure) or the addition of Kubernetes plugins to provide the primitives that give the application the control it needs.

A fresh look at this problem space suggests a foundation with a more general, lower-level abstraction on top of the disaggregated compute infrastructure that gives applications the power to manage their own resources, adapting to workloads and environments dynamically. This doesn’t replace the need for orchestration tools like Kubernetes that attack a specific problem space, but instead provides a standardised way for applications to ‘drop down’ a layer of abstraction rather than performing unnatural workarounds to bypass some underlying architectural assumptions that don’t hold in their case.
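What that ‘drop down’ might feel like to an application is sketched below. None of these names correspond to Hadean’s API or to any real library – it is an assumed, illustrative shape for a lower-level resource abstraction that lets an application make its own placement decisions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Machine:
    region: str
    cores: int

class ResourceLayer:
    """Toy in-memory stand-in for a lower-level resource abstraction."""
    def __init__(self, machines):
        self._free = list(machines)

    def available(self):
        return list(self._free)

    def acquire(self, machine):
        self._free.remove(machine)
        return machine

def place_near_user(layer, user_region):
    # The application makes its own placement decision from a live view of
    # resources, rather than delegating placement entirely to an orchestrator.
    nearby = [m for m in layer.available() if m.region == user_region]
    pool = nearby or layer.available()
    return layer.acquire(pool[0]) if pool else None

layer = ResourceLayer([Machine("us-east", 32), Machine("eu-west", 16)])
print(place_near_user(layer, "eu-west"))  # -> Machine(region='eu-west', cores=16)

The point is not the toy logic but the interface: a standard, low-level view of resources that composes with higher-level orchestration rather than fighting it.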

Given the prevalence of Kubernetes, these concerns may seem academic – aren’t people coping OK today? But with growing data scales for AI and other analysis, increased global connectivity of users and devices, and the rise of ephemeral IT resources, agility in building more powerful applications will become a competitive advantage.
