Containerisation in the enterprise - Hammerspace: Portable potential in storageless data

As businesses continue to modernise their server estate and move towards cloud-native architectures, the elephant in the room is the monolithic core business application that cannot easily be rehosted without significant risk and disruption. 

These days, it is more efficient to deploy an application in a container than in a virtual machine. Computer Weekly examines the trends, dynamics and challenges faced by organisations now migrating to the micro-engineered world of software containerisation.

As all good software architects know, a container is a ‘logical’ computing environment in which a guest application runs abstracted away from the underlying host system’s hardware and software infrastructure resources.

So, what do enterprises need to think about when it comes to architecting, developing, deploying and maintaining software containers?

This post is written by Douglas Fallstrom, VP of product and operations at Hammerspace, a company whose ‘storageless data’ approach aims to deliver on the promise of Kubernetes by enabling workload portability across any environment.

Fallstrom suggests that workloads will never be truly cloud-native until we solve the problem of data portability and writes as follows…

Chugging wheel reinvention

As enterprises chug along their path to become more cloud-native by adopting containers and moving workloads into the cloud, they find themselves embedding a lot of data management functionality into their applications.

This is the wheel being reinvented repeatedly, often by developers who are not experts in enterprise storage infrastructure. 

This is not the case for compute and networking, which have been standardised and largely virtualised. When it comes to data, the application has to be aware of the infrastructure and of where its data is located, which adds to the overall complexity of containerisation and means apps must be reconfigured whenever something changes. That reality is not compatible with the philosophy of cloud-native workloads that people expect as they containerise.

Just as compute has gone serverless to simplify orchestration, we need data to go storageless, so that applications can access their data without knowing anything about the infrastructure running underneath.
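As a rough illustration of that last point, here is a minimal sketch in Python, assuming an orchestrator-supplied mount point; the DATA_DIR environment variable, the paths and the function names are invented for this example rather than taken from Hammerspace. One version of the code is wired to a specific storage endpoint, while the other only knows a path handed to it at deploy time.

```python
import os
from pathlib import Path

# Infrastructure-aware style: the application embeds knowledge of a specific
# storage system, so moving it to another site or cloud means code changes.
NFS_EXPORT = "/mnt/nfs-server-01/projects"   # hypothetical hard-wired endpoint

def load_report_infra_aware(name: str) -> bytes:
    return (Path(NFS_EXPORT) / name).read_bytes()

# "Storageless" style: the application only knows a mount point handed to it
# by the orchestrator (for example a Kubernetes volume mount). Whether that
# path is backed by NFS, object storage or local flash is invisible to the app.
DATA_DIR = Path(os.environ.get("DATA_DIR", "/data"))

def load_report(name: str) -> bytes:
    return (DATA_DIR / name).read_bytes()

if __name__ == "__main__":
    # The same container image runs unchanged anywhere the orchestrator can
    # present the mount point; only the deployment, not the code, changes.
    if DATA_DIR.exists():
        print(sorted(p.name for p in DATA_DIR.iterdir()))
```

The second version can be scheduled against any site or cloud where the orchestrator can present the mount point, which is the portability that containerisation is supposed to deliver.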

Douglas Fallstrom, VP product & operations at Hammerspace.

What is storageless data?

When we talk about storageless data, what we are really saying is that data management should be self-service from any site or any cloud, with automation optimising how data is served and protected without anyone having to put a call into IT.

This is achievable when data is managed through its metadata: lightweight and descriptive, metadata can be replicated everywhere without effort, making application data portable.

This allows data to be orchestrated alongside compute, relieving applications of the job of data management and plugging the gap that holds us back from a cloud-native Nirvana.
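To make the idea concrete, here is a toy sketch of metadata-driven orchestration that assumes nothing about Hammerspace’s actual implementation; the GlobalNamespace and FileMetadata names, the sites and the file are all invented for illustration. The point is that the metadata record is globally visible the moment it is created, while the bulky payload only moves when a workload at another site asks for it.

```python
from dataclasses import dataclass, field

@dataclass
class FileMetadata:
    """Lightweight, descriptive record that is cheap to replicate to every site."""
    path: str
    size_bytes: int
    tags: set[str] = field(default_factory=set)
    locations: set[str] = field(default_factory=set)  # sites that hold the payload

class GlobalNamespace:
    """Toy model: all sites see all metadata; file payloads move only on demand."""

    def __init__(self, sites: list[str]) -> None:
        self.sites = sites
        self.catalog: dict[str, FileMetadata] = {}

    def create(self, path: str, size_bytes: int, site: str, tags: set[str] | None = None) -> None:
        # The metadata record is globally visible at once; the payload lives at one site.
        self.catalog[path] = FileMetadata(path, size_bytes, tags or set(), {site})

    def ensure_local(self, path: str, site: str) -> FileMetadata:
        # Only at this point is any bulk data copied, and only for this one file.
        meta = self.catalog[path]
        if site not in meta.locations:
            meta.locations.add(site)   # stand-in for the actual data movement
        return meta

if __name__ == "__main__":
    ns = GlobalNamespace(["on-prem", "cloud-a"])
    ns.create("/projects/model.bin", 4_000_000_000, site="on-prem", tags={"training"})
    # A container scheduled in cloud-a asks for the file; the data follows the workload.
    print(ns.ensure_local("/projects/model.bin", "cloud-a"))
```

In this model the application never orchestrates data itself; it simply asks for a path, and the namespace decides what, if anything, has to move.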

From the company’s product pages, we can see that it advocates a model in which all data is accessible active-active across all sites through a universal global namespace, virtualised and replicated at file-level granularity.

It further notes that managing data at file-level granularity is the only way to efficiently scale across complex mixed infrastructure without creating unnecessary copies of entire volumes of data.
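A back-of-the-envelope example of why that granularity matters, with entirely made-up numbers: if a job at a remote site needs only five files out of a thousand, file-level replication moves just those five, where a volume-level copy would move the full thousand.

```python
# Hypothetical volume: 1,000 files of 1 GiB each, of which a remote job needs five.
GIB = 1024 ** 3
volume = {f"/vol/data/file_{i:04d}": GIB for i in range(1000)}
needed = [f"/vol/data/file_{i:04d}" for i in range(5)]

volume_copy = sum(volume.values())            # replicate the entire volume
file_level = sum(volume[p] for p in needed)   # replicate only the files required

print(f"volume-level copy: {volume_copy / GIB:,.0f} GiB")
print(f"file-level copy:   {file_level / GIB:,.0f} GiB")
```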

Extensible metadata allows for user-defined metadata (keywords and tags) as well as pre-declared entries (labels and attributes) to be added to the file system. Users can view, filter and search metadata in-place while navigating the namespace. Rather than relying on filenames to identify data, user-defined metadata enables users to find the data they need rapidly, accurately and efficiently.
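As a hedged sketch of that workflow, invented for illustration rather than taken from the product, a metadata index lets users tag files with keywords and then query by those tags instead of guessing at filenames:

```python
from collections import defaultdict

# Toy in-memory metadata index: keywords/tags attached to files in the namespace.
index: dict[str, set[str]] = defaultdict(set)   # tag -> set of file paths

def tag(path: str, *keywords: str) -> None:
    """Attach user-defined keywords to a file."""
    for keyword in keywords:
        index[keyword].add(path)

def find(*keywords: str) -> set[str]:
    """Return the files carrying every requested keyword, wherever they live."""
    matches = [index[keyword] for keyword in keywords]
    return set.intersection(*matches) if matches else set()

tag("/scans/20240611_0423.dat", "project-aurora", "raw", "site-london")
tag("/scans/20240611_0424.dat", "project-aurora", "processed")

# No need to remember or parse filenames; query the metadata instead.
print(find("project-aurora", "raw"))
```

The query returns matching files wherever they sit in the namespace, which is the point of searching metadata in place rather than walking directory trees.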
