Meanwhile, hyper-converged infrastructure (HCI) has begun to transform the datacentre, bringing compute and storage together in the same node, usually connected to others in a scale-out cluster.
Now, numerous suppliers offer the option to deploy OpenStack on their hyper-converged products. In this article, we look at options to deploy and work with OpenStack on hyper-converged infrastructure in the datacentre.
IT organisations that implement OpenStack can choose to use its “native” storage solutions or replace them with supplier-specific solutions that adhere to published application programming interfaces (APIs).
OpenStack also has modules for compute, networking and other key infrastructure components such as domain name system (DNS), messaging and monitoring.
New releases are made available approximately every six months, with date-numbered releases. Many suppliers also package OpenStack into their own modified or enhanced distributions.
The hyper-converged market has matured over the last five years, with a range of hardware and software offerings.
Where OpenStack is targeted at large organisations with a wealth of developer and infrastructure knowledge, HCI has followed a different approach, initially focused on replacing silos of IT teams and being useable by IT generalists.
As hyper-converged has developed, supplier solutions have expanded to support larger workloads and a range of hypervisor options.
Typically, hyper-converged doesn’t look to offer all components of IT infrastructure in the way OpenStack does, but instead brings together compute, storage and networking with server virtualisation. HCI has generally been based on VMware vCenter and the ESXi hypervisor, although support exists (as we will discuss) for other options, including the kernel-based virtual machine (KVM).
OpenStack vs hyper-converged
Comparing the two, OpenStack and hyper-converged have come from opposite ends of the infrastructure market and look to solve different problems.
HCI has evolved from enterprise computing, collapsing traditional server/virtualisation and storage into one platform that offers a distributed storage layer across a cluster of server nodes.
Resiliency is implemented through the hypervisor and storage resources on each node.
OpenStack was designed to work with cloud-native applications, where resiliency is pushed more to the application in the case of failure, rather than relying on resilient hardware.
Where hyper-converged fits the enterprise, OpenStack suits the needs of developers, with APIs to enable the rapid deployment of virtual machines (instances) and other resources.
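That API-driven workflow can be made concrete with OpenStack’s Compute (Nova) API, where launching an instance is a single HTTP POST to `/v2.1/servers`. The sketch below only builds the JSON request body; the image, flavour and network IDs are hypothetical placeholders that, in a real cloud, would come from Glance and Neutron.

```python
import json

def build_server_request(name, image_ref, flavor_ref, network_uuid):
    """Build the JSON body for a Nova 'create server' call
    (POST /v2.1/servers in the OpenStack Compute API)."""
    return {
        "server": {
            "name": name,
            "imageRef": image_ref,                  # Glance image UUID
            "flavorRef": flavor_ref,                # flavour (instance size) ID
            "networks": [{"uuid": network_uuid}],   # Neutron network to attach
        }
    }

# Hypothetical IDs -- purely illustrative values.
body = build_server_request(
    "dev-vm-01",
    "70a599e0-31e7-49b7-b260-868f441e862b",
    "1",
    "ff608d40-75e9-48cb-b745-77bb55b5eaf2",
)
print(json.dumps(body, indent=2))
```

In practice a developer would rarely hand-build this payload; client libraries and tools such as the openstack CLI wrap it, but the point stands that everything a developer needs to stand up an instance is reachable through a documented API.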
OpenStack with hyper-converged
At first, it may seem the two platforms have little in common other than implementing private cloud. However, bringing OpenStack together with hyper-converged infrastructure offers the resilience of the enterprise combined with the flexibility of DevOps.
Enterprises can choose to leverage existing knowledge and assets around private cloud infrastructure like VMware, Hyper-V and KVM, while exposing the functionality offered by OpenStack.
Typically, as we see from the suppliers in our analysis, OpenStack is deployed on top of hyper-converged infrastructure and provides common APIs based on the OpenStack platform.
This means the underlying infrastructure can be abstracted from the developer, and optionally replaced or extended over time without affecting the deployment of applications.
Some suppliers have separated hardware components that deliver virtual machines (or instances) from management functionality, in some cases implementing the latter in the public cloud as a software service.
What’s clear from this roundup is that OpenStack support is an added feature rather than a clear direction for the market.
Suppliers seem to be keeping their options open by enhancing their solutions with OpenStack support. As we see the continued rise of Kubernetes for containers, the management landscape will become even more complex than it is today.
VMware has offered vSphere integration with OpenStack since 2013. vSphere Integrated OpenStack (VIO) is a VMware-supported implementation of OpenStack on top of a vSphere deployment. Currently in release 4.0, VIO exposes APIs and interfaces that provide the OpenStack look and feel but run on vSphere and associated hardware.
The aim is to provide the resiliency and reliability of vSphere while exposing the agility of the OpenStack environment to developers. As expected, services in vSphere map to OpenStack services with similar functionality: vCenter/ESXi maps to Nova (instance management), vVols and vSAN to Cinder (storage), and NSX to Neutron (networking).
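The service mapping described above can be captured as a simple lookup table. This is a minimal sketch for illustration only, restating the pairings from the text:

```python
# Which vSphere component backs each OpenStack service in VIO,
# per the mapping described above.
VIO_SERVICE_MAP = {
    "Nova (instance management)": "vCenter/ESXi",
    "Cinder (block storage)": "vVols / vSAN",
    "Neutron (networking)": "NSX",
}

for openstack_service, vsphere_backend in VIO_SERVICE_MAP.items():
    print(f"{openstack_service:28} -> {vsphere_backend}")
```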
Simplivity (now part of HPE) announced OpenStack support in 2015 through the KVM hypervisor. The OmniCube platform is heavily integrated into vSphere and is essentially a distributed storage layer (the Data Virtualization Platform, or DVP) built on virtual machines across multiple physical nodes.
Since the HPE acquisition, it’s not clear whether Simplivity as a platform continues to support KVM and OpenStack.
Nutanix has supported OpenStack since the Kilo release, on AOS (Acropolis Operating System) 4.6 and later running the AHV hypervisor.
Integration is achieved through a set of Acropolis OpenStack drivers that translate requests from an OpenStack controller into Acropolis REST API calls. Nutanix AHV can also integrate with the Platform9 managed OpenStack solution to provide simplified software-as-a-service (SaaS) management of on-premise resources.
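In outline, a driver of this kind receives a standard OpenStack request and re-issues it against the supplier’s own REST API. The sketch below illustrates only the general translation pattern; the endpoint path and payload field names are invented for illustration and are not Nutanix’s actual Acropolis API.

```python
def translate_boot_request(openstack_server_body):
    """Translate an OpenStack 'create server' body into a call against
    a hypothetical vendor REST API (pattern illustration only)."""
    server = openstack_server_body["server"]
    return {
        "method": "POST",
        "path": "/api/v1/vms",   # hypothetical vendor endpoint
        "payload": {
            # Map OpenStack fields onto the vendor's (invented) schema.
            "vm_name": server["name"],
            "image_id": server["imageRef"],
            "size_id": server["flavorRef"],
        },
    }

call = translate_boot_request(
    {"server": {"name": "dev-vm-01", "imageRef": "img-1", "flavorRef": "2"}}
)
print(call["method"], call["path"])
```

The value of this pattern is that the OpenStack controller never needs to know which platform sits underneath; only the driver changes when the backend does.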
Red Hat provides support for OpenStack through the Red Hat OpenStack Platform. This integrates Red Hat Enterprise Linux, Ceph (with a 64TB licence) and the Red Hat OpenStack Platform Director to automate many of the functions expected of enterprise solutions, such as hardened deployments, patching and bug fixing, and support beyond the standard six-month development cycle of the open source platform.
Stratoscale has developed a private cloud solution that offers all the features of hyper-converged, while supporting OpenStack APIs. Stratoscale Symphony can be deployed on as few as three servers, while providing standard private cloud building blocks such as compute, block and object storage, networking and application catalogues.
ZeroStack has taken an interesting approach to delivering hyper-converged infrastructure, pairing on-premise hardware that supports OpenStack with management in the public cloud. On-premise servers run ZeroStack’s Z-COS operating system, with the Z-Brain SaaS management layer delivering user management functions and administration tasks such as monitoring and maintenance.
Breqwatr offers an OpenStack-enabled hyper-converged appliance that can be easily deployed on-premise. The solution includes features such as a dedicated Cluster Manager for monitoring the health and status of nodes in a cluster. Block storage is provided either through native Cinder support or through integration with external suppliers such as Pure Storage. Object storage is provided through Ceph.
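A storage choice like this surfaces in Cinder’s backend configuration. The fragment below is a sketch of a `cinder.conf` with a Ceph RBD backend enabled; the section layout and option names follow the standard Cinder multi-backend convention, while the pool, user and path values are placeholders for illustration.

```ini
[DEFAULT]
# Comma-separated list of enabled backend sections.
enabled_backends = ceph

[ceph]
# Standard Cinder RBD (Ceph) driver; values below are placeholders.
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
```

Swapping in a different supplier’s array, such as Pure Storage, would mean adding another backend section pointing at that supplier’s Cinder driver, without changing how developers consume block storage through the Cinder API.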
Maxta originally offered the MxSP platform for building hyper-converged solutions on VMware vSphere. The company now offers support for OpenStack environments, either using dedicated appliances or software-only solutions. Integration is delivered through support for OpenStack Cinder and Nova drivers. The company also offers solutions integrated with Mirantis OpenStack.
The Mirantis Cloud Platform is a private cloud solution that includes OpenStack support for virtual machines and bare-metal environments. Storage support is provided through Ceph for both block and object. The interesting evolution of the Mirantis Cloud Platform is the move to support container environments running Kubernetes, which sees the benefits of hyper-converged solutions applied natively to container workloads.