It’s been almost eight years since the first release of OpenStack and the promise of a platform that would deliver a new era in open source private cloud.
In that time, we’ve seen OpenStack’s star rise and fall somewhat, with the project entering what can only be called a more mature phase.
Since we last looked at the storage components around three years ago, what changes and evolution have there been in the way OpenStack consumes storage?
And how does this fit into the wider storage ecosystem as the project becomes more mainstream?
The initial OpenStack platform was a joint collaboration between Rackspace and NASA that produced an open source ecosystem for running private clouds.
OpenStack is developed and distributed under the Apache software licence, which allows for free distribution and the ability to modify the original code.
The platform is made up of a number of projects, each of which manages a typical part of private cloud infrastructure. The first to be developed were focused on virtual instances, networking and object storage. Each project is assigned a name – Nova is the orchestration engine for virtual machines, while Neutron implements networking.
The three main storage projects are:
- Cinder, which provides block storage
- Swift, which provides object storage
- Manila, which provides file storage
It’s probably accurate to describe these three projects as core storage infrastructure. There are also other projects that are application data-focused, including Trove (Database as a Service) and Sahara for big data processing. Some of the storage projects integrate with other OpenStack services, such as Keystone, which implements identity management.
OpenStack produces new software releases on a roughly six-month cycle, with each release given a code name that progresses through the alphabet. The current and 18th version, Rocky, was released at the end of August 2018. To add further complication, individual projects can issue multiple releases within a major platform release. Swift uses this approach, for example.
As we discuss some of the updates to storage projects, we will highlight some of the more notable updates and enhancements. Due to the number of updates, it’s not practical to list them all in this article and we recommend reading the online release notes for each version of OpenStack as it is released.
Block storage support
The Cinder project provides persistent block storage that can be attached to virtual instances (virtual machines) running applications. This comes either as native support via LVM (Logical Volume Manager) on OpenStack server infrastructure, or as plugins that support external storage platforms from traditional vendors.
As with all OpenStack components, provisioning of storage is driven by API with vendor-based plugins providing a translation layer that maps Cinder volumes to storage volumes on the vendor platform.
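The translation-layer idea can be sketched in a few lines of Python. This is an illustrative model only: the class and method names below (`BlockDriver`, `FakeArrayDriver`, `create_volume`) are invented for the example and are not the actual Cinder driver interface, which is considerably larger.

```python
from abc import ABC, abstractmethod

class BlockDriver(ABC):
    """Illustrative driver contract: Cinder-style volume operations."""
    @abstractmethod
    def create_volume(self, name: str, size_gb: int) -> dict: ...

    @abstractmethod
    def delete_volume(self, name: str) -> None: ...

class FakeArrayDriver(BlockDriver):
    """Hypothetical vendor plugin that maps generic volume requests
    onto vendor-specific LUNs on an external array."""
    def __init__(self):
        self._luns = {}

    def create_volume(self, name, size_gb):
        # Translate the generic request into a vendor-specific "LUN".
        lun = {"lun_id": f"lun-{len(self._luns)}", "size_gb": size_gb}
        self._luns[name] = lun
        return lun

    def delete_volume(self, name):
        self._luns.pop(name, None)

driver = FakeArrayDriver()
vol = driver.create_volume("db-data", 10)
print(vol["size_gb"])  # 10
```

The point of the pattern is that the orchestration layer only ever calls the generic interface; each vendor supplies the mapping to its own platform.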
Cinder was originally introduced into the Folsom release of OpenStack (September 2012) and vendors have continued to add driver support over time.
However, the degree of support varies widely. The ability to attach a snapshot as a new virtual machine, for example, is not supported by many vendors.
Even within the same company, feature support isn't consistent. Dell EMC PowerMax, as an example, supports many more features than Dell EMC's PS series. This may reflect the priorities of individual developer groups within the company and the long-term support for specific storage platforms. Of course, not every storage vendor will be able to deliver features that don't exist in their products. Quality of service, for example, is only implemented by a small subset of storage vendors.
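In practice, this variation means operators end up consulting a capability matrix before choosing a backend. A toy version of such a check might look like the following; the feature names and the per-backend capability sets here are invented for illustration and should not be read as real vendor data.

```python
# Hypothetical capability matrix: which optional block-storage features
# each backend driver supports (illustrative values, not vendor data).
CAPABILITIES = {
    "powermax": {"snapshot_attach", "qos", "replication"},
    "ps_series": {"snapshot_attach"},
}

def backends_supporting(*features):
    """Return backends whose capability set covers all required features."""
    required = set(features)
    return sorted(b for b, caps in CAPABILITIES.items() if required <= caps)

print(backends_supporting("qos"))  # ['powermax']
```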
It’s fair to say that over recent OpenStack releases, features introduced into Cinder have been more incremental than revolutionary. Many close gaps that are generally available in other ecosystems or introduce new drivers for new product platforms.
Object storage support
Object support is provided by Swift, a scale-out object storage platform that provides high durability and availability for object-type data.
Compared to file systems, object stores generally provide higher scalability and distributed data access, while sacrificing the benefits of POSIX compliance. For large volumes of generally static or read-only content, object stores can be a much more cost-effective solution than scale-out file systems.
Where Cinder is mostly used as an interface for platforms from storage vendors, Swift is an object store in its own right. Companies like SwiftStack build on top of OpenStack Swift to create more mature solutions.
However, storage vendors whose platforms offer native object support can simply be dropped in to replace the native Swift implementation. This is because object storage platforms are natively driven by APIs that follow either the Swift or S3 standard.
Swift is updated more frequently than Cinder or Manila, with multiple releases (usually every two months) within each major OpenStack release. The most recent features, introduced in version 2.18 with Rocky, include container sharding (an improved metadata distribution mechanism) and S3 API compatibility. There have also been internal optimisations to cater for the growth in individual storage drive capacity and server storage density.
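Swift's durability comes from how it places objects: object paths are hashed onto partitions of a consistent-hash "ring", and each partition is stored on several devices. The sketch below is a heavily simplified model of that idea; real Swift rings are prebuilt artefacts that balance partitions by device weight, zone and region, which this toy placement function ignores.

```python
import hashlib

PART_POWER = 4               # 2**4 = 16 partitions (tiny, for illustration)
DEVICES = ["d0", "d1", "d2", "d3"]
REPLICAS = 3

def partition(obj_path: str) -> int:
    # Swift hashes the account/container/object path with MD5 and uses
    # the top bits of the digest as the partition number.
    digest = hashlib.md5(obj_path.encode()).hexdigest()
    return int(digest, 16) >> (128 - PART_POWER)

def replicas(part: int) -> list:
    # Simplified placement: real rings spread replicas across failure
    # domains rather than just stepping through the device list.
    return [DEVICES[(part + i) % len(DEVICES)] for i in range(REPLICAS)]

part = partition("/AUTH_test/photos/cat.jpg")
print(part, replicas(part))
```

Because placement is computed from the hash, any proxy node can locate an object without consulting a central catalogue, which is what lets Swift scale out.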
File storage support
Support for file access is delivered through the Manila project.
Originally, file services were intended to be delivered as part of Cinder, but in 2013 Manila was forked as a separate project. The use case for files compared to block support is easy to see and exists in public cloud as well as other platforms such as VMware. Typically, block devices are not shareable between virtual machines or instances. File services provide that capability as well as adding other enhanced features like security and better scalability.
Manila aligns to the design of Cinder in that vendors write plugins that provide the automation to provision and map file systems to virtual instances. However, whereas Cinder provides a matrix of features that are supported by each driver, Manila drivers are more bespoke, with specific options for each of the file system vendors supported.
Initially, Manila supported standard file protocols, including NFS and CIFS. Over time, this has expanded to include more specialised distributed file systems such as Gluster, Ceph, the Hadoop Distributed File System and MapR.
With the Ocata release of OpenStack, a lot more work was done on snapshot features. This included the ability to use mountable (temporary) snapshots for recovering data, the ability to revert file shares back to a previous snapshot and to migrate snapshots from one platform to another. The migration API was extended to a two-phase process, enabling data to be accessed while migrations take place.
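The value of the two-phase migration API is that the long-running data copy is separated from a short final cutover, so the share stays accessible for most of the process. The toy state machine below models that flow; the class and method names are illustrative, not Manila's actual API.

```python
class ShareMigration:
    """Toy model of a two-phase file share migration."""
    def __init__(self, share: str):
        self.share = share
        self.phase = "initial"
        self.readable = False

    def start(self):
        # Phase 1: copy data to the destination in the background.
        # The share remains accessible to clients during this phase.
        self.phase = "data_copying"
        self.readable = True

    def complete(self):
        # Phase 2: a brief cutover switches clients to the destination.
        assert self.phase == "data_copying", "must copy before cutover"
        self.phase = "migrated"

m = ShareMigration("share-1")
m.start()
print(m.readable)   # True: share stays accessible during the copy
m.complete()
print(m.phase)      # migrated
```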
As OpenStack runs as a set of independent projects, feature implementations are not always consistent. As we’ve mentioned, the ecosystem of drivers for Cinder is much greater than that for Manila and better documented.
One major issue that stands out with the evolution of all OpenStack storage projects is the need to upgrade the software in order to bring in new vendor drivers or driver features. This introduces a significant overhead in the process of maintaining support for external storage platforms.
The container ecosystem's answer to this problem has been the Container Storage Interface (CSI), which abstracts the driver specification from the driver itself. Individual vendor drivers can then be installed, replaced or upgraded without updating the core container software. This kind of design is badly needed in the OpenStack storage projects.
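The decoupling CSI achieves can be sketched as a plugin registry: the core code knows only a stable interface, and drivers register against it at runtime. Everything below (the registry, `StorageInterface`, the `vendor-x` plugin) is a hypothetical illustration of the pattern, not CSI's actual specification.

```python
# Minimal sketch of out-of-tree driver registration: the "core" knows
# only the interface, and drivers plug in without core code changes.
REGISTRY = {}

def register(name):
    """Decorator that records a driver class under a stable name."""
    def wrap(cls):
        REGISTRY[name] = cls
        return cls
    return wrap

class StorageInterface:
    def provision(self, size_gb: int) -> str:
        raise NotImplementedError

@register("vendor-x")            # hypothetical vendor plugin
class VendorX(StorageInterface):
    def provision(self, size_gb):
        return f"vendor-x volume of {size_gb}GB"

# Core code selects a driver by name; upgrading or replacing VendorX
# requires no change to the core or to any other driver.
driver = REGISTRY["vendor-x"]()
print(driver.provision(5))
```

Under this design, shipping a new vendor driver is a packaging exercise rather than a core software upgrade, which is exactly the overhead the OpenStack projects currently carry.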
With much of the major work of integrating storage now complete, future development represents a transition towards adding features and supporting new platforms.
This will mean the OpenStack project focusing more on operational data services than on infrastructure. Newer projects such as Freezer (backup and disaster recovery) are where future developments will take place.