How to implement a multicloud storage strategy

We look at how to build a multicloud storage strategy and benefits such as performance, availability and features, as well as potential limitations such as data mobility

Not all clouds are built the same. On-premise clouds built from traditional virtualisation software are very different from public clouds in terms of availability, features and operation.

Hybrid cloud and multicloud strategies allow IT departments to make use of these different cloud offerings, but getting data into the right place at the right time is a challenge.

Here, we look at storage in the cloud and how to implement a multicloud strategy.

Defining hybrid and multicloud

Typically, hybrid cloud is used to describe implementations where data and applications are extended from on-premise infrastructure into one or more public clouds, for example Amazon Web Services (AWS), Microsoft Azure or Google Cloud Platform (GCP).

Originally, this may have been done to “burst” workloads at times of peak demand, but increasingly it means spanning both locations indefinitely. Multicloud takes things a step further, running applications across multiple clouds as part of the design, which may include running nothing on-premise.

Multicloud cost benefits

Why is this of benefit?

The most obvious reason is cost. Suppliers compete on price continually, with falling list prices, discounts for committed use and cheaper spot pricing.

Although public cloud storage hasn’t come down much in price recently, savings can be made on virtual instances, especially where applications need to scale up and down quickly. Taking advantage of this means data must be made available quickly in multiple public cloud environments.

Multicloud features

The second, probably more important reason for running multicloud is features.

Amazon, for example, is renowned for its speed of innovation. In 2017, the company released 1,430 features to AWS – almost four per day.

Not all of its releases are major changes or products, but many are continual enhancements to existing offerings. Cloud suppliers are moving to offer more platform-as-a-service (PaaS) and software-as-a-service (SaaS) offerings, such as machine learning and artificial intelligence (AI), that simply need enterprise data made available to them.

We should also remember that cloud choice can be influenced by design and operational benefits. Running multicloud could provide protection against failure of a single supplier.

Ultimately, getting the greatest benefit from a multicloud strategy means balancing the advantages of flexibility with the challenges of implementing security, networking and – key to this discussion – data availability, latency and consistency.

Cloud storage solutions

There are at least four ways in which data can be stored in public cloud solutions: native services, supplier-integrated services, marketplace services and colocated services.

Native services are those implemented by the cloud supplier directly. Typically, all cloud providers will offer block, file, object and some application-specific (eg, database) storage services. Block storage is usually restricted to internal use to connect to virtual instances, whereas file, object and database storage can be exposed to external connectivity. This has been a problem for some cloud users, where object and database data has inadvertently been left open to public access by default, so it is important to validate security settings here.
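To make that concrete, here is a minimal sketch, using AWS’s boto3 SDK, of how such a check might look for S3: it inspects a bucket’s public access block configuration and applies a fully restrictive one if none exists. The bucket name is hypothetical and credentials are assumed to be configured already.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "example-enterprise-data"  # hypothetical bucket name

try:
    # Fetch the bucket's existing public access block configuration
    config = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
    if not all(config.values()):
        print(f"Warning: {bucket} is not fully locked down: {config}")
except ClientError as err:
    if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
        # No configuration at all: apply one that blocks all public access
        s3.put_public_access_block(
            Bucket=bucket,
            PublicAccessBlockConfiguration={
                "BlockPublicAcls": True,
                "IgnorePublicAcls": True,
                "BlockPublicPolicy": True,
                "RestrictPublicBuckets": True,
            },
        )
        print(f"Applied a full public access block to {bucket}")
    else:
        raise
```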

Supplier-integrated services have started to emerge, with NetApp being the most prominent player in the market. Microsoft Azure NetApp Files, for example, provides the benefits of NetApp OnTap, integrated directly into Azure using Azure application programming interfaces (APIs) and security capabilities. Supplier-integrated solutions are usually more feature-rich and higher performing than native services.

Marketplace services are storage offerings that can be deployed in virtual instances from cloud application marketplaces. There is a huge number of solutions in this space, from traditional storage platforms to data protection and analytics. The availability of large cloud compute instances and direct-connect NVMe devices means these solutions are practical for high-performance production use cases. Marketplace offerings are great at providing a consistent look and feel across on-premise and cloud implementations.

Colocated services are deployed by storage suppliers in a datacentre close to (or sometimes the same as) the cloud provider’s, connected using high-speed networking such as AWS Direct Connect or Azure ExpressRoute. The storage supplier provides the capability to provision storage that looks and feels like a traditional storage solution and can connect to the customer’s own on-premise storage solutions.

With such a range of storage offerings, it seems like there’s a confusing array of choices to be made. So why move away from the native offerings put forward by the cloud providers?

Performance: A big reason to use a third-party solution is performance. Cloud suppliers don’t provide performance or throughput guarantees on native services, other than for block storage. Even here, the assurances cover only throughput and IOPS, and generally within very limited or rigid specifications. Azure NetApp Files, for example, offers much higher performance than native file services, and colocated services can deliver features such as quality of service at a granular level.
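As an illustration of those rigid specifications, the boto3 sketch below provisions an AWS EBS gp3 volume with explicit IOPS and throughput figures. The region, availability zone and numbers are assumptions for the example; requests outside AWS’s published ranges for the volume type are simply rejected.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # illustrative region

# gp3 volumes allow IOPS and throughput to be set independently of size,
# but only within fixed service limits - the rigid specifications
# referred to above
volume = ec2.create_volume(
    AvailabilityZone="eu-west-1a",  # illustrative availability zone
    Size=500,                       # GiB
    VolumeType="gp3",
    Iops=6000,                      # must sit within the allowed gp3 range
    Throughput=500,                 # MiB/s, also capped by the service
)
print("Created volume:", volume["VolumeId"])
```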

Availability: Cloud providers offer fixed levels of availability, typically around three to four “nines” (99.9% to 99.99%) of uptime. Using colocated services could provide higher availability and enable applications to be moved quickly between clouds if an outage occurs, because the data is stored outside the cloud provider’s equipment.
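It helps to be concrete about what those “nines” mean. The short calculation below converts an availability figure into permitted downtime per year:

```python
# Convert an availability SLA (a "number of nines") into allowed downtime
MINUTES_PER_YEAR = 365 * 24 * 60

for availability in (0.999, 0.9999):
    downtime = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.2%} uptime allows about {downtime:.0f} minutes "
          f"({downtime / 60:.1f} hours) of downtime per year")
```

Three nines therefore allows nearly nine hours of downtime a year; four nines, under an hour.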

Features: Although cloud providers are continually developing and enhancing their services, most don’t have the breadth of features offered by storage suppliers that have been in the market for many years. Enhanced features may include better security integration and data protection (snapshots and replication).

Building a multicloud strategy

Getting back to the original subject of this article, how can these services be used to build a multicloud storage strategy?

Probably the first question to ask is how often data and applications are likely to move between clouds.

If public cloud is being used, for example, for data protection, then applications are unlikely to move around much and so data mobility isn’t a big factor.

However, at the opposite end of the scale, where applications could run in any cloud at any time, full flexibility is needed.

Data mobility

Implementing data mobility is probably the biggest challenge in deploying multicloud applications.

Data has inertia and takes time to move around. Cloud suppliers charge for egress – data accessed outside of their cloud – so mass migration of data from one cloud platform to another isn’t really a practical solution. As a result, the strategies for multicloud tend to fall into one of three categories: burst on-demand, data replication or abstraction of data.

Burst “on-demand”: Here, applications and data are migrated to the public cloud when required. The actual implementation of this scenario could be through permanent virtual machine replication, for example, as achieved via Datrium, Velostrata or Zerto. Alternatively, data can be replicated via storage, as achieved with NetApp Private Storage or HPE Cloud Volumes.

Replicate data: Data can be replicated across clouds, keeping copies closely in synchronisation. File-based solutions such as Elastifile CloudConnect or Qumulo QF2 provide the capability to span on-premise or multiple cloud environments, making it easy to expose on-premise data to, for example, analytics services in the public cloud.

Abstract the data: These solutions separate the presentation of data from the underlying physical storage platform. This provides a single global view of the data, while the physical storage can be across one or many public and private clouds. Examples of this solution are Zenko from Scality or Ctera Enterprise File Services Platform. With full abstraction, data can be physically redirected or replicated between cloud providers to meet the needs of availability, cost and performance.
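None of these products shares a public API, so the sketch below shows only the underlying idea, with hypothetical class names: a single namespace routes each key to one of several pluggable backends according to a placement policy, with in-memory stores standing in for real clouds to keep the example self-contained.

```python
from abc import ABC, abstractmethod

class ObjectBackend(ABC):
    """One physical store: an S3 bucket, an Azure container, an on-premise array."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryBackend(ObjectBackend):
    """Stand-in for a real cloud backend, to keep the sketch self-contained."""
    def __init__(self) -> None:
        self._store: dict[str, bytes] = {}
    def put(self, key: str, data: bytes) -> None:
        self._store[key] = data
    def get(self, key: str) -> bytes:
        return self._store[key]

class MulticloudNamespace:
    """Single global view of data: a policy decides which cloud holds each key."""
    def __init__(self, backends: dict[str, ObjectBackend], policy):
        self.backends = backends
        self.policy = policy                 # maps a key to a backend name
        self.locations: dict[str, str] = {}  # metadata: where each key lives
    def put(self, key: str, data: bytes) -> None:
        target = self.policy(key)
        self.backends[target].put(key, data)
        self.locations[key] = target
    def get(self, key: str) -> bytes:
        return self.backends[self.locations[key]].get(key)

# Example policy: keep EU-tagged keys in one cloud, everything else in another
ns = MulticloudNamespace(
    backends={"aws": InMemoryBackend(), "azure": InMemoryBackend()},
    policy=lambda key: "azure" if key.startswith("eu/") else "aws",
)
ns.put("eu/customers.csv", b"...")
print(ns.get("eu/customers.csv"))
```

Redirecting or replicating data between providers then becomes a change to the policy and the location metadata, rather than to the applications reading the data.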

The key to making these solutions work is features such as automation – being able to drive storage services from command-line interfaces or APIs. Solutions also need to understand incremental changes in data and be able to move only the changed data between locations, as this reduces the impact of egress charges.
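A minimal sketch of that incremental approach follows, with plain dictionaries standing in for two clouds’ object stores. Only objects that are new or changed cross the boundary, which is what keeps egress charges down; real platforms would detect changes via ETags or modification times rather than re-reading and hashing remote content, which would itself incur egress.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Content hash used to decide whether an object has changed."""
    return hashlib.sha256(data).hexdigest()

def incremental_sync(source: dict[str, bytes], dest: dict[str, bytes]) -> int:
    """Copy only new or changed objects; return the number of bytes moved."""
    transferred = 0
    for key, data in source.items():
        if key not in dest or fingerprint(dest[key]) != fingerprint(data):
            dest[key] = data  # the only data that crosses between clouds
            transferred += len(data)
    return transferred

src = {"a.log": b"unchanged", "b.log": b"new contents"}
dst = {"a.log": b"unchanged", "b.log": b"old contents"}
moved = incremental_sync(src, dst)
print(f"Transferred {moved} of {sum(len(v) for v in src.values())} source bytes")
```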

DIY

If there’s anything to learn from existing solutions in the market, it’s that they are very much design-and-build-it-yourself in nature. Public cloud providers don’t expose block storage outside of their environments and don’t offer native interfaces for replicating file and block data, and at this point there’s no reason to expect that position to change.

IT organisations thinking of implementing a multicloud strategy therefore need to look long and hard at the solutions they will use, as the risk of lock-in could be greater in multicloud storage than it ever was in the private datacentre.
