Hybrid cloud storage: What it is and how to deploy it

Hybrid cloud storage optimises the opportunities provided by the cloud while recognising and working with its limitations

Despite all the hype around cloud storage, it still faces hurdles when it comes to commercial uptake. Those hurdles include concerns over access speed and latency, as well as fears around security, compliance and data portability.

So, storage and cloud suppliers have come up with a potential solution. With hybrid cloud storage you do not have to use cloud-hosted storage for all – or indeed any – of your data. Instead, data can reside on-site, in a private cloud or in a public cloud – as appropriate for your performance needs, the economics involved, regulatory compliance and, of course, your risk assessment.

The most common enterprise use for cloud storage today is for off-site backup and archiving, as a relatively inexpensive way to help protect against technology and site failures. These applications are also less sensitive to latency and bandwidth limitations, especially if backup can be done from a snapshot.

However, while this can mean using a public cloud alongside a private cloud, it is about as hybrid as using disk for your primary data and tape for the backup. That is, the two are not integrated, but perform separate roles in the overall IT infrastructure.

Hybrid cloud storage more accurately means using on-premise storage and storage in the public cloud to create a greater overall value – as a kind of mash-up. You could have some data on one and some on the other, depending on its risk classification or its latency and bandwidth needs. Alternatively, you could federate a private storage cloud with a public cloud, using public cloud storage for archive, backup, disaster recovery, workflow sharing and distribution.
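That kind of placement decision can be sketched as a simple routing policy. The classifications and tier names below are invented purely for illustration, not taken from any product:

```python
# Hypothetical placement policy for hybrid cloud storage: route each data set
# to a storage tier based on its risk classification and latency needs.
# Classifications and tier names are invented for this sketch.
def place(dataset: dict) -> str:
    if dataset["classification"] in {"regulated", "confidential"}:
        return "on-premise"      # sensitive data stays under direct control
    if dataset["latency_sensitive"]:
        return "private-cloud"   # still close enough for low-latency access
    return "public-cloud"        # cheap, elastic tier for everything else
```

In practice such a policy would be embedded in the storage management layer, so that, for example, regulated records never leave the private estate while archive data flows automatically to the cheapest public tier.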

This hybrid approach can allow an organisation to take advantage of the scalability and cost-effectiveness of cloud storage without exposing mission-critical data. 

The challenge is to integrate and govern such a system, preferably without altering the existing on-premise infrastructure or the applications. That is especially true when you consider services must be provisioned from different sources, yet must act and interact as a single system. This, in turn, means you need common data and software management tools. 

Different suppliers try to solve this in different ways – for example, by presenting everything over iSCSI (Internet Small Computer System Interface), by integrating primary storage directly with the cloud, or by routing traffic through a cloud gateway of some kind.

One of the most popular routes is a hybrid cloud storage appliance, which has intelligence, software and local storage built into it. Application servers communicate with the appliance and never directly to the cloud. By caching data locally the appliance provides more bandwidth than the wide-area network, reduces bandwidth and storage costs, and minimises the effects of link latency. The appliance can also deduplicate and encrypt data before staging it in the cloud.
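The appliance's core behaviour – cache locally, deduplicate, encrypt, then stage to the cloud – can be modelled in a few lines. This is a toy sketch, not any vendor's implementation, and the XOR "cipher" below is a stand-in for the AES-256 encryption a real appliance would use:

```python
import hashlib

class GatewaySketch:
    """Toy model of a hybrid cloud storage appliance (illustrative only).
    Blocks are cached locally; only unique, encrypted blocks are staged to cloud."""

    def __init__(self, key: bytes):
        self.key = key
        self.local_cache = {}     # block_id -> plaintext: hot data served at LAN speed
        self.cloud_store = {}     # stands in for an object store such as S3
        self.seen_hashes = set()  # deduplication index

    def _encrypt(self, data: bytes) -> bytes:
        # Placeholder XOR keystream; a real appliance would use AES-256
        stream = hashlib.sha256(self.key).digest()
        return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

    def write(self, block_id: str, data: bytes):
        self.local_cache[block_id] = data            # serve future reads locally
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.seen_hashes:           # dedupe before staging
            self.seen_hashes.add(digest)
            self.cloud_store[digest] = self._encrypt(data)

    def read(self, block_id: str) -> bytes:
        return self.local_cache[block_id]            # cache hit avoids WAN latency
```

Writing the same block twice stages only one encrypted copy to the cloud, which is how deduplication cuts both bandwidth and storage costs.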

Another route – or another element of the hybrid jigsaw – is to hybridise the application. Most mission-critical applications are vertical in nature, with the task moving through a stack of functionality and usually ending up in a database. While this database might be too sensitive (and large) to host in the cloud, other elements – most notably the web-based graphical user interface (GUI) – may be ideal candidates for cloud hosting.

For example, most modern applications are designed with a web front-end process that uses a browser or a series of RESTful application programming interfaces (APIs) to present information to users and obtain updates. This model makes it easier to accommodate different mobile devices or changes to the language, and it could also be cloud-hosted.

If the application does not have a web front end, you can follow the information flow through the stack and find the specific software component where formatting meets information processing. This is the logical GUI/application service boundary and it is where you could use that component's interfaces and APIs to connect a web front end.
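A minimal sketch of that boundary might look like the following, where a thin, cloud-hostable front end does nothing but format requests and responses, delegating all processing to the existing application service (the function name, route and fields are invented for illustration):

```python
import json

# Hypothetical existing application-service function sitting just below the
# GUI/application service boundary (name and fields invented for illustration)
def get_order_status(order_id: int) -> dict:
    return {"order_id": order_id, "status": "shipped"}

# A thin web front end: it only parses the request and formats the response,
# delegating all real work to the application service via its existing interface.
def handle_request(method: str, path: str) -> tuple[int, str]:
    if method == "GET" and path.startswith("/orders/"):
        order_id = int(path.rsplit("/", 1)[1])
        return 200, json.dumps(get_order_status(order_id))
    return 404, json.dumps({"error": "not found"})
```

In a real deployment this handler would sit behind an HTTP server in the public cloud, while the service call crosses back to the on-premise stack.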

Of course, this all assumes you can host the data yourself and provide adequately fast access to it, most probably via a query-server model. This allows cloud services to send database requests to a server and have it return only the specific data needed. That reduces traffic, delay and cost. You may also need to add load balancing at the service boundaries, deploying extra copies of each service as needed.
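The query-server model can be sketched with an in-memory database standing in for the on-premise store (the schema and data are invented for illustration): the query runs next to the data, and only the matching rows cross the wide-area link.

```python
import json
import sqlite3

# In-memory stand-in for the on-premise database (hypothetical schema and data)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "acme", 120.0), (2, "globex", 75.5), (3, "acme", 42.0)])

def query_server(customer: str) -> str:
    """Run the query next to the data and return only the matching rows as JSON,
    so the cloud-hosted service never pulls the full table over the WAN."""
    rows = conn.execute(
        "SELECT id, total FROM orders WHERE customer = ? ORDER BY id",
        (customer,)).fetchall()
    return json.dumps(rows)
```

A cloud-hosted service calling `query_server` receives a small JSON payload rather than the whole table, which is where the savings in traffic, delay and cost come from.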

For transaction processing systems, however, speed and latency issues mean that running on remote storage is rarely an option.

One route is to do what NetApp has done with its NetApp Private Storage, which is to partner with colocation (colo) provider Equinix to access the high-speed, low-latency local connections that the major cloud providers make available to nearby datacentres. This enables an organisation to host its filers in a colo and connect them directly into the likes of Microsoft Azure or Amazon Web Services (AWS). Your applications can then be cloud-hosted while your data remains on private storage. Alternatively you can replicate your on-premise filers to cloud-connected filers for disaster recovery, so if the primary datacentre is lost the applications can be spun up in the cloud instead.

Again, that relies on segmenting the application into layers and figuring out which layers – and which data – can safely be cloud hosted and which you want to retain control over.

Many planners consider hybrid cloud for mission-critical applications purely for the failover or cloud-bursting opportunities it brings, but a vertical view of your systems opens new ways to take advantage of the cloud, and can also improve the cost and performance of those mission-critical applications.

Hybrid cloud storage suppliers

Several of the major storage suppliers and cloud providers have specific products targeted at building and operating hybrid clouds.

EMC

EMC has a number of offerings related to hybrid cloud storage, such as CloudArray. Derived from its TwinStrata acquisition, this works as a cloud storage gateway running both on-premise and in the cloud to provide capacity expansion, data protection and so on, with 256-bit Advanced Encryption Standard (AES) encryption. It presents cloud storage as an iSCSI or network-attached storage (NAS) device, and can replicate to the cloud and archive cold data there while maintaining on-site access to hot data.

Hybrid storage is also an element of EMC's recently announced Enterprise Hybrid Cloud, which can provide hybrid access between its on-premise storage and clouds belonging to VMware's vCloud Air, Microsoft Azure and AWS. Also, EMC has acquired Maginatics, the developer of a global namespace that can overlay multiple public and private clouds for unified data management.

Microsoft

Although not normally thought of as a storage provider, Microsoft's acquisition of StorSimple gave it a local storage appliance that also works as a cloud storage gateway. Now branded as Azure StorSimple, the device provides local storage for primary data, while moving infrequently accessed data and snapshots to Azure cloud storage. A cloud-based snapshot can be mounted as if it were a local file system and accessed remotely.

Hitachi

Hitachi's object storage software, the Hitachi Content Platform (HCP), allows enterprises to build multi-tenanted private clouds hosting up to 80PB and to automatically tier data to public clouds. Supported targets include Microsoft Azure, Amazon Simple Storage Service (S3), Google Cloud, Hitachi Cloud Services, and any other S3-enabled store. HCPs can be globally distributed and synchronised for better performance and availability. The company also offers HCP-based file sharing and data ingestion products.

IBM

IBM has the goal of making public and private clouds seamless, for example via its Elastic Storage on Cloud (Esoc) service, which offers hybrid options. Hosted on IBM's SoftLayer bare-metal cloud and designed to scale beyond 1PB, Esoc – which also supports OpenStack Swift – works as a control plane able to automate snapshots, backups and movement of older data off to cheaper storage. It forms part of the SoftLayer-hosted IBM Platform Computing Cloud Service. IBM also has some hybrid cloud storage capabilities elsewhere in its range, notably in its Storwize and XIV families.

NetApp

NetApp promotes its hybrid concept NPS, which allows customer-owned filers to be hosted in colo facilities that have direct low-latency connections to the nearby datacentres of major cloud providers. It also now has a version of its ONTAP storage management software that works in the cloud – on an AWS virtual machine, for example – and interoperates with ONTAP on-premise to provide dynamic data portability.

Dell

Other companies, such as Dell, focus on working with the likes of VMware and OpenStack, and on providing the underlying cloud hardware and software, whether for mid-range private clouds or for enterprise and public clouds.

Red Hat

There are also startups and software developers that tackle hybrid cloud storage. For example, Red Hat says its software-based Red Hat Storage Server can bring together private cloud storage and the Amazon public cloud, unifying data access and creating a hybrid storage cloud. The company also owns Inktank, developer of Ceph Enterprise, an enhanced version of the open-source Ceph massively-scalable storage system.

Avere

Avere offers edge filer technology, either as hardware or a virtual server, which makes cloud resources addressable as NAS. The local filer minimises latency to the cloud, and can include flash, NVRAM and DRAM to further accelerate performance. Existing NAS can also be integrated with cloud into a seamless single storage resource.

Ctera

On the hardware side, Ctera offers a cloud storage gateway that provides local NAS, plus backup and replication to the cloud and a virtual cloud drive. Ctera Portal adds the ability to manage, synchronise and back up local and public cloud storage, allowing the use of cloud and on-premise storage depending on requirements. Desktop and mobile apps also allow local folders and files to be shared and synchronised to the cloud.

Nasuni

Nasuni provides local filers that act as cache and gateway for a unified hybrid cloud storage service. Files are moved to the cloud for long-term storage, as are regular snapshots. Both file (NAS) and block (SAN) access are supported, along with web-based access and mobile sync.

Panzura

Taking a different tack, Panzura's Quicksilver cloud storage controllers use a global file system to cover remote cloud storage and local disk or flash. Quicksilver devices can be federated with others at different locations, while the controllers can work with a wide range of public cloud storage platforms.

Amazon

Amazon's AWS Storage Gateway can provide hybrid functionality, either caching hot data locally or storing specific primary volumes locally for low-latency access, with asynchronous snapshots backed up to the cloud. Alternatively, the gateway can be configured as a virtual tape library, storing backups in the cloud.

HP

Several other companies also use public cloud storage as a backup or replication tier, including HP Autonomy, which uses the public cloud to back up a private cloud. HP's wider private and hybrid cloud strategy has been firmly based on OpenStack, but in September 2014 HP acquired Eucalyptus, an open-source tool for building AWS-compatible private clouds that can seamlessly burst to Amazon.
