Private cloud storage is increasingly seen by organisations as an alternative to existing methods of providing shared storage to business units. But what defines private cloud storage is not always clear. It is sometimes confused with pools of virtualised storage provisioned manually to users, but such a setup lacks the attributes required to qualify as a private cloud storage environment.
So, what does make a private cloud? In this article we will discuss how to establish an internal, or private, storage cloud, the features and functionality to expect, and products that deliver a private cloud storage infrastructure.
Defining private cloud storage
Let’s start with a definition of cloud computing. One of the best and most widely accepted is provided by the US National Institute of Standards and Technology (NIST):
“Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (eg, networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”
There are some key elements of this definition that make cloud storage different from simply presenting and provisioning storage within a data centre.
Elasticity, or scalability, is the ability to scale the infrastructure up and down to meet customer demand. Given overall levels of data growth, downward elasticity is unlikely to be a problem for most IT departments: capacity freed up by, say, one department ending a project will in most cases be taken up by another’s demand for more storage. That is just as well, because money spent on capacity is a sunk cost; with an internal cloud it is the internal customer, not the IT department, that benefits from elasticity. The key issue for most will be scaling upward, as it requires physically deploying hardware and integrating it into the existing storage networking infrastructure.
One way of managing increased capacity demand is to implement a node-based infrastructure, such as that provided by HP 3PAR (for block storage environments) or EMC Isilon (for file-based environments). These architectures allow the nondisruptive addition of nodes, and therefore additional capacity, into storage clusters. Nearly all the major storage vendors are moving to node- or cluster-based products to meet the need for such scalability.
However, node-based deployments need to offer more than simply scalable capacity to be suitable for cloud storage. They should also deliver consistent I/O performance as capacity scales, provide high availability, and allow components to be added, removed or upgraded without outage or downtime.
The NIST definition highlights the requirement that cloud resources should be “rapidly provisioned and released with minimal management effort.” We can see, therefore, that traditional storage management, which relies on trained technicians to provision and decommission resources, will not work for cloud environments. Instead, cloud-ready storage devices should offer APIs to perform business-as-usual storage tasks.
Two good examples of this are EMC’s Atmos and recent storage startup SolidFire with its SF3010 array. Both products offer a REST (Representational State Transfer) API that enables resources to be provisioned and decommissioned without the involvement of a storage administrator. Such APIs enable orchestration software to automatically create and destroy LUNs or file shares on demand in response to customer requests.
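To illustrate the idea, here is a minimal sketch in Python of what an orchestration layer might send to such an API. The endpoint path and payload fields are invented for illustration; each vendor’s REST API (Atmos, SolidFire’s Element OS) defines its own.

```python
import json

def build_provision_request(name, size_gb, tier):
    """Build an HTTP request asking the array to create a volume.

    The /api/v1/volumes endpoint and the payload shape are hypothetical;
    a real deployment would follow the vendor's published API.
    """
    payload = {"name": name, "sizeGB": size_gb, "tier": tier}
    return ("POST", "/api/v1/volumes",
            {"Content-Type": "application/json"}, json.dumps(payload))

def build_release_request(volume_id):
    """Decommission a volume -- the 'released' half of the NIST definition."""
    return ("DELETE", "/api/v1/volumes/%s" % volume_id, {}, None)
```

The point is that both creation and release are plain, scriptable HTTP calls, so orchestration software can drive the full lifecycle without a storage administrator in the loop.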
Automated provisioning is a step further than many organisations want to go when delivering storage. It is perceived as relinquishing control, with potential negative effects on performance, manageability and data integrity in a complex environment. But modern node-based systems are designed specifically around on-demand requirements, while control in a complex, fast-moving environment is retained through the use of a storage catalogue.
In many organisations, requests for storage are processed on an individual and bespoke basis. This is clearly not a scalable modus operandi and is difficult to automate while adhering to published standards of performance and efficiency in a cloud environment. A more effective approach is to provide the customer with a range of offerings from a storage catalogue. This lists the capacity, performance and availability metrics the customer can expect from a number of storage tiers.
The aim of the storage catalogue is to standardise customer requests to a degree that enables them to be automated through orchestration software. Requests not provided for by the catalogue can be processed manually as exceptions.
The storage catalogue isn’t supplied by a vendor but is written by the IT department for its environment. It isn’t a product but simply a definition of the offerings presented to internal customers. Storage hardware and software then deliver storage to the specifications laid out in the catalogue. ITIL (Information Technology Infrastructure Library) offers guidelines that can help in the creation of service catalogues.
A correctly specified catalogue provides one further benefit: abstraction. If resources are provided in a cloud infrastructure, the underlying technology is abstracted from the customer view. The customer should only be concerned with whether the storage resources meet the agreed service levels.
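In code terms, a catalogue can be as simple as a table of tiers plus a rule for matching requests against it. The sketch below uses invented tier names, metrics and prices; a real catalogue would carry the IT department’s own figures.

```python
# Illustrative catalogue -- the tiers, metrics and prices here are
# invented; a real catalogue is defined by the IT department.
CATALOGUE = {
    "gold":   {"max_iops": 20000, "availability": 99.99, "gbp_per_gb_month": 0.50},
    "silver": {"max_iops": 5000,  "availability": 99.9,  "gbp_per_gb_month": 0.20},
    "bronze": {"max_iops": 1000,  "availability": 99.0,  "gbp_per_gb_month": 0.05},
}

def select_tier(required_iops, required_availability):
    """Return the cheapest tier that meets the request, or None so the
    request can be routed to manual handling as an exception."""
    candidates = [
        (spec["gbp_per_gb_month"], name)
        for name, spec in CATALOGUE.items()
        if spec["max_iops"] >= required_iops
        and spec["availability"] >= required_availability
    ]
    return min(candidates)[1] if candidates else None
```

Note how this embodies both principles from the article: requests are standardised enough to automate, and anything the catalogue can’t satisfy falls out as a manual exception rather than breaking the process.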
Cloud storage is by nature multi-tenanted. By that, we mean resources are provided to multiple customers on the same infrastructure. The obvious benefit of multi-tenancy is one of cost reduction by sharing resources; for example, thin provisioning in multi-tenant environments enables higher levels of hardware utilisation to be achieved.
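The utilisation benefit of thin provisioning in a multi-tenant pool is easy to quantify: because only written blocks consume physical capacity, the provisioned total can safely exceed the physical total. A small worked sketch (figures invented):

```python
def thin_provisioning_stats(physical_gb, volumes):
    """volumes: list of (provisioned_gb, written_gb) pairs.

    With thin provisioning, only written data consumes physical
    capacity, so provisioned capacity can exceed physical capacity
    (overcommit) while actual utilisation stays visible.
    """
    provisioned = sum(p for p, _ in volumes)
    consumed = sum(w for _, w in volumes)
    return {
        "overcommit_ratio": provisioned / physical_gb,
        "utilisation_pct": 100.0 * consumed / physical_gb,
    }
```

For example, three tenants provisioned 2 TB in total against a 1 TB pool gives a 2:1 overcommit; the pool only needs growing as the written total, not the provisioned total, approaches capacity.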
However, most multi-tenant environments are based on notions of security rather than performance. As an example, on block-based arrays, customers can be segregated using zoning and masking. On file-based arrays, features such as NetApp’s MultiStore enable the creation of multiple virtual arrays, each of which can be assigned to an individual customer.
These products are focused on ensuring data security for multiple customers, but equally important is the ability to deliver multiple performance tiers through quality of service (QoS). To date, the only manufacturer offering this feature natively at the LUN level is SolidFire with its Element Operating System, which allows performance to be tuned on each LUN.
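SolidFire’s implementation details aren’t public, but a common mechanism for enforcing a per-LUN IOPS ceiling of this kind is a token bucket, sketched below. This is a generic illustration of the technique, not Element OS’s actual code.

```python
class IOPSLimiter:
    """Token-bucket cap on one LUN's I/O rate -- a common QoS
    mechanism, shown here generically (not SolidFire's implementation)."""

    def __init__(self, max_iops, burst):
        self.rate = max_iops       # tokens refilled per second
        self.capacity = burst      # bucket size: the permitted burst
        self.tokens = float(burst)
        self.last = 0.0

    def allow(self, now):
        """Return True if one I/O may proceed at time `now` (seconds)."""
        # Refill the bucket in proportion to elapsed time, up to capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False   # caller queues or throttles the I/O
```

One limiter per LUN gives each tenant a hard performance envelope, which is what turns a shared array into multiple predictable performance tiers.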
Reporting and billing
The final aspect to consider is reporting and the use of that information to bill the customer. Many organisations today still use spreadsheets to track storage resources, which is simply inadequate in cloud environments, where storage is routinely created and destroyed. Cloud storage requires proper tracking and billing built into the infrastructure itself, with billing tied back to the storage catalogue so the customer can easily understand how their charges have been calculated.
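Tying charges back to the catalogue makes the bill auditable: each line item is just the tier rate multiplied by the capacity provisioned. A minimal sketch, with invented rates standing in for an organisation’s own:

```python
# Illustrative per-tier rates (GBP per GB per month) -- invented
# figures; real rates come from the organisation's storage catalogue.
RATES = {"gold": 0.50, "silver": 0.20, "bronze": 0.05}

def monthly_bill(usage):
    """usage: list of (tier, provisioned_gb) records for one customer.

    Returns (line_items, total) so the customer can see exactly how
    the charge was derived -- the 'full disclosure' requirement.
    """
    line_items = [(tier, gb, round(gb * RATES[tier], 2)) for tier, gb in usage]
    total = round(sum(cost for _, _, cost in line_items), 2)
    return line_items, total
```

Because each line maps one-to-one onto a catalogue entry, disputes reduce to checking provisioned capacity against the agreed rate rather than reverse-engineering a spreadsheet.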
Most storage resource management tools on the market today offer some form of reporting and billing but may not match exactly to customer requirements. Dedicated reporting tools are available from companies such as Storage Fusion, which provides a service-based reporting engine called SRA (Storage Resource Analysis). And through its acquisition of Novus, IBM offers the SERP (Storage Enterprise Resource Planner), a software platform for reporting, capacity planning and chargeback.
Private, or internal, cloud storage is very different from simply provisioning LUNs from a pool of storage arrays. It requires:
- Elasticity, the ability to scale up or down on demand
- Automated management, ie, rapid provisioning and decommissioning of resources
- Multi-tenancy, or support for multiple clients, with segregation by security and performance
- Reporting and billing, with full disclosure on resource usage and costs
All the above requirements are underpinned by the storage catalogue, which lists the services available in the cloud storage offering.
Many storage vendors, including HP, EMC and Hitachi, offer orchestration of storage resources as part of a converged computing infrastructure. Unfortunately, pure cloud storage solutions still require a large degree of integration by the customer to bring the individual hardware and software components together.
This was first published in July 2011