In traditional hosting models, storage is usually direct-attached storage (DAS) on the node serving up the virtual environments (VEs). DAS usually comes in the form of SATA devices with 1.5-3Gb/s interfaces and a sustained bandwidth of around 100MB/s. The great advantage of DAS is that it is fast (100MB/s) and scalable (as you add nodes, they bring more local storage with them). But the local nature of the traditional DAS model is also a disadvantage: if you want to migrate your virtual environments, you have to take a physical copy of their associated storage as well. This requirement makes the DAS model inappropriate for dynamic, highly fluid environments, such as those found in the cloud.
The ideal virtual storage solution for hosters offering cloud services is one that provides the speed and scalability advantages of locally attached storage but adds the ability to migrate, scale, and snapshot the storage. In addition, its cost per terabyte must be similar to that of local storage, and it should provide object copy redundancy for higher data reliability.
Parallels, which specialises in providing solutions to hosting companies and service providers worldwide, has surveyed the storage needs of the hosting industry. Its conclusions and recommendations are summarised here to guide hosting firms that are considering a move from traditional DAS to cloud storage to enhance their cloud offerings. Assuming you'll be deploying the cloud storage solution mostly on existing hardware, we've divided the requirements into two categories: must-haves for initial deployment, and nice-to-haves for the future.
Must haves for initial deployment
The absolutely critical requirements for your initial cloud storage deployment are:
- Cost-effectiveness. The storage solution should be able to reuse your existing hardware setup, require little extra hardware, and be as light as possible in terms of its resource footprint.
- Multi-node performance. The solution must be spread over enough nodes to be able to deliver the same level of performance as your current locally attached storage.
- Block-based objects. To ensure optimal performance and handling, the technology must be based on block-level objects representing the VEs' root file systems.
- Cloning and snapshotting. The solution must support copy-on-write use of master images, as well as the ability to freeze the state of the storage at any point in time.
- Hot-pluggability. The solution should be easy to expand by simply inserting additional nodes and devices.
- Failure tolerance and redundancy. At a minimum, the solution should protect against single-disk failure. Ideally, it should protect against single-node failure, as well.
- Exclusive object access. The solution should ensure that an object representing a root file system is mounted only once in the cluster at any given time.
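To make the cloning and snapshotting requirement concrete, here is a minimal sketch of copy-on-write semantics over a shared master image: each clone records only the blocks it changes, and a snapshot freezes a clone's state at a point in time. This is an illustration of the technique, not any particular vendor's implementation; all class and method names are hypothetical.

```python
class MasterImage:
    """Read-only golden image shared by all clones."""
    def __init__(self, blocks):
        self._blocks = dict(blocks)   # block number -> bytes

    def read(self, n):
        return self._blocks.get(n, b"\x00")


class Clone:
    """Copy-on-write view of a master image: writes go to a private
    overlay, reads fall through to the master for untouched blocks."""
    def __init__(self, master):
        self._master = master
        self._overlay = {}            # only blocks this clone has changed
        self._snapshots = []

    def write(self, n, data):
        self._overlay[n] = data       # the master image is never modified

    def read(self, n):
        if n in self._overlay:
            return self._overlay[n]
        return self._master.read(n)

    def snapshot(self):
        """Freeze this clone's current state; returns a snapshot id."""
        self._snapshots.append(dict(self._overlay))
        return len(self._snapshots) - 1

    def rollback(self, snap_id):
        self._overlay = dict(self._snapshots[snap_id])


master = MasterImage({0: b"boot", 1: b"root"})
ve_a = Clone(master)                  # two VEs share one master image
ve_b = Clone(master)

ve_a.write(1, b"customised")          # only VE A's overlay grows
snap = ve_a.snapshot()                # freeze VE A's state
ve_a.write(1, b"broken update")
ve_a.rollback(snap)                   # restore the frozen state

print(ve_a.read(1))                   # b'customised'
print(ve_b.read(1))                   # b'root' (still sees the master)
```

Because clones only store deltas, spinning up many VEs from one master image costs little extra storage, and a snapshot is just a cheap copy of the overlay rather than of the whole image.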
Nice to have for future deployments
Some additional features that you may find convenient to add for future deployments are:
- Deduplication, to free up additional storage space.
- Sparse objects (thin provisioning), so you can safely overcommit storage.
- Assistance for shrinking legacy file systems, so customers who are charged per unit of storage can optimize their use of storage.
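Thin provisioning can be illustrated at the file level: a sparse file reports a large apparent size while consuming disk blocks only for the data actually written, which is what makes overcommit possible. The sketch below (with hypothetical sizes, on a typical Linux file system) shows the gap between apparent and allocated size.

```python
import os
import tempfile

APPARENT_SIZE = 100 * 2**20         # 100 MiB promised to the customer

fd, path = tempfile.mkstemp()
try:
    # Truncating up to the full size creates a hole: no data blocks
    # are allocated until something is actually written.
    os.ftruncate(fd, APPARENT_SIZE)
    os.pwrite(fd, b"real data", 0)  # only this region is backed by disk

    st = os.stat(path)
    apparent = st.st_size           # what the guest sees
    allocated = st.st_blocks * 512  # what the host really uses

    print(f"apparent:  {apparent // 2**20} MiB")
    print(f"allocated: {allocated // 1024} KiB")
finally:
    os.close(fd)
    os.remove(path)
```

In a real deployment the same idea is applied at the image or volume level rather than to a single file, and safe overcommit then depends on monitoring actual allocation against physical capacity so the pool is grown before it fills up.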
As the cloud revolution progresses, the ability to separate storage from your physical systems will become increasingly important.
We recommend that hosting companies first understand what their storage requirements are and how well different cloud storage systems match them. After this analysis, we suggest they trial a cloud storage solution that addresses the hosting industry's largest infrastructure problem: the lack of separation of storage and computing.
John Zanni is vice president of service provider marketing and alliances at Parallels.