One of the brightest stars in the storage firmament today is thin provisioning. Offering immediate benefits for both IT administrators and CFOs, true thin provisioning can be great news in the datacentre. So how can you identify the "real thing"?
Large or small allocation unit?
How much physical capacity is consumed when a write is received?
As data is written, different implementations of thin provisioning consume varying amounts of physical capacity. Where the unit of consumption is much greater than the size of the write, the efficiencies are diminished. When scores of megabytes are dedicated on even the smallest of writes, simply creating a file system on a thin-provisioned volume can fill it, eliminating any value of thin provisioning before the first file is even written.
Conversely, fine-grained allocation, where capacity is dedicated in kilobytes, maximises capacity savings and thin provisioning can easily be applied broadly to many host operating systems, file systems and applications.
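As a rough illustration of the arithmetic (all sizes here are hypothetical, not drawn from any particular product), consider the same scattered metadata writes landing on a volume under a coarse and a fine allocation unit:

```python
# Hypothetical illustration: physical capacity consumed when each write
# dedicates a whole allocation unit. All sizes are in bytes.

def consumed_capacity(write_offsets, unit_size):
    """Return the physical capacity dedicated by the distinct
    allocation units that the given write offsets touch."""
    units = {off // unit_size for off in write_offsets}
    return len(units) * unit_size

# 1,000 small 4 KiB writes scattered across a 100 GiB volume, e.g.
# file-system structures laid down at format time (one every 100 MiB).
offsets = [i * 100 * 1024 * 1024 for i in range(1000)]

coarse = consumed_capacity(offsets, 64 * 1024 * 1024)  # 64 MiB units
fine = consumed_capacity(offsets, 16 * 1024)           # 16 KiB units

print(f"coarse units: {coarse / 2**30:.1f} GiB consumed")   # 62.5 GiB
print(f"fine units:   {fine / 2**30:.4f} GiB consumed")     # ~0.015 GiB
```

Roughly 4 MB of actual data consumes 62.5 GiB of the volume under the coarse scheme, but only about 15 MiB under the fine-grained one, which is the effect the section above describes.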
Reserved or reservationless?
Is physical capacity pre-configured for and reserved upfront into specific thin provisioning pools?
With "reserved" implementations, physical capacity is pre-configured and committed upfront into specific thin provisioning pools. Capacity set aside for one pool cannot serve another, so much of it sits idle, which is a tremendous waste.
With "reservationless" implementations, capacity is drawn and configured in fine increments from a single pool with no pre-dedication.
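A toy model makes the difference concrete (the workload names and numbers are invented for illustration): pre-carving capacity into per-workload pools can refuse writes while other pools sit idle, whereas a single shared pool serves the same total demand in full:

```python
# Hypothetical toy model: three workloads draw capacity over time.
# Reserved: capacity is split into per-workload pools upfront.
# Reservationless: all workloads draw from one shared pool.

TOTAL = 300  # total physical capacity, arbitrary units
demand = {"db": 140, "mail": 40, "web": 60}  # eventual writes per workload

# Reserved: 100 units pre-committed to each workload's pool.
reserved_pools = {name: 100 for name in demand}
reserved_served = {n: min(demand[n], reserved_pools[n]) for n in demand}
reserved_shortfall = sum(demand[n] - reserved_served[n] for n in demand)
idle = sum(reserved_pools[n] - reserved_served[n] for n in demand)

# Reservationless: every draw comes from the single shared pool.
pool = TOTAL
shared_served = {}
for name, want in demand.items():
    got = min(want, pool)
    shared_served[name] = got
    pool -= got
shared_shortfall = sum(demand[n] - shared_served[n] for n in demand)

print(f"reserved: {reserved_shortfall} units refused, {idle} idle")
print(f"reservationless: {shared_shortfall} units refused")
```

In this sketch the reserved scheme refuses 40 units of writes while 100 units sit stranded in the other pools; the reservationless scheme serves all 240 units of demand from the same 300 units of physical capacity.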
Manual or autonomic?
Do administrators have to manually configure storage and RAID groups into pools to keep them replenished?
In traditional storage environments, capacity is dramatically over-provisioned to avoid the disadvantages of manual re-provisioning. Manual thin provisioning retains much of that complexity: provisioning decisions cannot easily be undone, and pool sizing tends to be conservative, severely limiting the benefits of thin provisioning.
With "autonomic" thin provisioning, capacity is dedicated and configured just-in-time and without human intervention, eliminating administrator time and effort along with the compensating over-provisioning that manual schemes require.
Dual controller or massively scalable?
Can you aggregate lots of distinct workloads? Do you have room to backfill virtual capacity with physical capacity and performance upgrades?
Thin provisioning is about making capacity promises that may have to be kept in the future. Hence array scalability is of utmost concern. Dual controller architectures are disadvantaged in this respect, but scalable systems are ideal for thin provisioning since they are already architected for storage consolidation and growth.
Bolt-on or built-in?
Is thin provisioning an afterthought, forcing trade-offs between one benefit and another, or is it fully integrated with all other functionality?
Sometimes thin provisioning is added to a pre-existing hardware and software architecture, leading to unhappy trade-offs such as diminished performance, loss of other functionality and inadequate monitoring. By contrast, hardware and software architectures built to support thin provisioning offer full integration for safe, confident operation.
Thin provisioning can be a tremendous boon for most organisations. But the benefits will depend on the technology features, which can vary dramatically from supplier to supplier. The checklist above can serve as a reliable starting point.
● 3PAR's David Scott will be speaking on "Thin Provisioning: Sparking a Green Storage Revolution" in his seminar at Storage Expo 2007