Some organisations took the opportunity to streamline their logical volumes during the re-provisioning process, whether a physical-to-virtual migration or a new virtual machine build, in order to break away from the old mindset of 2 x 73 GB hard disks = OS plus application data capacity. The advent of 146 GB drives made matters worse, as organisations scrambled for legacy hardware (in the form of smaller-capacity drives) to provide efficient storage platforms for simple infrastructure requirements.
Even in today's virtualised environments we still have storage containers for virtual machines, big buckets of 300 GB to 500 GB or larger volumes, to house the virtual machine data. The net efficiency of these storage containers over DAS may be a vast improvement, but overhead still exists within these architectures.
In larger deployments of, say, 30-plus hosts, where we expect to see a conservative consolidation ratio of 8:1, this can mean 240 virtual machines, each with its own 10 GB OS drive plus a data storage drive of perhaps 20 GB to 60 GB. That adds up to somewhere between roughly 7 TB and 17 TB of allocated storage. In the detailed design, an organisation would be looking at performance optimisation as a critical success factor for the virtualised servers to deliver a service back to the business. This will involve balancing load against disk capacity.
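The sizing above is simple enough to sketch as a back-of-envelope calculation. The host count, consolidation ratio and per-VM disk sizes come from the figures in this article; the variable names and the script itself are purely illustrative:

```python
# Illustrative capacity maths for the deployment described above.
# Figures are taken from the article; nothing here is vendor-specific.
HOSTS = 30
CONSOLIDATION_RATIO = 8            # conservative: 8 VMs per host
OS_DISK_GB = 10                    # per-VM OS drive
DATA_DISK_GB_RANGE = (20, 60)      # per-VM data drive, low/high estimate

vms = HOSTS * CONSOLIDATION_RATIO
low_gb = vms * (OS_DISK_GB + DATA_DISK_GB_RANGE[0])
high_gb = vms * (OS_DISK_GB + DATA_DISK_GB_RANGE[1])

print(f"{vms} VMs -> {low_gb:,} GB to {high_gb:,} GB thick-provisioned")
# -> 240 VMs -> 7,200 GB to 16,800 GB thick-provisioned
```

If, as is common in practice, each data drive is only part-full, the gap between allocated and consumed capacity is exactly the space thin provisioning lets an organisation claim back.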
Who are the regular adopters of thin provisioning today? Organisations with a solid grasp of ITIL and capacity planning on board, and medium-sized businesses that have control over their distributed or regional data centres.
Who should be adopting thin provisioning? Almost anyone, where the price is right. Large corporations with core data centres and centralised data warehouses will see the largest ROI, as will companies that feel the budget pinch each year and need to deliver the same with less: go claim back those unused gigabytes, because a penny saved is a penny earned.
About the author: Andrew McCreath is an engagement partner with GlassHouse Technologies (UK), a global provider of IT infrastructure services, with more than 16 years' experience in infrastructure and management information systems. Prior to joining GlassHouse, Andrew managed multi-million-dollar projects while employed with Accenture, Credit Suisse First Boston, Kimberly-Clark, Société Générale and EMC. He currently specialises in server virtualisation and data centre consolidation.