In 2013 virtualisation will drive flash and backup decisions

Antony Adshead

Virtualisation is the big fish in the IT pond right now, with effects that spread through the datacentre, especially to storage and backup.

Desktop and server virtualisation projects demand upgraded storage and backup infrastructures. Capacity requirements balloon as virtual servers proliferate, while storage performance needs increase as massive and random virtual server and desktop I/O hits disk arrays.

At the same time, the growth in virtual machines and their data means not only that more information must be backed up, but that it must now be backed up in two quite different environments: virtual and physical.

Both these areas are subject to great change at the moment. In this article we summarise the transformations taking place and suggest the best courses of action for navigating these fast-moving waters.

Storage: virtualisation a disruptive driver

Spending on disk systems is consistently the biggest item in storage budgets, and that will remain the case. Meanwhile, the big six storage companies will continue to rule the market share roost. The old adage that “no-one gets sacked for buying <insert massive tech vendor here>” applies in storage as much as anywhere.

But the big storage players are getting hot under the collar over flash, which provides the kind of I/O performance needed for the large and unpredictable loads generated by virtual machines. Disruptive flash newcomers have made good headway, acquisition fever is in the air and wars of words have erupted over where best to put flash in the server-storage infrastructure.

The big six are competing to show Wall Street and the competition that they’re on message with flash. For some that means buying startups, as with EMC’s acquisition of XtremIO, or building competing technologies of their own. Meanwhile, NetApp said it wasn’t going for an all-flash array, then decided it was.

The difficulty for vendors with existing array products is that their controller architectures are not built for the speed and throughput of flash. Hence, EMC bought an all-flash array maker, HDS rebuilt its controller architecture around a flash module in its VSP SAN arrays, while others prefer alternative routes altogether, such as Dell, whose Project Hermes will see server-side flash shared. Meanwhile, customers often choose to go with startups such as Fusion-io, Violin Memory and Whiptail for their flash fix.

So, there are two sets of choices for users: where to put flash and who to buy it from.

The location choices are: in the array, in the server, or in an appliance in between the two. You may need to make that decision in 2013. Each has its pros, and naturally the cons can be costly, so research well before deciding.

The flash market is in the process of being shaken out. An independent startup may be swallowed by a big vendor in six months’ time. And a reliable incumbent without the kind of flash you want now may have it in a year’s time.

Meanwhile, another ripple working its way through the storage system landscape from the virtualisation epicentre is the rise of VM-centric storage and so-called hyper-converged storage. These come in a number of forms, but they all marry cheap SATA drives for bulk storage with flash for hot data and deduplication between the two tiers. With Nutanix you get a server into the bargain and build grids of compute and storage. With Tintri, for example, you get storage in native VMware format.

So, the disk system technology scene is in some flux, which means customers seeking storage to support virtualisation have several choices to weigh.

Backup: virtual and physical now possible in one product

In the world of backup, the key tasks for the year ahead again derive from ongoing server and desktop virtualisation, and this will mean two things.

Firstly, users will start to back up virtual machines in the most efficient ways possible. Over the past few years virtual server backup has been a confusing and quickly changing scene. For VMware users, it is only since the introduction of the vStorage APIs for Data Protection (VADP) that VM backup has potentially been a smooth and efficient process.

Prior to that, there was the awkward two-stage process of VMware Consolidated Backup (VCB) and the super-inefficient method of backing up virtual servers as if they were physical ones, using an agent per server. As of 2011 this was how many still did it, but now that all the key mainstream backup products use the VMware APIs and their Microsoft Hyper-V equivalents to link directly to the hypervisor, there’s no excuse not to do it properly at the next upgrade.
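
For illustration, here is a minimal, hypothetical Python sketch of what “linking directly to the hypervisor” looks like, using VMware’s pyVmomi SDK. The vCenter address, credentials, VM name and disk key are placeholder assumptions, and a real backup product would add error handling, snapshot clean-up and the actual data movement; the point is simply that the snapshot and the changed-block query are requests to the hypervisor, not to an agent inside each guest.

```python
# Hypothetical sketch of hypervisor-level VM backup via VMware's APIs,
# using the pyVmomi SDK. Host, credentials, VM name and disk key are
# placeholders, not values from the article.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only: skip certificate checks
si = SmartConnect(host="vcenter.example.com",          # placeholder vCenter
                  user="administrator@vsphere.local",  # placeholder account
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Find the VM to protect (name is a placeholder).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app-server-01")

# 1. Take a quiesced snapshot so the backup reads a consistent image,
#    with no backup agent running inside the guest.
WaitForTask(vm.CreateSnapshot_Task(name="backup",
                                   description="API-driven backup",
                                   memory=False, quiesce=True))

# 2. With Changed Block Tracking enabled on the VM, ask the hypervisor
#    which disk areas need copying, rather than scanning the whole disk.
#    changeId "*" returns the allocated areas; a later run would pass the
#    changeId recorded at the previous backup to get only the deltas.
info = vm.QueryChangedDiskAreas(snapshot=vm.snapshot.currentSnapshot,
                                deviceKey=2000,   # typically the first disk
                                startOffset=0,
                                changeId="*")
print("Extents to copy:", len(info.changedArea or []))

Disconnect(si)
```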

Secondly, there’s the issue of mixed physical and virtual server backup. Almost no-one is entirely without physical servers, so both environments must be backed up. Some do it using methods better suited to the physical world, while others run two distinct backup environments, perhaps using one of the specialist virtual machine backup products.

Obviously, using agent-based backup on virtual machines is inefficient because of the I/O load it puts on the physical hosts, and so is running two different backup products. But there’s no need to do either any more, as all the mainstream backup vendors now support virtual machine and physical server backup from one product. And of the specialist virtual machine backup crowd, the now Dell-owned Quest does too.

So, there’s no excuse for not making your virtual and physical backups as efficient as possible in 2013.

 

