Veeam won't get physical (backup)

Antony Adshead

Veeam is a backup software company and pioneer of virtual machine backup. Way back when, while most backup software was firmly rooted in the old world of physical servers and an agent on every box, Veeam and a few others (Quest, PHD Virtual et al) began to offer specialised products aimed at protecting VMs.

Gradually, the mainstream incumbents of the backup world (Symantec, CommVault, HP, EMC, IBM TSM et al) incorporated virtual machine backup into their products. This, one might have imagined, could have adversely affected the fortunes of the VM backup specialists.

But it didn't, and Veeam in particular roared from nowhere into second place in a TechTarget survey question on customers' backup provider preferences. The mystery then became: why would so many customers, who must already have backup products from the incumbents deployed, buy Veeam in such large numbers and add another backup product to their infrastructure?

Speaking to me this week ahead of Veeam's VeeamON forum in London, the company's VP for product strategy, Doug Hazelman, gave his interpretation of that phenomenon. In short, he thinks it's often just too much trouble for enterprises to deploy the new virtualisation-focussed features in their incumbent products.

He said: "Many want to have a single vendor environment but the reality is they have more than one, probably 2.5 to 3 on average. Do they like that situation? Everyone thinks it's the Holy Grail to standardise on one backup product but in truth they'd lose a lot of features by doing so."

"Virtualisation and the cloud are the future but in many cases we find customers don't take advantage of the virtualisation features in incumbent products. That's because they'd have to retool their entire existing deployment and so they think, 'Why not just look at best of breed?' Also, lots of companies are just not happy with the backup products they have in place."

It's a plausible explanation, though not one we can easily test, and it would account for Veeam's good showing in the backup product stakes of late.

The other puzzler for me in backup recently is why server backup software providers do not back up mobile and endpoint devices. Currently you can back up laptops with some of the mainstream backup products, but none, as far as I know, extends to smartphones, tablets and the like.

Meanwhile, however, companies such as Druva make a good living by offering endpoint/BYOD backup and the like. And it's an area that appears necessary for compliance, as data stored anywhere may face a requirement to be retained for legal e-discovery.

So, why doesn't Veeam consider it an area it needs to address?

Hazelman's view was this. "If people bring their own stuff into the workplace it's difficult to manage from a backup perspective. Personally, I don't need backup for my own laptop. If I lost it today I wouldn't lose data as it's all also somewhere else. It's a better approach to protect what's on servers than what's on laptops."

I'm a little less than convinced by this. But it'll be interesting to hear the view of others interested in backup and data protection. 

EMC: Open source for the ViPR isn't sauce for the Golden Goose

Antony Adshead

This week I spoke to EMC marketing VP Josh Goldstein about XtremIO 4.0.

EMC's flagship all-flash array has moved from third place in the all-flash stakes to first. Gartner in 2013 ranked it third with an 11.1% market share, behind IBM (24.6%) and Pure Storage (17.1%).

By 2014 EMC had achieved pole position with 31.1% market share. Pure Storage was second with 19%, and IBM third with 16% while no other vendor had more than 7%.

That's remarkable progress indeed.

Recently we've also heard much of EMC's forays into open source, with the opening of ViPR code to developers in Project CoprHD, as well as continued commitment to software-defined storage.

In a blog post coinciding with EMC's open sourcing of ViPR, Manuvir Das, engineering VP with EMC's advanced software division, spoke of how software development has changed, how open source has become mainstream, how it accelerates innovation and how excited EMC is about its "open source strategy".

So caught up in the excitement was I that when I spoke to Goldstein about XtremIO I thought I'd ask him if it too would become open source.

Of course I already knew the answer and Josh's immediate response was laughter. I pressed further and he said, "There is no plan to open source XtremIO. I'm not sure what the business model would be."

It's not like open source business models are a great mystery. There is a core of software code around a project/product, and it is open for free use and modification by anyone. Meanwhile, commercial distributions take in updates and changes in a more controlled fashion and charge customers for installation, configuration, support etc.

It could quite easily be applied to any product, even an all-flash array product like XtremIO.

But of course it won't be, because it is precisely XtremIO's software that is the goose laying the golden egg for EMC, and the upgrade to version 4.0 demonstrated further innovation on the fundamentals of the XtremIO XIOS operating system (OS).

In the upgrades announced at EMC World in May, the XtremIO headlines were an increase in X-Brick capacity to 40TB and per-rack totals topping 1PB.

But the other key announcements all built on XIOS's block storage technology, which sees data deduplicated and compressed on ingest. Building on that fundamental characteristic, EMC announced:

  • Integration of XtremIO with its RecoverPoint data protection software, leveraging the same dedupe and compression as the XIOS engine to speed data transmission over the wire.
  • Copy data management, with, for example, versions of Oracle, SQL Server and SAP databases generated over a lifecycle able to reside on XtremIO as space-efficient copies based, again, on that dedupe and compression algorithm in XIOS.
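The dedupe-and-compress-on-ingest idea is simple enough to sketch. Below is a minimal, purely illustrative Python example of inline block deduplication and compression - hash each fixed-size block, store only blocks not seen before, compress what is stored. It is not EMC's XIOS code, just the general technique the announcements build on.

```python
import hashlib
import zlib

class InlineDedupeStore:
    """Toy content-addressed store: dedupe and compress blocks on ingest."""

    def __init__(self, block_size=8192):
        self.block_size = block_size
        self.blocks = {}      # fingerprint -> compressed block
        self.volumes = {}     # volume name -> list of fingerprints

    def write(self, volume, data):
        refs = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            fp = hashlib.sha256(block).hexdigest()
            if fp not in self.blocks:              # store unique blocks only
                self.blocks[fp] = zlib.compress(block)
            refs.append(fp)
        self.volumes[volume] = refs

    def read(self, volume):
        return b"".join(zlib.decompress(self.blocks[fp])
                        for fp in self.volumes[volume])

store = InlineDedupeStore()
store.write("db-copy-1", b"A" * 32768)
store.write("db-copy-2", b"A" * 32768)   # identical copy costs almost nothing
print(len(store.blocks), "unique blocks stored")   # 1
```

The same property is what makes the space-efficient database copies possible: a second copy of largely identical data adds references, not blocks.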

So, really, it's not like EMC couldn't open source XtremIO like it has done with ViPR. It's just that XtremIO's software is a goose that's laying a golden all-flash egg for EMC, while ViPR can show no similar market share figures and has been open sourced in an apparent attempt to breathe life into it.

It's obvious really, but EMC could never come out and say this.

 

PS: Goldstein also said there is no plan to make XtremIO available as a software-only product for customers to install on their chosen X86 hardware.

"It's a choice," said Goldstein. "Our customers like pre-packaged arrays. They're standard X86 servers and CPUs anyway and all the value is in the software so there are no big savings to be made by buying your own hardware."

 

ViPR and CoprHD: EMC forced to swim with the open source tide

Antony Adshead

EMC's move to make its ViPR software-defined storage platform open source (as Project CoprHD) appears a bold one, but I can't help thinking the giant from Hopkinton, MA, has been subject to forces too strong to resist.

 

ViPR - which was launched two years ago at EMC World 2013 - allows customers to build pools of web-scale storage from heterogeneous storage media: EMC or third-party storage vendor arrays, or even commodity hardware. ViPR uses its own Object Data Services, which can be accessed via REST APIs including Amazon S3 or HDFS to enable analytics of data under its management.
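For context, "accessed via REST APIs including Amazon S3" means an S3-compatible object endpoint. The snippet below is a hedged illustration using the boto3 library against an assumed endpoint URL, bucket and credentials (none of them from ViPR documentation) to show the sort of call an application would make against such a service.

```python
import boto3

# Hypothetical S3-compatible object service exposed by a storage platform.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.com",   # assumed endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Write an object, then read it back for analytics or reporting.
s3.put_object(Bucket="analytics-data", Key="logs/2015-06-01.json",
              Body=b'{"event": "login"}')
obj = s3.get_object(Bucket="analytics-data", Key="logs/2015-06-01.json")
print(obj["Body"].read())
```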

 

At the time we noted that ViPR's success would be pinned on, as Freeform Dynamics' Tony Lock put it: ". . . getting third parties involved. That could make or break it. It'll take political will and diplomacy."

 

And that diplomatic success has proved to be limited, with native support from other storage vendors restricted to NetApp and HDS, and other makers' arrays accessible only via the OpenStack Cinder plugin.

 

In some ways, support from other vendors is the first hurdle, but not necessarily the most important one. Key of course are actual ViPR customers, and those seem to have been limited to short footnotes in EMC press releases.

 

Meanwhile, of course, open source in the datacentre has come on in leaps and bounds, in particular via OpenStack's cloud infrastructure platform and its storage modules. Open source in general has improved its image from beard-and-sandals to near mainstream in the enterprise.

 

There is no better proof of that than EMC's conversion to open source, and its keenness to parade its credentials in that respect.

 

Open source development gains from the involvement of the community and its feedback, in terms of practical improvement to the product and in terms of buy-in and credibility.

 

Clearly then, EMC has viewed the relative rise of open source as a threat. It initiated ViPR as an EMC product that would tie together a storage and analytics infrastructure over heterogeneous storage.

 

Two years later EMC has had to bow to that threat, or seize the opportunity, depending on how you wish to spin it, and now ViPR has spawned Project CoprHD.

 

The Massachusetts storage giant will continue to sell ViPR as a commercial product, but now it will hope its open source alter ego will help it develop and gain respectability in a way it has so far failed to do.

Software-defined storage: Not for us says Nimble Storage CEO

Antony Adshead

Separating software from hardware is an emerging trend in data storage. Under the phrase software-defined storage it has been at the forefront of vendor hype in recent years.

The appeal is that it potentially allows customers to build their own storage arrays by deploying storage software - which is after all where the main smarts in a storage system reside - on commodity server hardware.

So, it was interesting this week to come across one array maker - hybrid flash specialist Nimble Storage - which said it would not go down the road of offering a software version of its product.

In some ways that goes counter to a rising trend.

Vendors that offer storage software range from hardcore hardware pushers such as HP, with its StoreVirtual VSA, and virtualisation giant VMware, with its Virtual Storage Appliance and VSAN, to suppliers that have made their name with storage software, such as Nexenta and DataCore, and, more recently, startup flash array providers like SolidFire, which now offers its Element X operating system (OS) as a virtual appliance. Even hardware giant EMC offers software versions of its VNX and Celerra products for lab use.

But Nimble Storage says it won't go down that road. CEO Suresh Vasudevan told me this week that while its engineers use software versions of the Nimble OS in test and dev, the company would not offer a software-defined storage product to customers.

Basically, he said there's nothing in it for Nimble. His argument went like this:

Vasudevan said: "If, for example, a storage system costs $100 and the hardware from China is $40 of that, here's how the rest is spent: $28 on sales and marketing, $12 on engineering and R&D, $5 on company admin. So, if I sell just software and the cost of the hardware is the same or possibly more then we would have to sell software at much less. But I still have to pay the same amount to sales and marketing people and to engineers, so it's really not clear there's a benefit."
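Vasudevan's arithmetic is easy to lay out. Taking the figures in his example at face value, the sketch below simply works through the implied margin and shows why stripping out the hardware doesn't obviously help; the software-only price is my assumption, not his.

```python
# Figures quoted by Vasudevan for a $100 storage system
system_price = 100
hardware = 40
sales_marketing = 28
engineering = 12
admin = 5

margin = system_price - hardware - sales_marketing - engineering - admin
print("Margin on a bundled system:", margin)            # 15

# Hypothetical software-only sale: the customer buys the hardware themselves,
# so assume the vendor's price drops by roughly the hardware cost...
software_only_price = system_price - hardware            # assumed pricing
# ...but sales, marketing, engineering and admin costs don't shrink.
margin_sw_only = software_only_price - sales_marketing - engineering - admin
print("Margin on a software-only sale:", margin_sw_only)  # 15 at best
```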

Vasudevan also said that the software-only hyper-converged model - where server and storage reside together on the same box - is likely to lead to increased customer costs due to the need to protect numerous discrete compute/storage instances.

"When you have software on commodity servers the belief is it will lower costs, but in fact it often leads to overprovisioning. That's because the way people protect data on nodes without redundancy features such as dual controllers is to mirror data, often with triple replication," said Vasudevan.

So, that's why one array supplier will not go down the route of software-defined storage. I guess the argument works, for them, but it could be argued that they could at least give the customer that choice. The most compelling part of Vasudevan's case is probably that the hardware will cost the customer the same or more, but for the largest organisations out there that may not be the case.

World War Datacentre: The VMware empire and its rivals

Antony Adshead

In the real world we've seen centuries of wars, largely the result of old empires fading and new contenders arising. And the world of IT shows similar parallels, albeit on much reduced timescales and with much less bloodshed.

Currently the VMware empire - keen to expand and thereby cement its grip on large swathes of enterprise IT - is fighting some vigorous skirmishes, most notably with hyperconverged storage supplier Nutanix, while also manoeuvring around the open source cloud infrastructure OpenStack and keeping an eye on the horizon for container technology supplier Docker.

VMware's fight with Nutanix has the character of that between incumbent empire and new, upstart rival (eg, the British Empire vs Germany in the first half of the 20th century) as well as that between hegemonic empire and regional power (such as the USA and Iran in recent times).

On the one hand Nutanix is believed to be developing a hypervisor to rival VMware's. To date Nutanix is most noted for being a supplier of converged server/storage products that marry disk capacity and server power with support for VMware, Hyper-V and KVM hypervisor formats.

VMware's "ecosystem" is, how can we say this, rather comprehensive, and the company bets on customers adopting it from top to bottom of the datacentre and then finding it difficult to disengage. The development of alternative hypervisors by the likes of Nutanix threatens this.

At the same time VMware has developed an existential threat to Nutanix in the form of its own hyper-converged server/storage appliances, EVO:Rail, and has partnered with all the key server/storage vendors to bring that to market.

A recent major outbreak of hostilities came when Nutanix (as well as Amazon and Citrix) managed to halt a pending $1.6 billion VMware contract with the US Department of Defense (DoD). The DoD deal - involving more than two million product licences over five years - is/was something of a flagship for VMware and you can bet Nutanix et al's protests with the US Government Accountability Office went down like breakfast at Pearl Harbour on 7 December 1941.  

Meanwhile, VMware faces another imperial-scale rival in OpenStack, the open source private cloud infrastructure. Between the two there is both a fundamental philosophical difference (proprietary vs open source) and also some complementarity. There is no Nazi Germany vs USSR fight to the death implicit in this conflict.

VMware provides its virtualisation hypervisor, to which is appended a rich array of features, including the ability to orchestrate the delivery of IT as a cloud. It has also added storage functionality, for example via its storage APIs that connect to third party storage vendor products and more recently via its Virtual SAN (VSAN), as well as Virtual Volumes (VVOLs) that pin performance to specific virtual machines.

OpenStack is an open source cloud environment, and doesn't provide a virtualisation hypervisor. It does provide components including cloud orchestration, block, file and object storage, networking, security etc.
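To make the "components" point concrete: OpenStack's block storage service (Cinder) is driven through an API rather than an array management console. The snippet below is a hedged illustration using the openstacksdk library against an assumed cloud configuration; names and sizes are made up.

```python
import openstack

# Connect using credentials from a clouds.yaml entry named "mycloud"
# (an assumed configuration, not something from this article).
conn = openstack.connect(cloud="mycloud")

# Ask the block storage service (Cinder) for a 50GB volume, then list volumes.
volume = conn.block_storage.create_volume(name="vm-data-01", size=50)
for vol in conn.block_storage.volumes():
    print(vol.name, vol.size, vol.status)
```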

But despite the lack of core conflict it is possible to replace some VMware components with OpenStack, and this is what seems to have happened in highly publicised cases such as at PayPal, which has reportedly ditched a lot of VMware in favour of the open source cloud infrastructure.

The benefits of this type of move would be to give the customer more freedom, being able to mix and match OpenStack modules, use VMware or another hypervisor where needed, and avoid being locked into a tightly controlled VMware ecosystem.

So, for the time being, to VMware, OpenStack is a large, occasionally dangerous rival, able to infiltrate and supplant its efforts to colonise datacentres. Perhaps a politico-military parallel would be the current phase of the relationship between the developed west and China, with the latter quietly making inroads into the world's markets, African minerals etc, but with no physical showdown on the horizon (yet).

Finally, there is the as yet not quite quantifiable threat to VMware from the container technology typified by Docker, but also offered by Google and Microsoft. This is a form of virtualisation, but which does away with the hypervisor and instead allows apps to sit directly on the host operating system (OS) without the need for multiple guest OSs.

Containers address some shortcomings of hypervisors, namely their use of multiple duplicate running processes, boot volumes etc, and their use of dedicated memory and storage resources.
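The difference is visible even in a few lines of the Docker SDK for Python: the "guest" here is just a process sharing the host's kernel, not a full operating system booted under a hypervisor. A minimal sketch, assuming a local Docker daemon and the docker Python package; the image and command are illustrative.

```python
import docker

client = docker.from_env()   # talk to the local Docker daemon

# No guest OS boot: this starts a single process in isolated namespaces
# on the host kernel and returns its output.
output = client.containers.run("alpine:3.12", "uname -a", remove=True)
print(output.decode())
```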

In the long term containers may pose a grave challenge to virtualisation hypervisors such as VMware's, but for the time being the technology is relatively immature and lacks, for example, key storage features - such as the ability to share data between containers and container portability with regard to storage - as well as maturity in terms of security.

So, for now containers/Docker is not a pressing concern to VMware, but in the long term that could develop into a deeper, existential rivalry; perhaps more like the China of 50 years in the future to the developed west.

EVO:Rail and the missing acquisition phase of hyperconverged storage

Antony Adshead

It's rare that a storage-related technology arises without there being any acquisitions.

Look at flash. The startups emerged and pushed the big six to either buy some of them or to develop flash architectures from the ground up. Mostly big storage bought its way into the space; witness EMC/XtremIO, IBM/Texas Memory Systems etc.

But hyperconverged compute/storage has seemingly sidestepped that familiar cycle. It's a hot area that emerged a couple of years ago from pioneers such as Nutanix, Simplivity and Scale Computing. They combine processing and storage in one box, with scale-out capability that allows the customer to grow capacity in grid-like fashion.

They allow easy setup and administration, with a VM-friendly architecture.

Then the font of software-defined everything, VMware, came along and brought out EVO:Rail.

EVO:Rail virtualises CPUs with vSphere, storage with VSAN and networks with NSX, while vCenter Log Insight and the EVO Engine handle deployment, configuration and management of resources.

Currently, EVO:Rail only scales to a 48TB four node cluster, but more is promised when the bigger EVO:Rack hits the market.

Right now, hardware makers that offer pre-configured EVO:Rail appliances include Dell, EMC, NetApp, Hitachi Data Systems, Fujitsu and HP. That's all the top seven storage vendors except IBM.

And so, with a big player weighing in from early on - ie, VMware doing a good job of providing the software heart of hyperconverged computing - the usual cycle of acquisitions of startups has been bypassed.

So, will the pioneer hyperconverged players be eclipsed? You could imagine they might feel a bit lonely. They got to the dance first, knew all the trick moves but never got picked. VMware came along and swept all the big boys off their feet with EVO.

That's quite unusual in a world where startups not only innovate but often go on to form the core IP of a big player's offerings when they are acquired.

But, the hyperconverged pioneers are not necessarily destined to stay as wallflowers. For a start they offer a choice of the hypervisors they run. Nutanix runs Microsoft Hyper-V and it looks like Simplivity has it planned. Meanwhile, Scale Computing's HC3 products are based on the open source KVM hypervisor.

So, despite the hyperconverged storage and compute market bypassing the usual acquisition phase, it looks like the pioneer startups have a comfortable niche in which to sit, especially given this is mostly an SME to midrange play where they can compete in terms of scale and functionality.

Hybrid flash horse is a winner, but did Dell mean to back it?

Antony Adshead

When reflecting on the rise of flash storage over the last couple of years it's easy to come to the conclusion that some storage vendors have been more active, more free-spending, more progressive in the marketplace.

I have in mind the likes of EMC and IBM, with their big ticket flash startup acquisitions and the million-plus IOPS products that lead their all-flash offerings.

At the same time it has been easy to regard the likes of Dell (and others that have opted to retrofit existing arrays), which have not invested in dedicated all-flash arrays in the same way as EMC and IBM, as perhaps lagging behind somewhat.

But it has become apparent that hybrid flash is the most common application of SSD and that all-flash arrays are very much a minority interest. In a blog last year I cited a 451 Group survey that showed 67% of respondents had installed flash in SAN or NAS arrays while only 8% had all-flash arrays deployed.

So, as time has passed it has started to look like Dell may have hit the right spot in the market. Or maybe the right spot in the market has hit Dell?

Dell has all-flash array offerings; the Dell Compellent SC4020 can be completely populated with SSD. But its flash forays have largely been into the world of hybrid. The SC4020 can also house HDDs and there is also an EqualLogic iSCSI hybrid flash array, the PS6210XS.

It looks like Dell has been the beneficiary of a happy accident, but according to storage general manager, Alan Atkinson, Dell's bet in the hybrid vs all-flash stakes was a cert for a long time.

"It's a bit of a philosophical question," said Atkinson. "And we have taken sides."

"We considered all the options and honestly believed we didn't need to buy a flash startup or develop flash systems from the ground up. And I think the market has demonstrated that we didn't need to."

Atkinson's view is that flash is a disruptive technology but that it has transitioned from being most suited to high performance use cases.

"All-flash arrays started off expensive and were targeted at use cases that didn't need the full range of features like replication etc."

"That's a market and it is what it is. But what's happened is that as flash prices have decreased it has become a more general purpose storage medium."

Luckily for Dell, according to Atkinson, its Compellent storage operating system (OS), with its tiering capabilities, lent itself well to uses where flash is mixed with spinning media, and was re-written to some extent to optimise it for these use cases.

So, says Atkinson, Dell has no need for a separate all-flash array platform, especially as its customers want a common interface across HDD and SSD storage, or as he put it (IT marketing euphemism alert!), "all the wood behind one arrow," which, of course, you don't get with EMC or IBM, whose all-flash arrays run on discrete OSs.

It's quite a compelling argument, and the net result is that Dell has solid product offerings in flash in all but the most high-performance use cases. But was it what was intended all along? Hmm, well, the jury is still out on that one for me.

Cloud storage future shown by current limitations. Chapter 96: Ctera

Antony Adshead

I've written recently that despite relatively low enterprise take-up of cloud storage - due to concerns over bandwidth, availability, security, compliance, data portability etc - vendors big and not so big are making moves into cloud storage; usually hybrid cloud storage, and always limited by the current capabilities of the cloud.

That's because cloud storage is not fully ready, for all the above reasons, for mission-critical enterprise use.

A vendor whose work in the space illustrates this is CTERA, which finds a way to exploit the currently-usable facets of the cloud to provide branch office storage and backup plus file sync and share for mobile users.

Sure, neither of these represents a full-blown enterprise storage use of the cloud, but both are adaptations to what the cloud offers at present.

CTERA's offer centres on CTERA Portal, a platform that allows users to connect to private and public cloud services with linkages to cloud storage environments from the key vendors.

With CTERA Portal customers can deliver, for example, storage and backup services to remote office users via an on-premises NAS appliance, and file sync and share services for mobile users via a software agent on the endpoint device.

The appliance offers local disk capacity in what CTERA marketing VP Rani Osnat calls "disk-to-disk-to-cloud", AKA hybrid cloud, and does so with source and global deduplication.

The company's offerings allow enterprises to deliver remote office and/or mobile device data protection or they can be used by service providers to deliver to customers.

What is significant about them? It's another case where an enterprising business has spotted opportunities to exploit the cloud for storage, whilst at the same time recognising its limitations.

CTERA doesn't offer enterprise-class primary storage in the cloud. It can't. The cloud can't provide that - yet. Instead, it allows businesses to exploit cloud storage for the likes of backup and file sync and share.

In the case of its backup appliances, they are hybrid cloud. They have to be, because you must stage to disk locally if you can't guarantee throughput across public networks. Mobile sync and share, meanwhile, deals in small volumes of data that are neither time critical nor dependent on LAN-like levels of connectivity.

So, for now we have numerous vendors exploiting the opportunities and limitations of cloud storage. It's an interesting space to watch and will be increasingly so as the boundaries of possibility change over time.

One day we may see the likes of CTERA - not to mention EMC and NetApp - bringing products that can truly offer primary data-capable enterprise cloud storage services to market.

But that will then raise the question: If the cloud becomes feasible as a tier 1 storage location, what will become of the existing set of vendors, tied as they are to a world of on-premises arrays?

Zadara enterprise bolt-ons show the limits of cloud storage

Antony Adshead

Here at Powering the Cloud (formerly Storage Networking World Europe) in Frankfurt, many of the vendors and products actually highlight the current limits of the cloud.

One such example is Zadara, a company that's making a successful fist of exploiting the shortcomings of the cloud for enterprise customers.

It provides what it calls enterprise-storage-as-a-service. In other words it provides all we'd expect of the cloud - hourly billing, elasticity, zero downtime and no need for the customer to deal with the dark underbelly of storage infrastructure.

It does this with its own hardware - supplied on-premises, in colo facilities or linked to existing cloud facilities - built from commodity servers and its own fully cloud-featured storage software, and with it offers services that the likes of Amazon Web Services and Microsoft Azure can't provide.

Cloud storage services from the big beasts are full of shortcomings for an enterprise customer: volumes limited in number and size (likewise clusters), a lack of snapshots, replication and thin provisioning, and encryption keys in the hands of the cloud provider. And all this comes on top of potential issues with bandwidth and latency.

Zadara aims at filling these gaps, as a bolt-on service that provides so-called Virtual Private Storage Arrays with guaranteed RAM, CPU, drive and networking levels of service plus encryption that's totally in the hands of the customer.
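That last point - encryption "totally in the hands of the customer" - is worth illustrating. The sketch below is a generic example (not Zadara's implementation) of client-side encryption with Python's cryptography library: data is encrypted before it ever leaves the customer's environment, so a provider only ever stores ciphertext.

```python
from cryptography.fernet import Fernet

# Key generated and kept on the customer side; the provider never sees it.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"quarterly-results.xlsx contents"
ciphertext = cipher.encrypt(plaintext)       # what gets sent to the cloud

# Only the key holder can recover the data.
assert cipher.decrypt(ciphertext) == plaintext
```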

Amazon and Microsoft refer their "more choosy" customers to Zadara because they know their own shortcomings for enterprise users, says Zadara VP for business development, Noam Shendar.

"The big cloud providers are built for millions of customers, and aim for the lowest common denominator," says Shendar. "We're built to provide enterprise service for the select few."

That appears to be true, but while such a situation remains it shows the cloud is far from a go-to option for enterprise users, and while that remains the case companies like Zadara will fill the gaps.

Ideas EMC probably hates #96 - Seagate Kinetic drives

Antony Adshead

In storage one of the key fault lines we see is between vendors keen to ensure it's their product that builds in the intelligence needed for storage or storage-related operations to take place.

I guess that's not an earth-shattering observation. The basic functions of storage are in a way quite mundane - data is stored on pretty dumb drives - so it's the bells and whistles that count. The vendors usually call this "adding value".

On the one hand array makers are keen to keep that intelligence in the controller, while on the other hand hypervisor sellers try to drag it into their software (eg, VMware and its various storage and backup-related APIs).

You can probably count the latter among, "ideas that EMC, NetApp etc wish didn't really exist". And there is another idea that Big Storage would probably like to see un-invented, which is Seagate's Kinetic, a drive that cuts out the need for a storage array controller and associated hardware altogether.

Kinetic drives do this by building the intelligence required for object storage data access into the drives themselves. They replace the storage controller, as well as SAS and SATA controllers, RAID controllers etc, with key value store capability whose keyspace can scale to well in excess of the number of atoms in the universe.

In doing so they interface directly with object storage environments, including Ceph, OpenStack Swift and Scality. And all that is required is a JBOD enclosure with Ethernet connectivity to house Kinetic drives.
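Seagate published an open API for Kinetic, but rather than guess at its exact client signatures, here is a purely illustrative sketch of the key/value style of access involved: the application or object storage layer talks to a drive over Ethernet with put/get/delete operations on keys, with no SAS/SATA or RAID controller in the path. The KineticDrive class and its methods are hypothetical stand-ins.

```python
class KineticDrive:
    """Hypothetical stand-in for an Ethernet-attached key/value drive client."""

    def __init__(self, address):
        self.address = address      # e.g. "192.168.1.50:8123"
        self._store = {}            # in-memory stand-in for the drive media

    def put(self, key: bytes, value: bytes):
        self._store[key] = value

    def get(self, key: bytes) -> bytes:
        return self._store[key]

    def delete(self, key: bytes):
        self._store.pop(key, None)

# An object store would shard objects across many such drives by key.
drive = KineticDrive("192.168.1.50:8123")
drive.put(b"bucket1/photo-0001", b"...object bytes...")
print(drive.get(b"bucket1/photo-0001"))
```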

Sure, it's an object storage technology - definitely very much a minority interest currently - and only really useful for large-scale relatively slow access use cases, but in such cases customers can potentially exploit it to drive out cost in capital outlay on storage array controller hardware as well as in operational costs for power and cooling.

But in future - as data volumes increase and the technology matures - object storage is likely to come of age and widen this fault line. Add to that the growing tendency of storage software and hardware to separate at the level of controller and the big storage incumbent array makers have a challenging few years ahead.

VMworld 2014. No limits! Err, yes there are!

Antony Adshead

"NO LIMITS" shout the 10ft high ad boards around the vast conference centre.

At VMworld Europe in Barcelona the virtualisation giant is keen to stress how rapidly moving and changeable is the world in which we live. It's a "liquid world", says CEO Pat Gelsinger, one where we have moved from "rigid structures to liquid business" where the old adage "built to last" must become "built to change".

In the world of IT this calls, says Gelsinger, for "agility, bravery, change". That's because IT is simply awash with barriers; between cloud and traditional applications, between on-premises and off-premises infrastructure etc etc. We must, said the Pennsylvania farm boy (his description) turned tech CEO, "conquer the silos" and change "or" to "and".

VMware does its best to hammer the message home with speaker-straining basslines, dry ice and dancers at the 9am keynote, and backs it up with apparently supporting quotes from the likes of real geniuses Isaac Newton and Arthur C Clarke.

But in reality the world of VMware is all about limits. It's not an open source community. It's a commercial business whose bottom line is its, err, bottom line, and its modus operandi is all about making sure there are limits; that once you're locked into the VMware ecosystem you're more or less stuck there.

In the small corner of VMware that interests me as storage editor the limits are all too apparent. Its VSAN storage virtualisation tool, for example, only works with VMware's vSphere platform (never mind other hypervisors or physical servers) and it only provides storage to VMware virtual machines via a proprietary protocol.

And in VMware's recently announced EVO:Rail hyper-converged appliance we have a product that only works with VMware virtual machines and at present is constrained by very definite limits of scalability.

All of which leaves aside the big picture that once you've deployed VMware as a virtualisation environment you're going to have to spend a lot of time and money to provide things like storage and backup for the new environment.

Why do companies like VMware try so hard to convince us they're some kind of altruistic foundation, when we all know they exist to make profits and the best way they can do that is to lock people in? And really, why over-egg the pudding with the language of freedom and liberation?

Please, tell us how you think your products are the best at what they do but do drop the pretence that you're providing IT without limits.

Look out Symantec! Virtual server backup specialist Veeam is behind you

Antony Adshead

One of the big surprises of the recent TechTarget European Purchasing Intentions survey was the appearance of Veeam as number two customer choice of backup software product.

In the survey Symantec was recorded as the most popular backup product among the 511 storage professionals questioned, with 20% saying they had deployed it. Veeam came next with 19%, followed by HP (15%), EMC's NetWorker and Avamar products (13%), IBM Tivoli Storage Manager (TSM) (12%), then CommVault, CA, and Microsoft, all with 6% of respondents.

Veeam gained a healthy increase in share compared to the 2013 survey. Last year it was behind Symantec (30%), HP (16%) and IBM TSM (15%) and registered 12.5% of customer deployments.

Three years ago, in 2011, Veeam didn't show up at all. That year Symantec was top dog (37.5%), with HP (17.5%) and EMC (14%) second and third.

Veeam's position is surprising on a number of levels. Not because Veeam isn't a good product. There's no doubt it is. But it is surprising in some ways that a product that specialises in the backup of virtual machines only should reach such levels of popularity.

That's because enterprise backup products, from the likes of Symantec, EMC, IBM (Tivoli Storage Manager), HP, CommVault et al, have universally achieved the ability to back up virtual servers as well as physical ones, often with tight integration with the hypervisor and use of the advanced features within.

It's also a surprise because, while most organisations (87%) have virtualised servers it's also commonplace for IT environments to be mixed between virtual and physical devices. And it makes sense - doesn't it? - to use a backup product that can operate across the two environments.

Or maybe it doesn't matter to many organisations that are happy to run different backup software for physical and virtual servers. Or maybe those that ticked Veeam are SME organisations where it's more likely that the server environment is 100% virtualised?

I've no cast iron solutions to this particular conundrum, and am happy to congratulate Veeam on its success. 

All-flash arrays: Will time run out for mainstream acceptance?

Antony Adshead

The all-flash array has been the flavour of recent times in storageland. But, has the hype exceeded the reality?

It may well have done. Or maybe it's just that its timing is off.

If we look at our recent Purchasing Intentions survey there's no doubt that flash storage is popular. More than half of respondents indicated they had flash in use (36%) or planned to implement (7%) or evaluate it (25%) this year.

That's a fair amount of traction for flash, but not so much of that kudos can stick to all-flash arrays, according to another survey we ran this summer on ComputerWeekly.com, this time from 451 Research.

Its survey gained more granularity on the flash question and found for most respondents (67%) flash in use now is installed in existing SAN/NAS storage arrays while 25% have put it in servers. A mere 8% reported having deployed an all-flash array.

What that shows is that for the most part IT departments see the addition of flash to existing storage or to servers as the best way to accelerate I/O performance for key applications. That should be no surprise - all-flash arrays don't come cheap and with constrained budgets it's clearly best to target fast access media where you need it.

But it is in contrast to storage industry hype, and perhaps more importantly, the billions spent to develop or buy all-flash arrays, such as EMC's purchase of XtremIO, IBM's Texas Memory Systems acquisition and its pledge to invest $1 billion in flash.

The survey results also show that for many applications right now, disk is quite adequate. Compare the percentage of those in the ComputerWeekly.com survey that have virtualised servers (87%) with the numbers that have flash in place (36%) and it looks like there isn't a rigid driving shaft between the deployment of virtualisation and the need for flash.  

And so, the all-flash array could turn out to be something that takes its time to become a must-have. There's little doubt that disk will one day be superseded by solid state media, but in the meantime alternatives to flash are being developed. The hope in Big Storage Towers must be that flash is still the solid state media of choice when disk has finally had its day.

Death of the LUN: Another nail in the coffin from Gridstore?

Antony Adshead

Has the LUN had its day? It has been the de facto method of creating logical volumes on physical storage for decades, but in this era of virtualisation that may be becoming a thing of the past.

In VMware or Hyper-V environments the LUN still exists, but only as a single large pool in which the virtualisation platform's virtual drives are created.

But Gridstore, which substitutes so-called vLUNs for LUNs while providing storage for Microsoft Hyper-V virtual machines, claims to have done away with the LUN altogether, in something like the way Tintri does for VMware environments.

Gridstore combines storage arrays largely comprising cost-efficient 3TB or 4TB SATA drives and MLC flash (500GB or 1TB in its performance nodes) with software that lives in Microsoft System Center.

That software comprises a "vController" that matches Hyper-V virtual machines to vLUNs and provides quality of service (QoS) on storage provision. The vController, Gridstore says, emulates the single app-single server-DAS setup of the physical server world, with data put in queues and sent in bursts rather than randomly as it occurs.

"Virtual environments and the LUN are an architectural mismatch," says George Symons CEO at Gridstore, which makes scale-out array nodes in 12TB and 48TB base units, expandable to petabytes.

"A LUN must cater to many servers of different types of workload and are the site of the I/O blender effect", he says, referring to the way many and random I/O requests from virtual machines can overload physical storage.

In Gridstore arrays the vController can match storage performance to the needs of the VM, be that sequential or random, and can make sure "noisy" virtual machines do not disrupt others, alerting the admin if a VM isn't getting the gold, silver or bronze level of performance set for it.

Gridstore claims 40,000 IOPS for the minimum three-node configuration of its 12TB devices. It doesn't sound a lot when you think of the 500,000 and 1 million IOPS boasted by the all-flash providers.

But you don't need that, says Symons. "They talk of one and two million IOPS but 40,000 IOPS covers the needs of most people. To be honest most customers don't know what they need and most are in the 1,000 IOPS to 5,000 IOPS range. But in any case we can scale to 100,000 IOPS on nine nodes."

EMC's DSSD introduces the PCIe flash appliance

Antony Adshead

It's always interesting when a new storage technology comes along, not least because we have to figure out what exactly we're looking at.

Under the microscope this time is the fruit of EMC's acquisition of DSSD, a Silicon Valley startup bought last month.

EMC calls it "rack-scale flash" and Dan Cobb, CTO of the flash division at EMC, told me DSSD had tried to do three things in its short existence.

These are, he said: "Achieve massive storage density, using flash and other components to build an all-flash appliance - as distinct from an array - in terms of the number of chips that can be co-located."

"Build a connection to hosts that is massively parallel - using the Gen 3 PCIe connect - to connect between 8,000 and 16,000 independent flash dies directly to the host with tremendously low latency, compared to one drive that that typically holds 16 dies via a single SAS or SATA interface."

"With all flash management, wear levelling, garbage collection etc integrated into system software to achieve incredible effectiveness."

What we have here is a PCIe-attached flash appliance with capacity of "hundreds of TB, approaching petabytes" that can operate as direct-attached storage (DAS) or, it is claimed, as an extension to RAM.

EMC is aiming it at in-memory database use and big data for real time operations.

Cobb said EMC would be working on three forms of connectivity for DSSD. These would be:

  • As a traditional block interface, using NVMe (non-volatile memory express) to connect via PCIe.
  • Via custom APIs. DSSD will have new API primitives developed for it, for example a plugin for HDFS low latency operations tailored for specific applications.
  • For in-memory database use - for example with MongoDB - resulting in virtual memory primitives that allow the database to see one giant memory store in DSSD.

So, it looks like we have a new beast on our hands, a PCIe-connected flash appliance for use as extremely low latency DAS and/or as a RAM extension.

It's a bit like a server-side flash store but with capacity that massively outscales existing products, which will be able to, as Cobb put it, be used as "a very fast failover, for example running multiple SAP HANA instances."

And at the same time it might be something like the new memory channel storage products now emerging. Sure, it's not DIMM-connected but the very low latency claimed by EMC may allow it to become a RAM extension.

Anyway, more will certainly become clear over the next year, with EMC planning to "harden" the product and go for some kind of product launch this time next year.

Actifio gets funding, but what's the future for a good idea in storage?

Antony Adshead

Actifio this week announced it had gained another $100m in funding, adding to a previous round of $107m, and according to Ash Ashutosh, founder and chief executive of Actifio, that pushes its market valuation to $1bn.

Actifio's Copy Data Storage Platform is the latest iteration of what we might call file virtualisation.

In the Actifio scheme numerous, isolated, many-times-duplicated versions of files are rationalised into the smallest number of copies required for the various requirements of the organisation - file access, backup, archiving and/or disaster recovery.

Whereas most businesses suffer the unwanted and unplanned multiplication of files as users copy and email information between them, Actifio slims data down to a "golden copy", which is in practice the copy nearest to the application that created or updated it.

Other copies are held elsewhere. They may be needed in production by other geographically located datacentres, or may be at different stages in their lifecycle, being backed up or archived, for example, and are updated from the golden copy so that all are eventually synchronised. Copies are retained with snapshot functionality, ie they can be rolled back to any point in time where changes were made.
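A crude way to picture the golden copy plus point-in-time rollback model (this is my own toy illustration, not Actifio's design): keep one current copy and a set of labelled snapshots from which any earlier state can be restored.

```python
import copy

class GoldenCopy:
    """Toy golden copy with labelled point-in-time snapshots."""

    def __init__(self, initial):
        self.current = dict(initial)
        self.snapshots = []                  # list of (label, saved state)

    def snapshot(self, label):
        self.snapshots.append((label, copy.deepcopy(self.current)))

    def update(self, key, value):
        self.current[key] = value            # changes land on the golden copy

    def rollback(self, label):
        for snap_label, state in self.snapshots:
            if snap_label == label:
                self.current = copy.deepcopy(state)
                return
        raise KeyError(label)

gc = GoldenCopy({"orders.db": "v1"})
gc.snapshot("before-close")
gc.update("orders.db", "v2")
gc.rollback("before-close")
print(gc.current)                            # {'orders.db': 'v1'}
```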

Actifio targets the data protection and disaster recovery market and hopes to replace existing replication products, including at the storage array. It supplies the product as software or as an appliance on an x86 server. When customers deploy it Actifio discovers all the apps in the environment and policies can be set for their data - how many copies, on what tier of storage media it should be kept, etc.

It all sounds like the way you'd do file storage if you were thinking it up from scratch.

But there could be obstacles.

For a start, with 300 customers gathered over five years it hasn't exactly set the world on fire. And while the Actifio scheme is a clever one that can save a lot of disk space, re-architecting an existing environment might be a big ask for a lot of customers and a nerve-jangling prospect.

Perhaps that's why more than half its customers have deployed Actifio where data is clearly separated from production data - 6% use it for analytics and 17% for test and dev - or into relatively new, greenfield, environments at the 30% of its customers who are service providers.

Then there's the fact that there are many vested interests in storage that work against the idea of reducing the need for disk capacity. Ashutosh says the market it is playing in is worth $46bn but how much of that will take a swipe at disk vendors' revenues?

Whatever happens, the future for Actifio looks like one of going public with an IPO or being bought. Let's hope if it's the latter that it's not bought by a disk array maker that puts it out to pasture. 


BYOD backup: A looming Bring-Your-Own-Disaster?

Antony Adshead

Recently I blogged my thoughts on why mainstream backup products don't protect BYOD devices - laptops, tablets, smartphones etc - and came to the conclusions a) BYOD backup is a different beast to mainstream fixed source backup and is only provided by some specialised suppliers, and b) many BYOD users are probably using Dropbox et al.

However, it turns out I was off the mark, especially on the second point. In fact most organisations are not backing up these portable devices at all. That's the conclusion to be drawn from last spring's SearchStorage.com purchasing intentions survey. It found that more than two thirds of tablets and smartphones and nearly half of all laptops are not backed up (see chart below).

I banged on about the potential BYOD compliance risks of this in my previous blog. That, you would have thought, would be sufficient impetus for users to rectify the situation. Maybe they want to. But even if the desire is there, enterprise and midrange backup products simply don't protect these types of device.

All of which leads to the conclusion that the backup suppliers really are missing a trick. It is quite literally a huge unfulfilled market. If the backup software makers had BYOD backup functionality in their products they could deploy one of IT marketing's greatest weapons - fear. So, why they don't remains a mystery.


[Chart: BYODbackup.jpg - proportion of laptops, tablets and smartphones not backed up]


Cold Storage, Helium and HAMR. Can they save the spinning disk HDD?

Antony Adshead

While super-fast flash storage has hogged the headlines, recent months have seen the available capacity of spinning disk HDDs increase to 6TB with the shipping since last September of HGST's helium-filled SAS and SATA 3.5" HelioSeal drives. This is a 50% increase on the previously available 4TB drives.

HGST has been able to do this because it has got a jump on its rivals by patenting a method of sealing helium into drives instead of air. Helium, famously, due to its ability to produce a funny voice when sucked from a party balloon, is about 1/7th the density of air.

This reduces friction against spinning components in the HDD, when they start up and as they run, and brings, says HGST - which thinks it has an 18 month/two year lead on the competition - a 36% decrease in power usage plus, crucially for capacity, the ability to run seven (thinner) platters in the drive rather than the usual five.

As if a 50% boost in capacity was not enough, we're looking at the possibility of HDDs shipping with between 7TB and 10TB (with helium) by the end of this year and into 2015.

That's down to the adoption of new ways of writing data to the surface of platters and consequent increases in areal density as the HDD makers move from the current standard of perpendicular magnetic recording (PMR) to the next generation shingled magnetic recording (SMR).

Then, two or three years down the road we're looking at a tripling of current HDD capacities to around 12TB (with helium) with heat assisted magnetic recording (HAMR), which does what it says on the tin really, by heating up the surface of the drive and increasing the density of its storage capabilities.

It all sounds like great news until you think of the RAID rebuild times. These can currently stretch to days for 4TB drives that use the parity-based RAID levels (5 and 6) and will only get worse with capacities that double or triple that.
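A rough back-of-envelope calculation shows why. Assuming a sustained rebuild rate of around 50MB/s - a figure I'm assuming purely for illustration, since real rates vary with load and RAID implementation - rebuild time scales linearly with drive capacity.

```python
# Rough rebuild-time estimate: capacity / sustained rebuild rate.
rebuild_rate_mb_s = 50          # assumed sustained rate under production load

for capacity_tb in (4, 6, 10, 12):
    seconds = capacity_tb * 1_000_000 / rebuild_rate_mb_s
    print(f"{capacity_tb}TB drive: ~{seconds / 3600:.0f} hours "
          f"({seconds / 86400:.1f} days)")
# 4TB -> ~22 hours, 12TB -> ~2.8 days, before any RAID 6 double-rebuild pain
```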

"It's not good," says HGST's EMEA sales VP Nigel Edwards. "As capacities increase so will RAID rebuild times. It is an issue, but we are seeing huge demand and are being pushed for larger capacity drives."

According to Edwards the future of such ultra-high capacity HDDs is in "cold storage", ie archiving that sacrifices access times for ultra-low cost per TB. Here, if HDD makers can bring the cost per TB price of spinning disk down to that of tape, service providers will offer data archive services using vast amounts of disk drives that are spun up as customer access needs dictate.

It's a plausible case. And it'll be interesting to watch how it plays out. Because, as the HDD makers drive for ever-higher capacity disk the tape makers too - with a head start in terms of capacities/densities - are also looking at more archive-friendly technologies, such as LTFS and SpectraLogic's Black Pearl implementation.

An oft-heard soundbite used to proclaim "tape is/isn't dead". Now it seems there's a current of "disk isn't dead" emerging, finding use cases to ensure its survival.

Why don't backup products protect mobile/BYOD?

Antony Adshead

It's a contemporary mystery. Why, when the trend towards ever-greater mobile device use is such a prominent one, do the mainstream backup software suppliers almost all fail to provide for such use cases?

You'd need to have lived in a cave (without a broadband connection) for two years not to have seen the rise of mobile, tablets, smartphones and the BYOD phenomenon. And even if you had dwelled in such a place you could still learn that, for example, of the top 5 IT projects planned by ComputerWeekly.com readers in 2014, three of them are "tablet PCs", "smartphones" and "mobility".

Of course, there's a huge need for these devices to be protected, and for a variety of reasons. They may contain data that's valuable in business terms or sensitive for data protection reasons. Meanwhile, many types of data held or generated on mobile devices can be the subject of legal e-discovery requests.

Businesses go to great lengths to ensure virtual and physical server data is protected and a relatively small number of backup software suppliers make a healthy living from it, but curiously, almost none of them includes mobile device support in their products.

You can, if buying enterprise or midrange backup software from Symantec, CommVault, EMC, IBM, HP, Acronis, CA or Microsoft, ensure various levels of granularity of backup with at least the two biggest virtual machine hypervisors, probably integrate with their management consoles and with numerous applications, as well as use the cloud as a backup target.

But one thing you won't be able to do with most of those products is back up tablets and smartphones, and possibly not even laptops.

There are specialist suppliers, however, such as Druva, that do make specialised backup products for mobile and laptop devices. Meanwhile, midrange backup supplier Acronis has indicated it wants to travel in this direction with the purchase of file sharing and collaboration tool GroupLogic, but this is more a DropBox-type tool than backup. In this space too HP has its Connected Backup (that's not integrated with HP's Data Protector backup product).

There are also of course DropBox itself and Box and other file sharing and collaboration tools aimed at mobile users.

So, we have a yawning chasm. On the one side we have the backup products aimed at virtual and physical server estates and on the other we have some specialist mobile/laptop backup products and the file sharing tools.

That can't be good for users. It is surely preferable to be able to deal with backups for all devices from one product that covers fixed and mobile.

And you would think vendors are missing a trick too. After all, with the proliferation of mobile devices that's a huge potential pool of licence sales to be tapped.

Perhaps it boils down to the nature of backup and data protection in the two spheres. On the one hand, larger SME and datacentre backup needs an application that can schedule, manage and monitor the movement of large amounts of data on a regular basis. Meanwhile, mobile device use patterns dictate more atomised, individual levels of service, delivered irregularly and for relatively small amounts of data, and simply don't need to deal with scale in the same way.

So, perhaps never the twain shall meet and the world of backup is destined to remain a fragmented one. For me, it's a puzzle. If you have any clues what's holding the world of mainstream backup products back from sweeping up all those new mobile/BYOD users then please feel free to comment.

Startup watch: Gridstore targets Hyper-V storage market

Antony Adshead

This week I spoke to Gridstore, a startup that offers storage arrays that integrate with the Microsoft Hyper-V virtualisation environment, with on-board smarts it claims boost I/O for Hyper-V traffic.

Gridstore is delivered as a hardware appliance with its software configured on Dell servers it calls nodes. You can get two types of node - in a minimum of three - the H-Class high performance version with a mixture of PCIe flash and 12TB of spinning disk and the C-Class, aimed at less-demanding I/O applications that comes with spinning disk only. Building out these nodes can take you to a maximum capacity of around 3PB.

So far, so bog-standard array, but where the smarts come in is that Gridstore applies intelligence to the I/O queues. Normally, of course, virtualisation traffic creates the so-called I/O blender effect, which is when numerous virtual machines try to access disk and the potentially large number and random nature of the calls overwhelm the media.

What Gridstore does is to examine I/O coming in from different VMs and put it into queues and make that access sequential rather than random. The user can define the priority of access to storage for different VMs in a quality of service (QoS) gold, silver, bronze approach.
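As a rough illustration of that queue-and-prioritise idea (not Gridstore's actual vController code), the sketch below batches I/O requests per VM and drains them in QoS priority order, so that each burst hits the media sequentially; class names and priority values are made up.

```python
from collections import defaultdict, deque

QOS_PRIORITY = {"gold": 0, "silver": 1, "bronze": 2}   # lower = served first

class VControllerSketch:
    """Toy per-VM I/O scheduler: queue per VM, drain by QoS, burst sequentially."""

    def __init__(self):
        self.queues = defaultdict(deque)   # vm name -> queued requests
        self.qos = {}                      # vm name -> qos tier

    def submit(self, vm, request, qos="bronze"):
        self.qos[vm] = qos
        self.queues[vm].append(request)

    def drain(self):
        ordered = sorted(self.queues, key=lambda vm: QOS_PRIORITY[self.qos[vm]])
        for vm in ordered:
            burst = list(self.queues[vm])   # whole queue goes as one burst,
            self.queues[vm].clear()         # so the media sees it sequentially
            yield vm, burst

sched = VControllerSketch()
sched.submit("sql-vm", "write blk 100", qos="gold")
sched.submit("test-vm", "read blk 7", qos="bronze")
sched.submit("sql-vm", "write blk 101", qos="gold")
for vm, burst in sched.drain():
    print(vm, burst)
```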

George Symons, CEO at Gridstore, says it "cleans up contention", and agrees it's something like what Virsto (acquired by VMware in February 2013) does, but at the storage rather than the host.

What's also interesting is that Gridstore's controller software is integrated with Hyper-V at the level of its operating software, working like a driver specific to that OS, so can't be used with VMware.

"To be able to clean up I/O we need to operate at that level. When Hyper-V goes to write to the drive, we intercept it and deal with it. Hyper-V thinks it's dealing with a local drive," says Symons.

Due to this tight integration with Hyper-V, Gridstore can use advanced features such as data deduplication, thin provisioning, replication and snapshots in Windows Server 2012 that are usually the preserve of high-end storage hardware.

But being so tightly integrated into a specific hypervisor also means that a VMware ESX version is some way off, at least 12 months, says Symons.

The company (California-based) is making its first European push with the appointment of former NexSan man Andy Hill to VP for EMEA sales. The company is at B round in its funding cycle and late last year held a beta product programme of the block-based Hyper-V storage hardware with around 20 customers after switching its focus from scale-out NAS.

All of which means it's early days yet for Gridstore, but it's certainly bringing an interesting product to market, although tightly targeted at users of the, for now, minority hypervisor.