StorPool's Linux-based software-defined storage dodges VMware and Hyper-V, for now

Antony Adshead

Software-defined storage products seem to be coming thick and fast at the moment.

This week I spoke with another: Bulgaria-based StorPool, whose storage software can be deployed on commodity servers to provide SAN-like block storage from distributed disk, and which can sit alongside application processing tasks to provide hyper-converged storage.

StorPool requires at least three instances of server hardware be deployed. These can be for storage only, or - by using only 5% to 10% of CPU performance for storage tasks - can co-reside with compute. Software also has to be deployed on the client side to manage storage consumption.

With an upgrade this week StorPool now claims performance of 250,000 random read IOPS and 4,200 MBps sequential reads on three servers, with these figures increasing as hardware nodes are added with scale-out capability.

Currently, however, StorPool is limited to deployment on hardware running Linux operating systems (OS), from where it can be provisioned for Linux-friendly virtualisation and container platforms such as KVM, Xen or Docker.

CEO Boyan Ivanov said the ability to deploy StorPool into Windows servers is currently under development and will be available within months.

For the moment, StorPool is cutting its cloth to suit its Linux-only capabilities. So the bulk of its target customers are service providers that already run a lot of Linux - among whom it claims deployments of up to several hundred terabytes - although it also aims at devops use cases in wider verticals.

As Ivanov puts it: "We are a young vendor so we have to do one thing very well."

So, for now, it is not targeting the enterprise market and its preference for VMware, with which StorPool is not natively compatible, though it can run in VMware "with performance degradation," says Ivanov.

Disk types accommodated can be HDD, flash or a combination of the two and storage features include synchronous replication, snapshots and thin provisioning.

Write-back cache enables write operations to be acknowledged by the storage system as soon as they are held in memory, with the data flushed to the hard disk at a later stage.
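
To illustrate the mechanism, here is a minimal write-back cache sketch in Python. It is purely illustrative - the class, parameter names and flush policy are assumptions, not StorPool's code - but it shows the acknowledge-then-flush pattern described above:

    import collections

    class WriteBackCache:
        """Toy write-back cache: acknowledge writes from memory, flush to disk later."""
        def __init__(self, backing_store, flush_threshold=64):
            self.backing_store = backing_store       # stands in for the hard disk
            self.dirty = collections.OrderedDict()   # written but not yet flushed
            self.flush_threshold = flush_threshold

        def write(self, block_id, data):
            self.dirty[block_id] = data              # held in memory only
            if len(self.dirty) >= self.flush_threshold:
                self.flush()
            return "ack"                             # acknowledged before reaching disk

        def flush(self):
            while self.dirty:
                block_id, data = self.dirty.popitem(last=False)
                self.backing_store[block_id] = data  # the slower, durable write

In a real system the in-memory copy has to be protected between acknowledgement and flush - typically by battery backing or by replicating it across nodes - otherwise a power failure would lose acknowledged writes.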

Like other software-defined storage products we've looked at recently, StorPool protects data by replicating between a minimum of three hardware instances.

Will 3D XPoint kill flash and leave disk (and tape) standing?

Antony Adshead

With the recent announcement of 3D XPoint memory by Intel and Micron, there are, inevitably, more questions than answers.


On the one hand there is the big question for computing architectures at a general level: what will become of the current paradigm of CPU plus RAM and (tiers of) storage if a new medium 1,000 times faster than NAND flash and with adequate endurance arises?


On the face of things 3D XPoint will make RAM redundant in compute architectures.


But what will it mean for enterprise storage hardware architectures?


3D XPoint will deal a heavy, possibly fatal, blow to existing NAND flash products. With the Intel-Micron brainchild's 1,000x performance premium and equal or better lifetime endurance, NAND flash's days are numbered. But we knew that anyway (and there are other possible successors).


It's the timing of that eclipse that is currently unknowable, being dependent on the cost and availability of 3D XPoint. And at present we know nothing certain of either, except that the first products are likely to ship in 2016. But will they be ready for mainstream enterprise adoption?


On both counts it's doubtful, for some time at least. We will see 3D XPoint in the much more voluminous consumer market first. And in fact it is the fabrication plant economies of scale that come with consumer adoption that will help bring the price down to levels acceptable to the relatively niche use case of enterprise storage.


So, for some time 3D XPoint will likely be an exotic and costly but rapid storage medium, available in small quantities to enterprise storage and server hardware makers. In that case, what will be its role?


Probably more or less that of RAM right now: a fast-access cache or tier - in server or array - on top of NAND flash and possibly spinning disk.


But that phase will pass. As the cost comes down and the ubiquity of 3D XPoint, its clones or functional equivalents rises, the processes and economies of silicon production probably mean flash will suffer a relatively sudden death.


Ironically, that might leave spinning disk -- and tape! -- still standing after flash has died, with a niche but valuable role at the glacial end of archival storage.

Infinidat stakes claim to the future of hyperscale storage in the enterprise

Antony Adshead

Enterprises need hyperscale or webscale storage. They look at the likes of Google and Facebook and need to do what they're doing: processing and storing huge amounts of data with transactions and analysis on the fly. But they lack the storage products to do it, and existing enterprise storage isn't up to the job.


Those are the views of Infinidat, which came out of stealth this year and which believes enterprises are trying to solve the problems of the future datacentre with storage architectures from the past.


Infinidat CTO Brian Carmody said: "What we're seeing is a competitive advantage by a small number of technically advanced companies - Facebook, Google etc - who are trying to do what they do with incumbent technology. The experience of these apex big data companies demonstrates a failure by the storage industry."


What Infinidat offers is high capacity (2PB in a 42U rack), high performance (up to 750,000 IOPS and throughput of 12GBps) and high availability (99.99999% - seven nines), which it calls mainframe levels of performance and availability but built for enterprise users to handle webscale operations.


It does all this by breaking the mould for enterprise storage in a number of ways. Its Infinibox comes with three controllers (in contrast to the standard dual setup) in an active-active-active architecture. These contain DRAM and flash, with all active data held in these two layers, while below that sits a huge amount of spinning disk - 480 nearline-SAS HDDs.


It spreads the workload across all three nodes and out to the massive number of drives. To do that, Infinidat had to throw out the existing Linux SAS drivers and rewrite the way the controller nodes handle data to the storage media.


"Each node has a multipath connection to 480 spindles and that is many times more concurrent SAS connections than Linux can handle," said Carmody.


For now, Infinibox is block access only, but NFS file access is planned "shortly". Mainframe access will be available by the end of the year and object "shortly after that", said Carmody.


To achieve such levels of claimed availability, Infinidat breaks I/O down into 64KB objects it calls sections, with all host I/O being dealt with to and from cache. This is then spread out across the HDDs in a 16-way stripe with parity, based on the RAID 6 model but dubbed Infiniraid. Data is given an "activity vector score" that ranks it on whether it is hot or cold, sequential or random, and so on. This helps with pre- and de-staging data between tiers and also with rebuilds.
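
Infinidat has not published how the score is calculated, but as a purely hypothetical illustration of the idea, a per-section activity vector might look something like this (the fields, weights and thresholds are invented for the example):

    from dataclasses import dataclass

    @dataclass
    class SectionActivity:
        reads_last_hour: int
        writes_last_hour: int
        sequential_ratio: float      # 0.0 = fully random, 1.0 = fully sequential

        def score(self) -> float:
            heat = self.reads_last_hour + 2 * self.writes_last_hour
            # Random, hot sections benefit most from DRAM/flash; sequential
            # streams can be prefetched cheaply from disk, so they score lower.
            return heat * (1.5 - self.sequential_ratio)

    def tier_for(section: SectionActivity) -> str:
        s = section.score()
        if s > 100:
            return "dram"
        if s > 10:
            return "flash"
        return "nearline_disk"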


"At hyperscale n+1 is not sufficient," said Carmody. "We have three of everything. It's what hyperscale in the enterprise will look like."


That's an intriguing claim. Watch this space.

Quobyte apes Google storage model and plans a tilt at NetApp and EMC

Antony Adshead

Does the world need another software-defined storage/storage virtualisation product? There are plenty to choose from already, such as DataCore, Nexenta, Open-E as well as software storage products from big hitters like VMware with its VSAN.


That's what I thought before speaking to Germany-based startup Quobyte, but its case is quite compelling. Its aim is to provide Google-like, webscale, server-based grids of hugely scalable storage that are manageable by relatively few IT staff.


Quobyte aims to do this - and to attack the customer bases of the likes of NetApp and EMC Isilon - by decoupling hardware from the software intelligence that runs the storage grid, said founder and CEO Bjoern Kolbeck, who is a former Google employee and contributor to the EU-funded Xtreem OS Linux grid environment.


"At Google we saw how large scale infrastructures worked and it inspired us to take Xtreem FS [the file system from Xtreem OS] and build in ideas around operations and automation from Google."


"Google has datacentres in which a very small team look after storage and don't need to interact with the software team. It's a very nice operational model that scales; if you add more data you don't need to add more people."


Kolbeck explained that, in the Quobyte view, its software-defined storage system needs to provide fault tolerance against "split brain" situations, so that, for example, traffic can be automatically re-routed around a broken top-of-rack switch that might otherwise cause database writes to differ between instances. The same fault tolerance also comes into play for day-to-day maintenance. "If a server is down that should not be an exception," said Kolbeck.


Underlying this is the nub of what Quobyte is all about - namely, quorum-based replication. That is, there are three copies of replicated data and a majority must always be available, so with two out of three copies reachable it is OK to shut down one copy.


Quobyte's quorum-based triple-copy replication is based on the Paxos lease algorithm, which sees the replicas of a file communicate to decide on a master copy and use that master to ensure data integrity between them. "Then, if that master fails the nodes elect a new master," said Kolbeck.
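
As a minimal sketch of the quorum principle - an illustration of majority writes in general, not Quobyte's Paxos-lease implementation - a write against three replicas succeeds only when a majority acknowledges it:

    def quorum_write(replicas, key, value):
        """Toy majority write: succeeds if at least two of three replicas acknowledge."""
        acks = 0
        for replica in replicas:                # e.g. three storage servers
            try:
                replica.write(key, value)
                acks += 1
            except ConnectionError:
                pass                            # one unreachable replica is tolerated
        majority = len(replicas) // 2 + 1       # 2 when there are three replicas
        if acks >= majority:
            return True                         # any future majority will see this write
        raise RuntimeError("no quorum - write rejected")

Reads work the same way in reverse: as long as a majority agrees, a single failed or partitioned copy cannot cause the split-brain divergence Kolbeck describes.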


Currently, Quobyte lacks features, such as synchronous replication, that could help it break into the enterprise storage market, but these are planned for later this year. Right now, at least three instances of Quobyte have to be deployed in the same datacentre or metro area for latency reasons, said Kolbeck.


At present Quobyte has implementations aimed at HPC, big data (eg, Hadoop) and OpenStack customer deployments but in the long run it will target NetApp filer users and scale-out storage customers of EMC Isilon, said Kolbeck.


"Asynchronous geo-replication is on the roadmap for enterprise users but we can already be a good fit for virtual machine storage and are running some proofs-of-concept where we are looking to replace NetApp," said Kolbeck.


It'll be interesting to watch the progress of this marrying of software-defined/storage virtualisation and hyperscale storage.

Veeam won't get physical (backup)

Antony Adshead

Veeam is a backup software company and pioneer of virtual machine backup. Way back when, while most backup software was firmly rooted in the old world of physical servers and an agent on every box, Veeam and a few others (Quest, PHD Virtual et al) began to offer specialised products aimed at protecting VMs.

Gradually, the mainstream incumbents of the backup world (Symantec, CommVault, HP, EMC, IBM TSM et al) incorporated virtual machine backup into their products. This, one might have imagined, could have adversely affected the fortunes of the VM backup specialists.

But it didn't, and Veeam in particular roared from nowhere into second place in a TechTarget survey question on customers' backup provider preferences. The mystery then became: why would so many customers, who must already have backup products from the incumbents deployed, buy Veeam in such large numbers and add another backup product to their infrastructure?

Speaking this week ahead of Veeam's VeeamON forum in London, the company's VP for product strategy, Doug Hazelman, gave his interpretation of that phenomenon. In short, he thinks it's often just too much trouble for enterprises to deploy the new virtualisation-focussed features in their incumbent products.

He said: "Many want to have a single vendor environment but the reality is they have more than one, probably 2.5 to 3 on average. Do they like that situation? Everyone thinks it's the Holy Grail to standardise on one backup product but in truth they'd lose a lot of features by doing so."

"Virtualisation and the cloud are the future but in many cases we find customers don't take advantage of the virtualisation features in incumbent products. That's because they'd have to retool their entire existing deployment and so they think, 'Why not just look at best of breed?' Also, lots of companies are just not happy with the backup products they have in place."

It's a plausible explanation, though not one that we can easily test, but it would account for Veeam's good showing in the backup product stakes of late.

The other puzzler for me in backup recently is why server backup software providers do not back up mobile and endpoint devices. Currently you can back up laptops with some of the mainstream backup products, but none, as far as I know, extend to smartphones, tablets and the like.

Meanwhile, companies such as Druva make a good living by offering endpoint/BYOD backup. And it's an area that appears necessary for compliance, as data stored anywhere may face a requirement to be retained for legal e-discovery.

So, why doesn't Veeam consider it an area it needs to address?

Hazelman's view was this. "If people bring their own stuff into the workplace it's difficult to manage from a backup perspective. Personally, I don't need backup for my own laptop. If I lost it today I wouldn't lose data as it's all also somewhere else. It's a better approach to protect what's on servers than what's on laptops."

I'm a little less than convinced by this. But it'll be interesting to hear the view of others interested in backup and data protection. 

EMC: Open source for the ViPR isn't sauce for the Golden Goose

Antony Adshead

This week I spoke to EMC marketing VP Josh Goldstein about XtremIO 4.0.

EMC's flagship all-flash array has seen the company move from third place in the all-flash stakes to first. Gartner in 2013 ranked it at 11.1% market share, behind IBM (24.6%) and Pure Storage (17.1%).

By 2014 EMC had achieved pole position with 31.1% market share. Pure Storage was second with 19%, and IBM third with 16% while no other vendor had more than 7%.

That's remarkable progress indeed.

Recently we've also heard much of EMC's forays into open source, with the opening of ViPR code to developers in Project CoprHD, as well as continued commitment to software-defined storage.

In a blog post coinciding with EMC's open sourcing of ViPR, Manuvir Das, engineering VP with EMC's advanced software division, spoke of how software development has changed, how open source has become mainstream, how it accelerates innovation and how excited EMC is about its "open source strategy".

So caught up in the excitement was I that when I spoke to Goldstein about XtremIO I thought I'd ask him if it too would become open source.

Of course I already knew the answer and Josh's immediate response was laughter. I pressed further and he said, "There is no plan to open source XtremIO. I'm not sure what the business model would be."

It's not as if open source business models are a great mystery. There is a core of software code around a project/product and it's open for free use and modification by anyone. Meanwhile, commercial distributions take in updates and changes in a more controlled fashion and charge customers for installation, configuration, support etc.

It could quite easily be applied to any product, even an all-flash array product like XtremIO.

But of course it won't be, because it is precisely XtremIO's software that is the goose laying the golden egg for EMC, and the upgrade to version 4.0 demonstrated further innovation on the fundamentals of the XtremIO XIOS operating system (OS).

In the upgrades announced at EMC World in May the XtremIO headlines were topped by an increase in X-Brick capacity to 40TB and totals for a rack topping 1PB.

But the other key announcements all built on XIOS's block storage technology, which sees data deduplicated and compressed on ingest. Building on that fundamental characteristic, EMC announced:

  • Integration of XtremIO with its RecoverPoint data protection software, which leverages the same dedupe and compression as the XIOS engine to speed data transmission over the wire.
  • Copy data management, with, for example, the versions of Oracle, SQL Server and SAP databases generated over a lifecycle able to reside on XtremIO as space-efficient copies based, again, on the dedupe and compression algorithms in XIOS.
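
To make "deduplicated and compressed on ingest" concrete, here is a minimal sketch of inline dedupe: incoming blocks are fingerprinted and stored only once, while logical volumes hold pointers. It illustrates the general technique, not the XIOS implementation:

    import hashlib
    import zlib

    class DedupeStore:
        def __init__(self):
            self.blocks = {}     # fingerprint -> compressed block, stored once
            self.volume = []     # logical volume = ordered list of fingerprints

        def ingest(self, block: bytes) -> None:
            fingerprint = hashlib.sha256(block).hexdigest()
            if fingerprint not in self.blocks:
                self.blocks[fingerprint] = zlib.compress(block)   # new data: compress and keep
            self.volume.append(fingerprint)                       # duplicates cost a pointer only

The space-efficient database copies described above fall out of the same mechanism: a second copy of a mostly identical dataset adds pointers, not blocks.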

So, really, it's not that EMC couldn't open source XtremIO as it has done with ViPR. It's just that XtremIO's software is a goose that's laying a golden all-flash egg for EMC, while ViPR can show no similar market share figures and has been open sourced in an apparent attempt to breathe life into it.

It's obvious really, but EMC could never come out and say this.

PS: Goldstein also said there is no plan to make XtremIO available as a software-only product for customers to install on their chosen X86 hardware.

"It's a choice," said Goldstein. "Our customers like pre-packaged arrays. They're standard X86 servers and CPUs anyway and all the value is in the software so there are no big savings to be made by buying your own hardware."

ViPR and CoprHD: EMC forced to swim with the open source tide

Antony Adshead

EMC's move to make its ViPR software-defined storage platform open source (as Project CoprHD) appears a bold one, but I can't help thinking it looks like the giant from Hopkinton, MA, has been subject to forces too strong to resist.

ViPR - launched two years ago at EMC World 2013 - allows customers to build pools of web-scale storage from heterogeneous storage media: EMC or third-party storage vendor arrays, or even commodity hardware. ViPR uses its own Object Data Services, which can be accessed via REST APIs, including Amazon S3 or HDFS, to enable analytics of data under its management.
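
As a hedged illustration of what an S3-compatible REST API means in practice, this is how a client could write and read objects against such an endpoint using the boto3 library; the endpoint URL, bucket and credentials are placeholders, not a real ViPR deployment:

    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://objects.example.com",   # hypothetical S3-compatible endpoint
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # Write an object, then read it back - the same calls work against AWS S3
    # or any storage platform that exposes the S3 API.
    s3.put_object(Bucket="analytics", Key="logs/2015-06-01.json", Body=b"{}")
    obj = s3.get_object(Bucket="analytics", Key="logs/2015-06-01.json")
    print(obj["Body"].read())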

At the time we noted that ViPR's success would be pinned on, as Freeform Dynamics' Tony Lock put it: ". . . getting third parties involved. That could make or break it. It'll take political will and diplomacy."

And that diplomatic success has proved to be limited, with native support from other storage vendors restricted to NetApp and HDS and other makers' arrays accessible only via the OpenStack Cinder plugin.

In some ways, support from other vendors is the first hurdle, but not necessarily the most important one. Key of course are actual ViPR customers, and those seem to have been limited to short footnotes in EMC press releases.

Meanwhile, of course, open source in the datacentre has come on in leaps and bounds, in particular via OpenStack's cloud infrastructure platform and its storage modules. Open source in general has improved its image from beard-and-sandals to near mainstream in the enterprise.

There is no better proof of that than EMC's conversion to open source, and its keenness to parade its credentials in that respect.

Open source development gains from the involvement of the community and its feedback, in terms of practical improvement to the product and in terms of buy-in and credibility.

Clearly then, EMC has viewed the relative rise of open source as a threat. It initiated ViPR as an EMC product that would tie together a storage and analytics infrastructure over heterogeneous storage.

Two years later EMC has had to bow to that threat, or seize the opportunity, depending on how you wish to spin it, and now ViPR has spawned Project CoprHD.

The Massachusetts storage giant will continue to sell ViPR as a commercial product, but now it will hope its open source alter ego helps it develop and gain respectability in a way it has failed to so far.

Software-defined storage: Not for us says Nimble Storage CEO

Antony Adshead

Separating software from hardware is an emerging trend in data storage. Under the phrase software-defined storage it has been at the forefront of vendor hype in recent years.

The appeal is that it potentially allows customers to build their own storage arrays by deploying storage software - which is after all where the main smarts in a storage system reside - on commodity server hardware.

So, it was interesting this week to come across one array maker - hybrid flash specialist Nimble Storage - which said it would not go down the road of offering a software version of its product.

In some ways that goes counter to a rising trend.

Vendors that offer storage software range from hardcore hardware pushers such as HP, with its StoreVirtual VSA, and virtualisation giant VMware, with its Virtual Storage Appliance and VSAN, through suppliers that made their name with storage software, such as Nexenta and DataCore, to startup flash array providers like SolidFire, which now offers its Element X operating system (OS) as a virtual appliance. Even hardware giant EMC offers software versions of its VNX and Celerra products for lab use.

But Nimble Storage says it won't go down that road. CEO Suresh Vasudevan told me this week that while its engineers use software versions of the Nimble OS in test and dev, the company would not offer a software-defined storage product to customers.

Basically, he said there's nothing in it for Nimble. His argument went like this:

Vasudevan said: "If, for example, a storage system costs $100 and the hardware from China is $40 of that, here's how the rest is spent: $28 on sales and marketing, $12 on engineering and R&D, $5 on company admin. So, if I sell just software and the cost of the hardware is the same or possibly more then we would have to sell software at much less. But I still have to pay the same amount to sales and marketing people and to engineers, so it's really not clear there's a benefit."

Vasudevan also said that the software-only hyper-converged model - where server and storage reside together on the same box - is likely to lead to increased customer costs due to the need to protect numerous discrete compute/storage instances.

"When you have software on commodity servers the belief is it will lower costs, but in fact it often leads to overprovisioning. That's because the way people protect data on nodes without redundancy features such as dual controllers is to mirror data, often with triple replication," said Vasudevan.

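To put a rough number on that overprovisioning point, here is a back-of-the-envelope comparison of usable capacity from the same raw disk under triple replication versus a dual-controller array running RAID 6. The figures are illustrative, not Nimble's:

    raw_tb = 100

    # Triple replication: every block is written three times across nodes.
    triple_replication_usable = raw_tb / 3

    # Dual-controller array with RAID 6 in an 8+2 layout: two parity drives per ten.
    raid6_usable = raw_tb * (8 / 10)

    print(f"3x replication: {triple_replication_usable:.1f} TB usable")  # ~33.3 TB
    print(f"RAID 6 (8+2):   {raid6_usable:.1f} TB usable")               # 80.0 TB
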
So, that's why one array supplier will not go down the route of software-defined storage. I guess the argument works, for them, but it could be argued that they could at least give the customer the choice. The most compelling part of Vasudevan's case is probably that the hardware will cost the customer the same or more, but for the largest organisations out there that may not be the case.

World War Datacentre: The VMware empire and its rivals

Antony Adshead

In the real world we've seen centuries of wars, largely the result of old empires fading and new contenders arising. And the world of IT shows similar parallels, albeit on much reduced timescales and with much less bloodshed.

Currently the VMware empire - keen to expand and therefore cement its grip on large swathes of enterprise IT - is fighting some vigorous skirmishes, most notably with hyperconverged storage supplier Nutanix, while manoeuvres aimed at open source cloud infrastructure OpenStack have also taken place, and while it keeps an eye on the horizon for container technology supplier Docker.

VMware's fight with Nutanix has the character of that between incumbent empire and new, upstart rival (eg, the British Empire vs Germany in the first half of the 20th century) as well as that between hegemonic empire and regional power (such as the USA and Iran in recent times).

On the one hand Nutanix is believed to be developing a hypervisor to rival VMware's. To date Nutanix is most noted for being a supplier of converged server/storage products that marry disk capacity and server power with support for VMware, Hyper-V and KVM hypervisor formats.

VMware's "ecosystem" is, how can we say this, rather comprehensive, and the company bets on customers adopting it from top to bottom of the datacentre and then finding it difficult to disengage. The development of alternative hypervisors by the likes of Nutanix threatens this.

At the same time VMware has developed an existential threat to Nutanix in the form of its own hyper-converged server/storage appliances, EVO:Rail, and has partnered with all the key server/storage vendors to bring that to market.

A recent major outbreak of hostilities came when Nutanix (as well as Amazon and Citrix) managed to halt a pending $1.6 billion VMware contract with the US Department of Defense (DoD). The DoD deal - involving more than two million product licences over five years - is/was something of a flagship for VMware and you can bet Nutanix et al's protests with the US Government Accountability Office went down like breakfast at Pearl Harbour on 7 December 1941.  

Meanwhile, VMware faces another imperial-scale rival in OpenStack, the open source private cloud infrastructure. Between the two there is a fundamental philosophical difference (proprietary vs open source) but also some complementarity. There is no Nazi Germany vs USSR fight to the death implicit in this conflict.

VMware provides its virtualisation hypervisor, to which is appended a rich array of features, including the ability to orchestrate the delivery of IT as a cloud. It has also added storage functionality, for example via its storage APIs that connect to third party storage vendor products and more recently via its Virtual SAN (VSAN), as well as Virtual Volumes (VVOLs) that pin performance to specific virtual machines.

OpenStack is an open source cloud environment, and doesn't provide a virtualisation hypervisor. It does provide components including cloud orchestration, block, file and object storage, networking, security etc.

But despite the lack of core conflict it is possible to replace some VMware components with OpenStack, and this is what seems to have happened in highly publicised cases such as at PayPal, which has reportedly ditched a lot of VMware in favour of the open source cloud infrastructure.

The benefits of this type of move would be to give the customer more freedom, being able to mix and match OpenStack modules, use VMware or another hypervisor where needed, and avoid being locked into a tightly controlled VMware ecosystem.

So, for the time being, to VMware, OpenStack is a large, occasionally dangerous rival, able to infiltrate and supplant its efforts to colonise datacentres. Perhaps a politico-military parallel would be the current phase of the relationship between the developed west and China, with the latter quietly making inroads into the world's markets, African minerals etc, but with no physical showdown on the horizon (yet).

Finally, there is the as yet not quite quantifiable threat to VMware from the container technology typified by Docker, but also offered by Google and Microsoft. This is a form of virtualisation, but which does away with the hypervisor and instead allows apps to sit directly on the host operating system (OS) without the need for multiple guest OSs.

Containers address some shortcomings of hypervisors, namely their use of multiple duplicate running processes, boot volumes etc, and their use of dedicated memory and storage resources.

In the long term containers may pose a grave challenge to virtualisation hypervisors such as VMware's, but for the time being the technology is relatively immature and lacks key storage features - such as the ability to share data between containers and container portability with regard to storage - as well as maturity in terms of security.

So, for now containers/Docker is not a pressing concern to VMware, but in the long term that could develop into a deeper, existential rivalry; perhaps more like the China of 50 years in the future to the developed west.

EVO:Rail and the missing acquisition phase of hyperconverged storage

Antony Adshead

It's rare that a storage-related technology arises without there being any acquisitions.

Look at flash. The startups emerged and pushed the big six to either buy some of them or develop flash architectures from the ground up. Mostly, big storage bought its way into the space - witness EMC/XtremIO, IBM/Texas Memory Systems etc.

But hyperconverged compute/storage has seemingly sidestepped that familiar cycle. It's a hot area that emerged a couple of years ago from pioneers such as Nutanix, Simplivity and Scale Computing. They combine processing and storage in one box, with scale-out capability that allows the customer to grow capacity in grid-like fashion.

They allow easy setup and administration, with a VM-friendly architecture.

Then the font of software-defined everything, VMware, came along and brought out EVO:Rail.

EVO:Rail virtualises CPUs with vSphere, storage with VSAN and networks with NSX, while vCenter Log Insight and the EVO Engine handle deployment, configuration and management of resources.

Currently, EVO:Rail only scales to a 48TB four node cluster, but more is promised when the bigger EVO:Rack hits the market.

Right now, hardware makers that offer pre-configured EVO:Rail appliances include Dell, EMC, NetApp, Hitachi Data Systems, Fujitsu and HP. That's all the top seven storage vendors except IBM.

And so, with a big player weighing in from early on, ie VMware doing a good job of providing the software heart of hyperconverged computing, the usual cycle of acquisitions of startups has been bypassed.

So, will the pioneer hyperconverged players be eclipsed? You could imagine they might feel a bit lonely. They got to the dance first, knew all the trick moves but never got picked. VMware came along and swept all the big boys off their feet with EVO.

That's quite unusual in a world where startups not only innovate but often go on to form the core IP of a big player's offerings when they are acquired.

But, the hyperconverged pioneers are not necessarily destined to stay as wallflowers. For a start they offer a choice of the hypervisors they run. Nutanix runs Microsoft Hyper-V and it looks like Simplivity has it planned. Meanwhile, Scale Computing's HC3 products are based on the open source KVM hypervisor.

So, despite the hyperconverged storage and compute market bypassing the usual acquisition phase, it looks like the pioneer startups have a comfortable niche in which to sit, especially given this is mostly an SME to midrange play where they can compete in terms of scale and functionality.

Hybrid flash horse is a winner, but did Dell mean to back it?

Antony Adshead

When reflecting on the rise of flash storage over the last couple of years it's easy to come to the conclusion that some storage vendors have been more active, more free-spending, more progressive in the marketplace.

I have in mind the likes of EMC and IBM, with their big ticket flash startup acquisitions and the million-plus IOPS products that lead their all-flash offerings.

At the same time it has been easy to regard, for example, Dell (and others that have opted to retrofit existing arrays), that have not invested in dedicated all-flash arrays in the same way as EMC and IBM, as perhaps lagging behind somewhat.

But it has become apparent that hybrid flash is the most common application of SSD and that all-flash arrays are very much a minority interest. In a blog last year I cited a 451 Group survey that showed 67% of respondents had installed flash in SAN or NAS arrays while only 8% had all-flash arrays deployed.

So, as time has passed it has started to look like Dell may have hit the right spot in the market. Or maybe the right spot in the market has hit Dell?

Dell has all-flash array offerings; the Dell Compellent SC4020 can be completely populated with SSD. But its flash forays have largely been into the world of hybrid. The SC4020 can also house HDDs and there is also an EqualLogic iSCSI hybrid flash array, the PS6210XS.

It looks like Dell has been the beneficiary of a happy accident, but according to storage general manager, Alan Atkinson, Dell's bet in the hybrid vs all-flash stakes was a cert for a long time.

"It's a bit of a philosophical question," said Atkinson. "And we have taken sides."

"We considered all the options and honestly believed we didn't need to buy a flash startup or develop flash systems from the ground up. And I think the market has demonstrated that we didn't need to."

Atkinson's view is that flash is a disruptive technology, but one that has moved on from being suited mainly to high-performance use cases.

"All-flash arrays started off expensive and were targeted at use cases that didn't need the full range of features like replication etc."

"That's a market and it is what it is. But what's happened is that as flash prices have decreased it has become a more general purpose storage medium."

Luckily for Dell, according to Atkinson, its Compellent storage operating system (OS), with its tiering capabilities, lent itself well to uses where flash is mixed with spinning media, and was re-written to some extent to optimise it for these use cases.

So, says Atkinson, Dell has no need for a separate all-flash array platform, especially as its customers want a common interface across HDD and SSD storage, or as he put it (IT marketing euphemism alert!), "all the wood behind one arrow," which, of course, you don't get with EMC or IBM, whose all-flash arrays run on discrete OSs.

It's quite a compelling argument, and the net result is that Dell has solid product offerings in flash in all but the most high-performance use cases. But was it what was intended all along? Hmm, well, the jury is still out on that one for me.

Cloud storage future shown by current limitations. Chapter 96: Ctera

Antony Adshead

I've written recently that despite relatively low enterprise take-up of cloud storage - due to concerns over bandwidth, availability, security, compliance, data portability etc - vendors big and not so big are making moves into cloud storage; usually hybrid cloud storage, and always limited by the current capabilities of the cloud.

That's because cloud storage is not fully ready, for all the above reasons, for mission-critical enterprise use.

A vendor whose work in the space illustrates this is CTERA, which finds a way to exploit the currently-usable facets of the cloud to provide branch office storage and backup plus file sync and share for mobile users.

Sure, neither of these represents a full-blown enterprise storage use of the cloud, but they are an adaptation of what the cloud offers at present.

CTERA's offer centres on CTERA Portal, a platform that allows users to connect to private and public cloud services with linkages to cloud storage environments from the key vendors.

With CTERA Portal, customers can deliver, for example, storage and backup services to remote office users via an on-premises NAS appliance, and file sync and share services for mobile users via a software agent on the endpoint device.

The appliance offers local disk capacity in what CTERA marketing VP Rani Osnat calls "disk-to-disk-to-cloud", AKA hybrid cloud, and does so with source and global deduplication.

The company's offerings allow enterprises to deliver remote office and/or mobile device data protection or they can be used by service providers to deliver to customers.

What is significant about them? It's another case where an enterprising business has spotted opportunities to exploit the cloud for storage, whilst at the same time recognising its limitations.

CTERA doesn't offer enterprise-class primary storage in the cloud. It can't. The cloud can't provide that - yet. Instead, it allows businesses to exploit cloud storage for the likes of backup and file sync and share.

In the case of its backup appliances, they are hybrid cloud. They have to be, because you must stage to disk locally if you can't guarantee throughput across public networks. Mobile sync and share, meanwhile, deals in small volumes of data that are not time-critical or dependent on LAN-like levels of connectivity.

So, for now we have numerous vendors exploiting the opportunities and limitations of cloud storage. It's an interesting space to watch and will be increasingly so as the boundaries of possibility change over time.

One day we may see the likes of CTERA - not to mention EMC and NetApp - bringing products that can truly offer primary data-capable enterprise cloud storage services to market.

But that will then raise the question: If the cloud becomes feasible as a tier 1 storage location, what will become of the existing set of vendors, tied as they are to a world of on-premises arrays?

Zadara enterprise bolt-ons show the limits of cloud storage

Antony Adshead

Here at Powering the Cloud (formerly Storage Networking World Europe) in Frankfurt, many of the vendors and products actually highlight the current limits of the cloud.

One such example is Zadara, a company that's making a successful fist of exploiting the shortcomings of the cloud for enterprise customers.

It provides what it calls enterprise-storage-as-a-service. In other words it provides all we'd expect of the cloud - hourly billing, elasticity, zero downtime and no need for the customer to deal with the dark underbelly of storage infrastructure.

It does this with its own hardware - supplied on-premises, to colo facilities or linked to existing cloud facilities - built from commodity servers and its own fully cloud-featured storage software, and with it offers services that the likes of Amazon Web Services and Microsoft Azure can't provide.

Cloud storage services from the big beasts are full of shortcomings for an enterprise customer: volumes limited in number and size (likewise clusters), a lack of snapshots, replication and thin provisioning, and encryption keys in the hands of the cloud provider. And all this comes on top of potential issues with bandwidth and latency.

Zadara aims at filling these gaps, as a bolt-on service that provides so-called Virtual Private Storage Arrays with guaranteed RAM, CPU, drive and networking levels of service plus encryption that's totally in the hands of the customer.

Amazon and Microsoft refer their "more choosy" customers to Zadara because they know their own shortcomings for enterprise users, says Zadara VP for business development, Noam Shendar.

"The big cloud providers are built for millions of customers, and aim for the lowest common denominator," says Shendar. "We're built to provide enterprise service for the select few."

That appears to be true, but while such a situation remains it shows the cloud is far from a go-to option for enterprise users, and while that is so companies like Zadara will fill the gaps.

Ideas EMC probably hates #96 - Seagate Kinetic drives

Antony Adshead

In storage one of the key fault lines we see is between vendors keen to ensure it's their product that builds in the intelligence needed for storage or storage-related operations to take place.

I guess that's not an earth-shattering observation. The basic functions of storage are in a way quite mundane - data is stored on pretty dumb drives - so it's the bells and whistles that count. The vendors usually call this "adding value".

On the one hand, array makers are keen to keep that intelligence in the controller, while on the other hand hypervisor sellers try to drag it into their software (eg, VMware and its various storage and backup-related APIs).

You can probably count the latter among "ideas that EMC, NetApp etc wish didn't really exist". And there is another idea that Big Storage would probably like to see un-invented, which is Seagate's Kinetic, a drive that cuts out the need for a storage array controller and associated hardware altogether.

It does this by building the intelligence required for object storage data access into the drives themselves. Kinetic drives replace the storage controller - as well as SAS and SATA controllers, RAID controllers etc - with a key-value store whose address space can scale to well in excess of the number of atoms in the universe.

In doing so they interface directly with object storage environments, including Ceph, OpenStack Swift and Scality. And all that is required is a JBOD enclosure with Ethernet connectivity to house Kinetic drives.
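
To show what a key-value drive on Ethernet means in practice, here is an illustrative stand-in in Python - not Seagate's actual Kinetic client library - in which an application, or an object store such as Ceph or Swift, addresses the drive by key over the network rather than by block through a RAID or HBA controller:

    class KineticStyleDrive:
        """Toy model of an Ethernet-attached key-value drive."""
        def __init__(self, ip: str, port: int = 8123):
            self.address = (ip, port)   # the drive itself terminates the network connection
            self._store = {}            # stand-in for the drive's on-platter key space

        def put(self, key: bytes, value: bytes) -> None:
            self._store[key] = value    # no LUN, no filesystem, no RAID controller in the path

        def get(self, key: bytes) -> bytes:
            return self._store[key]

        def delete(self, key: bytes) -> None:
            self._store.pop(key, None)

    drive = KineticStyleDrive("192.168.1.50")
    drive.put(b"object/0001", b"...payload...")
    print(drive.get(b"object/0001"))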

Sure, it's an object storage technology - very much a minority interest currently - and only really useful for large-scale, relatively slow-access use cases. But in such cases customers can potentially exploit it to drive out cost, both in capital outlay on storage array controller hardware and in operational costs for power and cooling.

But in future - as data volumes increase and the technology matures - object storage is likely to come of age and widen this fault line. Add to that the growing tendency of storage software and hardware to separate at the level of controller and the big storage incumbent array makers have a challenging few years ahead.

VMworld 2014. No limits! Err, yes there are!

Antony Adshead

"NO LIMITS" shout the 10ft high ad boards around the vast conference centre.

At VMworld Europe in Barcelona the virtualisation giant is keen to stress how rapidly moving and changeable is the world in which we live. It's a "liquid world", says CEO Pat Gelsinger, one where we have moved from "rigid structures to liquid business" where the old adage "built to last" must become "built to change".

In the world of IT this calls, says Gelsinger, for "agility, bravery, change". That's because IT is simply awash with barriers: between cloud and traditional applications, between on-premises and off-premises infrastructure, etc. We must, said the Pennsylvania farm boy (his description) turned tech CEO, "conquer the silos" and change "or" to "and".

VMware does its best to hammer the message home with speaker-straining basslines, dry ice and dancers at the 9am keynote, and backs it up with apparently supporting quotes from the likes of real geniuses Isaac Newton and Arthur C Clarke.

But in reality the world of VMware is all about limits. It's not an open source community. It's a commercial business whose bottom line is its, err, bottom line, and its modus operandi is all about making sure there are limits; that once you're locked into the VMware ecosystem you're more or less stuck there.

In the small corner of VMware that interests me as storage editor the limits are all too apparent. Its VSAN storage virtualisation tool, for example, only works with VMware's vSphere platform (never mind other hypervisors or physical servers) and it only provides storage to VMware virtual machines via a proprietary protocol.

And in VMware's recently announced EVO:Rail hyper-converged appliance we have a product that only works with VMware virtual machines and at present is constrained by very definite limits of scalability.

All of which leaves aside the big picture that once you've deployed VMware as a virtualisation environment you're going to have to spend a lot of time and money to provide things like storage and backup for the new environment.

Why do companies like VMware try so hard to convince us they're some kind of altruistic foundation, when we all know they exist to make profits and the best way they can do that is to lock people in? And really, why over-egg the pudding with the language of freedom and liberation?

Please, tell us how you think your products are the best at what they do but do drop the pretence that you're providing IT without limits.

Look out Symantec! Virtual server backup specialist Veeam is behind you

Antony Adshead

One of the big surprises of the recent TechTarget European Purchasing Intentions survey was the appearance of Veeam as number two customer choice of backup software product.

In the survey Symantec was recorded as the most popular backup product among the 511 storage professionals questioned, with 20% saying they had deployed it. Veeam came next with 19%, followed by HP (15%), EMC's NetWorker and Avamar products (13%), IBM Tivoli Storage Manager (TSM) (12%), then CommVault, CA, and Microsoft, all with 6% of respondents.

Veeam gained a healthy increase in share compared to the 2013 survey. Last year it was behind Symantec (30%), HP (16%) and IBM TSM (15%) and registered 12.5% of customer deployments.

Three years ago, in 2011, Veeam didn't show up at all. That year Symantec was top dog (37.5%), with HP (17.5%) and EMC (14%) second and third.

Veeam's position is surprising on a number of levels. Not because Veeam isn't a good product. There's no doubt it is. But it is surprising in some ways that a product that specialises in the backup of virtual machines only should reach such levels of popularity.

That's because enterprise backup products, from the likes of Symantec, EMC, IBM (Tivoli Storage Manager), HP, CommVault et al, have universally achieved the ability to back up virtual servers as well as physical ones, often with tight integration with the hypervisor and use of the advanced features within it.

It's also a surprise because, while most organisations (87%) have virtualised servers it's also commonplace for IT environments to be mixed between virtual and physical devices. And it makes sense - doesn't it? - to use a backup product that can operate across the two environments.

Or maybe it doesn't matter to many organisations that are happy to run different backup software for physical and virtual servers. Or maybe those that ticked Veeam are SME organisations where it's more likely that the server environment is 100% virtualised?

I've no cast iron solutions to this particular conundrum, and am happy to congratulate Veeam on its success. 

All-flash arrays: Will time run out for mainstream acceptance?

Antony Adshead

The all-flash array has been the flavour of recent times in storageland. But, has the hype exceeded the reality?

It may well have done. Or maybe it's just that its timing is off.

If we look at our recent Purchasing Intentions survey there's no doubt that flash storage is popular. More than half of respondents indicated they had flash in use (36%) or planned to implement (7%) or evaluate it (25%) this year.

That's a fair amount of traction for flash, but not so much of that kudos can stick to all-flash arrays, according to another survey we ran this summer on ComputerWeekly.com, this time from 451 Research.

Its survey gained more granularity on the flash question and found for most respondents (67%) flash in use now is installed in existing SAN/NAS storage arrays while 25% have put it in servers. A mere 8% reported having deployed an all-flash array.

What that shows is that for the most part IT departments see the addition of flash to existing storage or to servers as the best way to accelerate I/O performance for key applications. That should be no surprise - all-flash arrays don't come cheap and with constrained budgets it's clearly best to target fast access media where you need it.

But it is in contrast to storage industry hype, and perhaps more importantly, the billions spent to develop or buy all-flash arrays, such as EMC's purchase of XtremIO, IBM's Texas Memory Systems acquisition and its pledge to invest $1 billion in flash.

The survey results also show that for many applications right now, disk is quite adequate. Compare the percentage of those in the ComputerWeekly.com survey that have virtualised servers (87%) with the numbers that have flash in place (36%) and it looks like there isn't a rigid driving shaft between the deployment of virtualisation and the need for flash.  

And so, the all-flash array could turn out to be something that takes its time to become a must-have. There's little doubt that disk will one day be superseded by solid state media, but in the meantime alternatives to flash are being developed. The hope in Big Storage Towers must be that flash is still the solid state medium of choice when disk has finally had its day.

Death of the LUN: Another nail in the coffin from Gridstore?

Antony Adshead

Has the LUN had its day? It has been the de facto method of creating logical volumes on physical storage for decades, but in this era of virtualisation that may be becoming a thing of the past.

In VMware or Hyper-V environments the LUN still exists, but only as a single large pool in which the virtualisation platform's virtual drives are created.

But Gridstore, which substitutes so-called vLUNs for LUNs while providing storage for Microsoft Hyper-V virtual machines, claims to have done away with the LUN altogether, in something like the way Tintri does for VMware environments.

Gridstore combines storage arrays largely comprising cost-efficient 3TB or 4TB SATA drives and MLC flash (500GB or 1TB in its performance nodes) with software that lives in Microsoft System Center.

That software comprises a "vController" that matches Hyper-V virtual machines to vLUNs and provides quality of service (QoS) on storage provision. The vController, Gridstore says, emulates the single app-single server-DAS setup of the physical server world, with data put in queues and sent in bursts rather than randomly as it occurs.

"Virtual environments and the LUN are an architectural mismatch," says George Symons CEO at Gridstore, which makes scale-out array nodes in 12TB and 48TB base units, expandable to petabytes.

"A LUN must cater to many servers of different types of workload and are the site of the I/O blender effect", he says, referring to the way many and random I/O requests from virtual machines can overload physical storage.

In Gridstore arrays the vController can match storage performance to the needs of the VM, whether that be sequential or random, and can make sure "noisy" virtual machines do not disrupt others, alerting the admin if a VM isn't getting the gold, silver or bronze level of performance set.
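
As a minimal sketch of how per-VM storage QoS of this kind can be enforced - an illustration of the general token-bucket technique, not Gridstore's vController - each virtual machine gets an IOPS budget for its service level and I/O beyond that budget is queued:

    import time

    TIERS = {"gold": 5000, "silver": 2000, "bronze": 500}   # example IOPS budgets

    class VmQos:
        def __init__(self, tier: str):
            self.rate = TIERS[tier]            # tokens added per second = allowed IOPS
            self.tokens = float(self.rate)
            self.last_refill = time.monotonic()

        def allow_io(self) -> bool:
            now = time.monotonic()
            self.tokens = min(self.rate, self.tokens + (now - self.last_refill) * self.rate)
            self.last_refill = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True                    # dispatch the I/O now
            return False                       # over budget: queue it, alert if persistent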

Gridstore claims 40,000 IOPS for the minimum three-node configuration of its 12TB devices. It doesn't sound a lot when you think of the 500,000 and 1 million IOPS boasted by the all-flash providers.

But you don't need that, says Symons. "They talk of one and two million IOPS but 40,000 IOPS covers the needs of most people. To be honest most customers don't know what they need and most are in the 1,000 IOPS to 5,000 IOPS range. But in any case we can scale to 100,000 IOPS on nine nodes."

EMC's DSSD introduces the PCIe flash appliance

Antony Adshead

It's always interesting when a new storage technology comes along, not least because we have to figure out what exactly we're looking at.

Under the microscope this time is the fruit of EMC's acquisition of DSSD, a Silicon Valley startup bought last month.

EMC calls it "rack-scale flash" and Dan Cobb, CTO of the flash division at EMC, told me DSSD had tried to do three things in its short existence.

These are, he said: "Achieve massive storage density, using flash and other components to build an all-flash appliance - as distinct from an array - in terms of the number of chips that can be co-located."

"Build a connection to hosts that is massively parallel - using the Gen 3 PCIe connect - to connect between 8,000 and 16,000 independent flash dies directly to the host with tremendously low latency, compared to one drive that typically holds 16 dies via a single SAS or SATA interface."

"With all flash management, wear levelling, garbage collection etc integrated into system software to achieve incredible effectiveness."

What we have here is a PCIe-attached flash appliance with capacity of "hundreds of TB, approaching petabytes" that can operate as direct-attached storage (DAS) or, it is claimed, as an extension to RAM.

EMC is aiming it at in-memory database use and big data for real time operations.

Cobb said EMC would be working on three forms of connectivity for DSSD. These would be:

  • As a traditional block interface, using NVMe (non-volatile memory express) to connect via PCIe.
  • Via custom APIs: DSSD will have new API primitives developed for it, for example a plugin for low-latency HDFS operations tailored to specific applications.
  • In-memory database use - for example with MongoDB - via virtual memory primitives that allow the database to see one giant memory store in DSSD.

So, it looks like we have a new beast on our hands, a PCIe-connected flash appliance for use as extremely low latency DAS and/or as a RAM extension.

It's a bit like a server-side flash store but with capacity that massively outscales existing products, which will be able to, as Cobb put it, be used as "a very fast failover, for example running multiple SAP HANA instances."

And at the same time it might be something like the new memory channel storage products now emerging. Sure, it's not DIMM-connected but the very low latency claimed by EMC may allow it to become a RAM extension.

Anyway, more will certainly become clear over the next year, with EMC planning to "harden" the product and go for some kind of product launch this time next year.

Actifio gets funding, but what's the future for a good idea in storage?

Antony Adshead

Actifio this week announced it had gained another $100m in funding, adding to a previous round of $107m, and according to Ash Ashutosh, founder and chief executive of Actifio, that pushes its market valuation to $1bn.

Actifio's Copy Data Storage Platform is the latest iteration of what we might call file virtualisation.

In the Actifio scheme numerous, isolated, many-times-duplicated versions of files are rationalised into the smallest number of copies required for the various requirements of the organisation - file access, backup, archiving and/or disaster recovery.

Whereas most businesses suffer the unwanted and unplanned multiplication of files as users copy, email and otherwise duplicate information between them, Actifio slims data down to a "golden copy", which is in practice the one nearest to the application that created or updated it.

Other copies are held elsewhere. They may be needed in production by other geographically located datacentres, or may be at different stages in their lifecycle, being backed up or archived, for example, and are updated from the golden copy so that all are eventually synchronised. Copies are retained with snapshot functionality, ie they can be rolled back to any point in time where changes were made.
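
Here is a toy model of the golden-copy-plus-point-in-time idea - purely illustrative, not Actifio's implementation - in which only the golden copy is ever updated and any secondary use (backup, test/dev, DR) can ask for the data as of a given moment:

    import copy
    from datetime import datetime

    class GoldenCopy:
        def __init__(self, data: dict):
            self.data = data          # the live copy, nearest to the application
            self.history = []         # (timestamp, snapshot) pairs

        def update(self, key, value):
            self.data[key] = value    # only the golden copy is changed
            # A real system records deltas, not full copies; deepcopy keeps the toy simple.
            self.history.append((datetime.utcnow(), copy.deepcopy(self.data)))

        def as_of(self, when: datetime) -> dict:
            # Roll back: the state captured at the most recent update at or before `when`.
            for ts, snap in reversed(self.history):
                if ts <= when:
                    return snap
            return {}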

Actifio targets the data protection and disaster recovery market and hopes to replace existing replication products, including at the storage array. It supplies the product as software or as an appliance on an x86 server. When customers deploy it Actifio discovers all the apps in the environment and policies can be set for their data - how many copies, on what tier of storage media it should be kept, etc.

It all sounds like the way you'd do file storage if you were thinking it up from scratch.

But there could be obstacles.

For a start, with 300 customers gathered over five years it hasn't exactly set the world on fire. And while the Actifio scheme is a clever one that can save a lot of disk space, re-architecting an existing environment might be a big ask for a lot of customers and a nerve-jangling prospect.

Perhaps that's why more than half its customers have deployed Actifio where data is clearly separated from production data - 6% use it for analytics and 17% for test and dev - or into relatively new, greenfield, environments at the 30% of its customers who are service providers.

Then there's the fact that there are many vested interests in storage that work against the idea of reducing the need for disk capacity. Ashutosh says the market it is playing in is worth $46bn but how much of that will take a swipe at disk vendors' revenues?

Whatever happens, the future for Actifio looks like one of going public with an IPO or being bought. Let's hope if it's the latter that it's not bought by a disk array maker that puts it out to pasture. 
