
Storage roundtable: The impact of DevOps and cloud

MicroScope gathered a group of storage industry representatives to discuss how DevOps and cloud will affect the channel

MicroScope recently gathered representatives from across the storage industry to take the pulse of the market and discover what the channel needs to be doing to put itself in the best position for the future. In the first part of our coverage of the roundtable debate, our panel of experts looked at the EU's General Data Protection Regulation (GDPR). In this second part, they talk about DevOps and cloud.

Some of the biggest increases in secondary storage are from those investing in hybrid cloud or DevOps. It seems to be more about managing data than just backup and recovery. Is that happening in the market?

Nigel Tozer: For Commvault, almost every deal involves cloud these days. In the secondary storage market, there are still a lot of organisations that want to keep operations and data on-premise, but even so, a large number that choose to stay on-premise will also replicate the data in the cloud for off-site purposes. We also see a lot of organisations choosing to back up directly to the cloud. With fast networking technologies and the ability to recover directly in the cloud, it makes perfect sense. Cloud-only customers always use the cloud for secondary storage, of course.

Jerry Rijnbeek: We’re certainly moving forward. What’s changing is that, historically, customers simply wanted a copy of their data – that’s it. Now, with people thinking about hybrid cloud models, or even switching to full public cloud models, data needs a lot more than just a copy. You want to analyse it, you want to secure it, you want to do audit trailing, forecasting – those kinds of things. So it’s not just a copy of the data any more – data needs to be transportable.

Maybe I’m in AWS [Amazon Web Services] now, but I might want to move to [Microsoft] Azure tomorrow. So, customers are looking for a secondary storage/backup and recovery platform that does all of that – transportability, forecasting, access control.

Ezat Dayeh: I agree, as we’re seeing exactly the same thing. There’s also a lot of drive around simplicity – people and businesses want to simplify the big processes and the management of data, while also reducing costs. Not many people know this, but backup and recovery can cost more than your primary storage, so when you add up all of the costs associated with it, it comes as a shock to a lot of customers. The same applies to things like DevOps and archiving, and being able to migrate to the cloud and move between different clouds.

We are aiming to help our customers with these issues. We simplify secondary storage as well as consolidate it. The more you can consolidate it and use backup and recovery to drive use cases for other parts of the business, the more of a win-win situation you’ll have. Legacy backup and recovery, for example, is just an insurance policy. It’s something very costly that sits there in the corner, doing pretty much nothing but copying data. Thankfully, organisations are starting to ask the right questions, such as, ‘Why can’t I put this data to use?’ The more you can consolidate, the better it’s going to be for customers, and that’s why they’re showing a lot of interest here.

James Hall: If you think about what’s happening in the primary storage market – the move to all-flash – the reason people are moving to all-flash is not performance, because you get that anyway. They’re doing it for large-scale consolidation: lots of hybrid arrays, lots of space, lots of power, lots of cooling – in a much smaller footprint. This means the risk domain gets much bigger, so the old method of taking a day or three to back up and get it back is not acceptable when you have that many things in one bucket. You have to rethink the backup strategy because the eggs are in a much smaller basket.

And the other thing is why design a superfast infrastructure to consolidate if you are not going to change everything else around it? It doesn’t make any sense to do that. So the onslaught of flash is great for environmental reasons and everything else. And it’s dragging the entire datacentre architecture with it. We end up consolidating everything and everyone’s driver is ‘smaller is better’. The smaller it is, the bigger the risk domain. You need a way to monitor it, you need a way to back up and recover it quickly, and you need a way to make sure that if you are going into public cloud, you can move it around.

Roundtable attendees

  • Dan Chester, regional sales manager (UK & Ireland), Cloudian
  • Ezat Dayeh, regional technical architect, Cohesity
  • James Hall, EMEA strategist (storage), HPE
  • Nicolas Maigne, senior business development manager, EMEA – storage, Micron
  • Jerry Rijnbeek, director of sales engineering, EMEA & APAC, Rubrik
  • Andy Corcoran, UK sales director, channel, Dell
  • Mathias Grobet, managing partner, Velocity Business Design
  • Chris James, marketing director, EMEA, Virtual Instruments
  • Bob Plumridge, CTO and member of board of directors, SNIA Europe
  • Nigel Tozer, solutions marketing director, EMEA, Commvault

Chris James: We’re seeing an interesting thing with monitoring – what we call the ‘noisy neighbour’ issue. In modern shared infrastructures you get an application that decides to kick off some batch run or backup run, but it’s sharing the same infrastructure with a Tier 0 app and it’s dragging that down. The typical application performance tools can’t see what’s going on in the infrastructure – they just know application performance is dying, but are not sure why or where. When you had one massive infrastructure, that wasn’t a problem. Now you are getting more shared infrastructures, it’s becoming a daily issue.

Ezat Dayeh: We are seeing some of the primary storage vendors develop quality of service policies that can encapsulate those types of workloads, but it’s not perfect. There is still more work to be done here. It’s something you can also try to control at the hypervisor layer – some have made that move, but it doesn’t necessarily have the full functionality.

Jerry Rijnbeek: What’s really interesting to see is how many customers are actually going greenfield, even those with sensitive environments such as governments or hospitals. Thankfully, most customers are smart enough to say, ‘Instead of trying to convert our old infrastructure, we’ve actually got to go new, greenfield and fully virtualised, fully orchestrated, hyper-converged, and we’re going to think about portals, processes and workflows instead of servers.’

Ezat Dayeh: I’ve seen people talking about it and you see it most deployed with service providers, for example. It’s a clear, great use for that type of environment. The challenge in customer environments is that if you automate everything and you give the application owner the ability to do everything, you kind of just talk yourself out of a job.

James Hall: For an automated or orchestrated environment to work, it needs a shepherd. So the person who was the storage expert doesn’t necessarily lose their job, they end up doing something else because orchestrated systems need someone constantly making sure the workflows work.

The other problem is that to build an orchestrated environment, you have to get past the silos, and it’s not just an issue in large organisations but even in smaller ones – there’s every flavour of Linux and every flavour of this and that, and it is really hard. Automation and orchestration are good, but that doesn’t mean someone is going to lose their job, because it all still needs to be managed.

Bob Plumridge: On the management of those sites, surely, what we are seeing is that individuals are able to manage so much more. They might lose their job, but they might be managing an infrastructure five times as big.

James Hall: A hyper-converged infrastructure is great – everything is in a single package, and that’s really nice. But if you walk into a Tier 1 bank or service provider and there is a Unix town or Windows town, and the networking town, the storage town, who runs that? So then you just get into a whole set of new challenges.

Nigel Tozer: Due to the accelerated shift towards cloud, we have introduced a cloud-specific training course to help operations teams for exactly this reason. They are often traditional IT folks who, all of a sudden, are being asked to manage and facilitate the transition to the cloud.

James Hall: Traditional SAN [storage area networks] and NAS [network-attached storage] will never die. They are definitely shrinking and every organisation I’ve seen in Europe says it wants to shrink that, but there are certain things that will stay there. ERP [enterprise resource planning], CRM [customer relationship management] – things that run the business will never transition out of there, and that will be a very small percentage of the written data. 

There are lots of choices for the channel to present to customers, and they seem to be increasing. Does that make it harder for partners?

Andy Corcoran: The channel in the UK is fairly mature, and I think we can be really confident that they still focus on what the problem is rather than just saying hyper-converged is the answer. I think they will always continue to do that, which is why we agree that primary storage around traditional NAS and SAN can’t disappear. It will be those new workloads that drive the move towards the new generation of storage architectures, whether that be software-defined datacentres or anything else. I’m confident that channel partners will keep close enough to their customers to make sure the requirements fit.

James Hall: We need a sensible partner to say, ‘Yes, it’s really good in the right use case, but what are you trying to achieve?’ Because, in my view, most customers will end up, depending on their size, with a little bit of traditional and a lot of something else. At that point, their monitoring tools and automations, all those things, become important.

We spent years moving from distributed storage and silos into centralised, and we’re almost saying we’ll probably end up with a bit both again. We need the partners to be able to articulate why both products or both technologies are needed and how they help customers manage their silos effectively, and then how we make those portable. Because one platform is very different and talks in a different language to the other, portability then, and being able to analyse across platforms or being able to take it from both platforms and send it to the cloud, becomes really important.

Ezat Dayeh: It’s the same when you look at the cloud. You need to decide what you are going to use it for. You want to know what technology is the best fit and solves your issues, as an organisation. For me, the cloud is like a hotel, where you can go and stay, have access to all the available services, and you don’t have to do anything yourself but pay. But the reality is, we don’t live in hotels, we all live in our own homes – for me, that’s the equivalent of on-premise. It’s all about where it fits and what’s the correct use case to implement a certain solution.

Have you seen some people checking out of that hotel and choosing to take data back on-premise after having been in the cloud?

Ezat Dayeh: It usually happens when somebody in the boardroom says, ‘I heard a lot about this cloud stuff. It sounds really good. Let’s do it.’ Making the move because it’s fashionable, without studying the needs of the organisation and looking for solutions and technologies that serve it best, is far from ideal. It’s not a utopia for everything, but marketing around the cloud is so good, it makes it look like the perfect solution for all organisations. That’s how businesses end up wanting to repatriate from the cloud.

Jerry Rijnbeek: But if you look at hyper-convergence, at least in the past two years I have seen customers that are over 95% hyper-converged, including hospitals with their patient management systems. So although a large-scale bank may still have many legacy applications, even they are starting to migrate away from them. But from a general partner’s perspective, many customers are going 100% hyper-converged successfully.

Andy Corcoran: I think people recognise it’s not necessarily new either, and it’s not the first iteration we’ve seen – the idea that we want to deliver a utility-type service in the datacentre. We’ve got better capabilities, we’ve got better tools, we’ve got better infrastructure to be able to provide that. And I think what’s also happening is some of the vendors are providing very easy ways to consume hyper-converged infrastructure these days.

James Hall: I think budget cycles have to run together unless there’s a specific use case you can find in that organisation – VDI [virtual desktop infrastructure] or something – that would drive the requirement for more storage or more compute and therefore makes sense to have that in a single container. If you’re just replacing your storage to do that, and go and buy some hyper-converged infrastructure and compute which you don’t necessarily need, I think that is probably the thing that’s holding the hyper-converged onslaught back a bit.

For a lot of people the storage asset will run for five years and a server asset runs for about two, maybe three, so they’re slightly out of sync. When they’re in sync, there’s an opportunity to do a hyper-converged infrastructure replacement; where they’re not quite, then you have to build up the right kind of TCO [total cost of ownership] and business case to really wipe both out at the same time – or find that specific use case to get it on the floor so that the service is there.

Bob Plumridge: The other thing that’s driving demand is that most customers are seeing their storage lifecycle shorten, and I think that’s worrying them more than the server products because of the total disruption of having to migrate multiple petabytes of data from platforms every two years.

Jerry Rijnbeek: I’ve seen that happen a couple of times over the past 12 months, where a customer’s board of directors are pushing software vendors, saying if you don’t support our new virtualised or hyper-converged platform and stick with your physical requirements, then you might be out. So it works in both directions.

Nicolas Maigne: From the flash perspective, it is for specific use cases, and more often than not it is all-flash, because the hyper-converged concept is about bringing the data closer to the compute, and flash is the best way to do it. A couple of years ago it was a mid-market thing, but now it is going to the enterprises, in those greenfield projects and specific use cases.

Jerry Rijnbeek: I think it’s changing too. I see a lot of large enterprise customers going fully hyper-converged, including VDI, CRM and patient management systems. For channel partners there are a massive number of customers that are just ready to go. That’s why they want a simple platform: instead of managing five or six different vendors and products to run an infrastructure, switching to maybe one or two, and building efficient self-service portals on top of that, simplifies the infrastructure.

Ezat Dayeh: Analytics is one of the drivers we’re seeing. It’s always better to have the data locally rather than have to drag it over the network. So, the more you can do – all in one consolidated system – the better.

Chris James: It’s quite interesting to see the clash that you’re getting from modern IT. You’re talking about millisecond [ms] and microsecond [µs] application response across the infrastructure, and then you look at the accounts department that’s signing this off: they’ve got a software budget, a hardware budget and maybe a storage budget and a services budget – where does application-centric infrastructure performance monitoring fit in?

Jerry Rijnbeek: We see a lot of partners skilling up. They’re retraining their consultants and their sales organisations to be able to interconnect the hardware with their workflows and service management portal, and so on.

James Hall: In general, there’s a huge opportunity for vendors and partners. There’s a huge service opportunity for skilled coders to say we have five or six APIs [application programming interfaces], we’ll tie this together for you and that’s three months’ worth – boom, they’re in, they’re done. If you look at a large Wipro or an HCL, when they’re taking on a contract and they look at all the gaps in the process, they put 100 engineers on that gap and they just fix it. From a services point of view, for us or for anyone or for our partners, it is a massive opportunity. In the world we’re moving into, of composability, there’s a huge opportunity.

Jerry Rijnbeek: For the engineers we hire, it’s a mandatory skill, and our field engineers have to be able to do that.

Is the channel able to talk about DevOps?

Andy Corcoran: I think they want to do the same level of consulting around it that they would want for the GDPR conversation. A lot of partners see the advantage of providing a message to their customers around transformational storage, whatever sort of workload they might be referring to.

Jerry Rijnbeek: The traditional reseller often comes from a world where they were integrating four or five different platforms into one, which were the very profitable projects for them. They’re now going in three directions. One is that they are offering ‘as-a-service’ models: DR [disaster recovery] as a service, backup as a service, replication as a service, etc. The second is that a lot of resellers are switching to the DevOps part by retraining their engineers to do that, and the final one is to focus on GDPR. You see almost every single channel partner going in one of these directions, and some go in all three.

In the third and final part of our coverage of the MicroScope storage roundtable, the discussion turned to storage hardware and flash, and we find out how vendors are helping their channel partners to prepare for the future.

Read the first part of our roundtable coverage, on GDPR, here.
