Inside NetApp’s transition into a data management firm

NetApp’s leaders in Asia-Pacific discuss the company’s pivot into data services and its traction in the region

About five years ago, NetApp conceived the idea of a data fabric as a means to provide applications with access to data no matter where they were hosted, marking its transition into a data management company.

NetApp has since reorganised itself to realise that vision, including the creation of a cloud software unit. It has also inked key partnerships with leading cloud suppliers that have been getting a growing slice of the world’s enterprise workloads.

In an interview with Computer Weekly on the sidelines of the inaugural NetApp Innovation Day in Singapore, Sanjay Rohatgi, NetApp’s senior vice-president and general manager in Asia-Pacific (APAC), and Matthew Swinbourne, its CTO for cloud architecture in the region, discuss the company’s business in APAC, its pivot into data management and where storage plays in its future.

Where do you see the market going in the Asia-Pacific region? Having been in the job for six months, how has it been so far? And what are some of your key priorities in growing NetApp’s business in the region?

Sanjay Rohatgi: It’s been very exciting and a new domain for me, coming from Cisco and Symantec. As I talk to our customers and partners, it’s very clear that hybrid multicloud will be the de facto architecture. That’s why we announced a very tight partnership with Microsoft on Azure NetApp Files (ANF), along with similar partnerships with Google Cloud and Amazon Web Services.

But we’re seeing some cloud repatriation as well. The cloud economics may not work out in some cases, so some customers are starting to move from the public cloud back to private clouds in their own datacentres.

I think we are in a very strong position as a company, particularly around helping customers to build their own data fabric for the hybrid multicloud world. That’s going well for us and has started to produce good results for us in the region.

Another thing we’re seeing – and that’s why we came out with Keystone – is that capex (capital expenditure) buying is not going to be the only way people are going to consume technology. While there will still be an element of that, organisations are at the same time looking at a cloud-like consumption model, whether it’s on a subscription basis or as a metered utility.

I’ll give you an example. In Australia, we had a similar model for what we call data fabric as a service, which was being used by many of our key customers there. So, Australia was leading the world for us, and we’ve now globalised the service under the Keystone banner. We are now going to replicate that success in other markets, such as Japan, Singapore, India, Taiwan and Korea. I think if we don’t chime in with the business model of our customers, we will not be successful.

Storage is still very important – don’t get me wrong, but the narrative is changing on how you make the storage smart enough in the new paradigm.
Sanjay Rohatgi, NetApp

In addition, we’re giving customers the flexibility to continue buying some technology from us on a capex model. I think none of our friends in the industry have done this to the same extent that we have with Keystone, which was the missing piece for us. The combination of Keystone and being the data authority for the hybrid multicloud, coupled with our tight partnerships with hyperscalers, positions us uniquely in the industry.

Could you talk about what being a data authority means to NetApp and its customers? Also, in the APAC region, multicloud adoption is still in its early stages. How will that affect your growth prospects?

Rohatgi: I think the name of the game in our industry is that it’s all about capturing enterprise workloads, you know, the SAP-type, high-performance compute, video surveillance and core database workloads, either through our SAN (storage area network) footprint in enterprises or our cloud partners.

You’re right that multicloud adoption is still in its initial stages in APAC, although it’s not the same everywhere. In developed markets like Australia and Japan, we are seeing a lot of those transitions. Singapore is catching up, and India is another big market where we’re starting to see enterprise workload migrations happening faster than you think. So, while I agree with you that it’s in its early stages, those that are slower in adopting multicloud for enterprise workloads will catch up very quickly.

NetApp’s transition towards being a data management company started five years ago. How has that transition been progressing over the years? Will there be a time when you might say that you’ve met that goal already?

Matthew Swinbourne: I don’t think we will ever say that. I think the nature of the new economics and the new world is that we will just keep finding the next challenge. It will just be ever evolving. It’s very different working at NetApp before and after we started to become a data management company. For example, we used to release something new every 12 months, but now we talk to customers to find out what problems they have, and we use those inputs in our product development cycle.

We will also propose ideas to them. In fact, data fabric was the result of us talking to some customers who wanted to have all that goodness that we gave them in the datacentre, in the cloud. So we built our transformation around that jointly with customers.

Rohatgi: Just to elaborate, we have recently started that exercise with our business units where leaders like Matt offer field insights on the features and functionalities we need to develop and that customers are willing to pay for. The days of saying we’ve got this unique feature are gone – you need to be customer-centric in whatever you develop. That, I think, is the name of the game. We have midnight calls to go through that for every single product we have. And that’s unique, because that will fundamentally help us to differentiate ourselves in the market, where some of our competitors still have that autocratic mindset. We are very strong in technology, but we are shifting to become a very customer-centric company.

Swinbourne: Your question about being a data authority seems counterintuitive to what we are saying about being customer-centric. But we become that authority by gleaning those customer insights, distilling them, creating solutions and then presenting them to prospective customers. So together, we are creating the future. It’s not a case of us dictating to our customers what they will buy and how they will work.

How are you ensuring that the organisation is moving along that transition and that the sales force is accustomed to selling services, as opposed to storage boxes?

Rohatgi: We are changing the narrative of our salespeople and how they approach customers. We now talk to customers about what they are trying to achieve, rather than say this is the best storage box in the world that you can buy from us. It’s about trying to help deliver business priorities for the customer.

For mid-market customers, we have launched a portal for them to explore what’s the right technology for them. They can go through a series of questions and get some recommendations on the right solutions from NetApp that they can procure. It literally takes a few minutes. And you know, APAC has millions of mid-market customers, so the portal will make life simpler for that segment of our customers who don’t have a big IT department or the in-house talent to make the right decisions.

Against this backdrop that we just talked about, what is the role of storage, then, in the larger scheme of things? Will NetApp not sell storage hardware one day?

Rohatgi: I think the whole idea is that we lead from the cloud to sell our portfolio. One of the first things we’re doing on the storage side is moving from disk to flash. In every market we operate in, we’re the number one or number two player. In major APAC markets, we’re also seeing disk-to-flash migration.

To answer your question, at the end of the day, you will still need storage with the right performance tiers and efficiency guarantees. That’s what we are focused on, because you will need very smart storage, whether you want to build a new app or a new business model. The idea is to make things smarter so that customers get the performance guarantees and the efficiencies from our boxes.

Even with Azure, it’s all about ANF, which offers very high-performance guarantees, high-efficiency guarantees and very low latencies. So, when you migrate all those heavy enterprise workloads, all these parameters come into play. That’s why each time Azure sells big enterprise workloads, they push ANF as well. Storage is still very important – don’t get me wrong, but the narrative is changing on how you make the storage smart enough in the new paradigm.

Swinbourne: Our innovation – if I look back at our history – has always been about software, not about hardware. As hardware changes, we embrace new hardware, but we innovate in software. These days, if you’re not innovating in software, you are making a mistake. With our HCI (hyperconverged infrastructure) platform, we are innovating with cloud solutions, Kubernetes services, compliance engines, those sorts of things. We want that same experience to be available to private cloud and hybrid cloud users. That needs to run on something and so, today, we provision that experience onto NetApp HCI.

I understand it is possible to run multiple workloads on NetApp HCI. Could you elaborate on how that works? Are there any overheads?

Swinbourne: I’d say we’ve got far less overhead than our competitors. We have two classes of boxes, if you like – a compute box and a storage box. When a customer wants to scale one or the other, there is no tax imposed on them by us to scale everything at the same time. If I look at some of our friends in the industry, they require you to scale compute, memory and storage all together. They might have compute-heavy or storage-heavy boxes, but everything makes a jump together.

And our competitors design their solutions for VDI (virtual desktop infrastructure), SAP or Oracle workloads. Our platform has a quality-of-service engine built into it. It’s a fundamental part of the operating system. So, when we run SAP on our platform, we guarantee a minimum throughput. We can ensure that quality is maintained, no matter what is happening elsewhere on the infrastructure. That’s the big difference. A lot of our competitors offer services to quieten the noisy neighbours. We offer that, of course, but all our data platforms offer performance and efficiency guarantees.
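To make the idea of a per-volume performance floor concrete, the sketch below builds a JSON-RPC-style request that pins minimum, maximum and burst IOPS on a single volume. It is loosely modelled on the Element software that underpins NetApp HCI storage; the method and field names here are illustrative assumptions, not a documented NetApp API contract.

```python
# Hypothetical sketch: expressing a per-volume QoS guarantee (floor/ceiling/
# burst IOPS) as a JSON-RPC payload. Names are assumptions for illustration.
import json

def build_qos_request(volume_id: int, min_iops: int,
                      max_iops: int, burst_iops: int) -> str:
    """Build a payload that asks the storage layer to guarantee a minimum
    IOPS floor for one volume, regardless of noisy neighbours."""
    payload = {
        "method": "ModifyVolume",          # assumed method name
        "params": {
            "volumeID": volume_id,
            "qos": {
                "minIOPS": min_iops,       # guaranteed floor
                "maxIOPS": max_iops,       # sustained ceiling
                "burstIOPS": burst_iops,   # short-term burst allowance
            },
        },
        "id": 1,
    }
    return json.dumps(payload)

# e.g. guarantee an SAP data volume 5,000 IOPS
print(build_qos_request(42, 5000, 15000, 20000))
```

The point of the floor (`minIOPS`) is exactly the "minimum throughput" guarantee described above: the scheduler throttles other tenants before it lets this volume dip below its floor.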

And the overhead isn’t there. We don’t have to run another management engine, like a lot of the other HCI platforms. There’s no extra software to load on – it’s a data service; it’s configurable through VMware and other hypervisors and it’s all cloud integrated. Our fabric orchestrator orchestrates between the hybrid cloud and multiple HCI instances to protect data. And it’s all machine and policy-driven, which is important in the world of containers and microservices because you don’t have time to manage infrastructure.

From an engineering perspective, are you working with the major hyperscalers and the open source community to advance some of the capabilities in Kubernetes? For example, automatically orchestrating backup and recovery isn’t really built into Kubernetes.

Swinbourne: Absolutely. First, the number one problem in Kubernetes is persistent data management. It’s a massive problem, not just providing data, but backing up and replicating it. There are some capabilities already in our fabric orchestrator, but there are plans to make that even more automatic. Today, even though fabric orchestrator is in private preview, we can do things like automatic replication, so when an application is provisioned, the policy kicks off and we start making a snapshot schedule to protect that persistent data. And we can build that policy up to replicate that to another site if we want.

We are also contributing code to the community – Trident, our persistent data plugin, is open source. Our NetApp Kubernetes Service is also upstream Kubernetes, so it is Google’s version of Kubernetes. If I look at a lot of our competition, some of them take upstream Kubernetes and plug a whole bunch of code into it. The command lines are different. The Ansible integration is different. All the integration is custom and different. We don’t do that. We believe in the open source community and we don't want to dictate to our customers what they should do.
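As a concrete illustration of what a Trident-backed persistent volume looks like in practice, the manifest below sketches a StorageClass that uses the open source Trident CSI driver, plus a PersistentVolumeClaim that consumes it. The class name, claim name and backend parameter values are assumptions for an ONTAP NAS backend, not the only possible configuration.

```yaml
# Sketch: Trident-provisioned persistent storage in upstream Kubernetes.
# Names and parameter values below are illustrative assumptions.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ontap-gold
provisioner: csi.trident.netapp.io   # the open source Trident CSI driver
parameters:
  backendType: "ontap-nas"           # assumed backend type
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sap-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: ontap-gold
```

Because the claim is declarative, a protection policy of the kind described above – snapshot schedules and replication kicking in when an application is provisioned – can be attached to the volume without the application itself changing.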
