
VMware doubles down on Kubernetes play

New Tanzu portfolio reflects efforts by VMware to better meet the needs of enterprises that are warming to containers and Kubernetes when building new cloud-native applications

VMware Tanzu, the new portfolio of cloud offerings from the virtualisation pioneer, is arguably one of the biggest bets the company is making to support the growing adoption of containers and Kubernetes in the enterprise.

But managing Kubernetes clusters can be incredibly complex, especially for enterprises running their software on a variety of application platforms hosted on hybrid environments comprising public cloud and on-premises infrastructure.

In an interview with Computer Weekly on the sidelines of vForum 2019 in Singapore, Raghu Raghuram, VMware’s chief operating officer for products and cloud services, elaborates on the company’s efforts to support Kubernetes workloads and how it fits into its overall cloud strategy.

With the Tanzu portfolio, VMware is clearly doubling down on its Kubernetes play. Could you elaborate on VMware’s overall strategy around supporting Kubernetes workloads?

Raghuram: Over the last three years, we’ve been executing a multi-cloud strategy, building on our efforts since 2010 to help customers build software-defined datacentres through VMware Cloud Foundation. Then, around 2015, our customers saw hybrid cloud as the future of their datacentres and, at the time, Amazon Web Services [AWS] and Microsoft Azure were just coming up.

They also told us that the best hybrid cloud solution would be a combination of VMware and AWS, and that’s why we created VMware Cloud on AWS, which has proved very successful. Over time, as more customers started to use Azure and other hyperscale cloud providers, we started to broaden our focus on multi-cloud.

The question is, what kinds of applications are running on public clouds and in on-premises datacentres? Almost every one of our customers has applications they want to run in a modern datacentre, move to the cloud or rewrite as microservices.

For the first two types of applications, we had a great solution with VMware Cloud Foundation, enabling them to move applications to the cloud without refactoring them or changing their operational model. But if they want to build new cloud-native applications or refactor existing ones for the cloud, that’s where Kubernetes comes in.

We got serious about Kubernetes in 2017 when it became clear that the container orchestration platform was becoming the infrastructure runtime for modern applications. At the time, we teamed up with Pivotal to create Pivotal Container Service (PKS), which was built in two flavours – one was optimised for vSphere, while the other could run on any cloud.

While PKS was starting to be successful, we saw that customers had a problem in managing all their Kubernetes clusters. That led us to acquire Heptio, which was building what has now become Tanzu Mission Control.

Many of our large customers have Kubernetes clusters on vSphere, Amazon EC2 and sometimes bare metal. These are managed by different teams, making it difficult to manage and control everything. That was a problem we wanted to solve.
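To make that fragmentation concrete: even a basic inventory of clusters spread across vSphere, EC2 and bare metal means iterating over separate API endpoints and credentials. Below is a minimal sketch of that chore using the official Kubernetes Python client, with hypothetical kubeconfig context names; it illustrates the problem, not Tanzu Mission Control’s own API.

```python
from kubernetes import client, config

# Hypothetical kubeconfig contexts for clusters on vSphere, EC2 and bare metal
CONTEXTS = ["vsphere-prod", "ec2-dev", "baremetal-lab"]

for ctx in CONTEXTS:
    # Each context carries its own API endpoint and credentials
    config.load_kube_config(context=ctx)
    nodes = client.CoreV1Api().list_node().items
    versions = {n.status.node_info.kubelet_version for n in nodes}
    print(f"{ctx}: {len(nodes)} nodes, kubelet versions {versions}")
```

Multiply that loop by per-cluster policies, upgrades and access control, and the appeal of a single management plane becomes obvious.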

Then comes the next question on how we can help customers build and deploy new applications. Historically, we’ve relied on Pivotal as a partner to help customers modernise their applications. While Pivotal Cloud Foundry is a great platform, Pivotal last year decided to use Kubernetes as the default runtime for their developer platform.

Meanwhile, Spring Boot was becoming the de facto way by which people built microservices. So, we felt that by bringing Pivotal into the family, we could offer a very comprehensive solution to help customers build, run and manage their modern applications. That’s how the Tanzu portfolio came about.

One of the important pieces of Tanzu came from a realisation we had when we started building PKS: if we could reimagine vSphere as a Kubernetes system, it would be even more powerful. That led us to Project Pacific.

So, our Tanzu portfolio consists of the build layer, which comprises the Pivotal and Bitnami tools; the run layer, which is Project Pacific for vSphere and Enterprise PKS; and Tanzu Mission Control for management.

With this move, VMware is pitting itself against Red Hat’s OpenShift. There are organisations that use both Pivotal Cloud Foundry and OpenShift at the same time for different applications. How can an enterprise manage everything?

Raghuram: That’s the beauty of the Tanzu portfolio. While we talked about Tanzu in terms of build, run and manage, you can use any of its components separately. You might say, “I’m better off on OpenShift, but the rest of my enterprise is running on vSphere, so I’m going to run OpenShift on vSphere.” Or there could be a group running vanilla Kubernetes on Amazon EC2 and another group running Pivotal Application Service or Spring Boot. You can use Tanzu Mission Control to manage all of that. Customers can choose what’s best for them.

At the same time, we think that, over time, customers will want some standards and we will be competing for those standards. Project Pacific will be easier to run, deploy and manage than any other Kubernetes solution in the datacentre, simply because it’s vSphere. Tanzu Mission Control will have unique elements that OpenShift does not have, such as Wavefront, CloudHealth and Secure State, giving you a broader and richer management environment.

Let’s talk about the Carbon Black acquisition. Some analysts have pointed out that the acquisition will benefit VMware customers more than non-customers. What are your thoughts on that view?

Raghuram: Carbon Black is an independent technology today. The way it works is that it puts sensors all over the place to collect data, which is processed in the cloud to gain insights. With the acquisition, they don’t have to deploy sensors to customers using Workspace One, AirWatch or vSphere on the workload side. We will be automatically instrumenting those things to deliver the kind of data that Carbon Black needs.

From that point of view, it’s easier to have a VMware product in the mix, but that doesn’t change the back-end functions, such as behaviour detection. We certainly want to make these things work better together, but if a Carbon Black customer does not have Workspace One or AirWatch, it can still deploy the agents.

What are your observations about the implementation journey of customers that have adopted VMware Cloud on AWS?

Raghuram: Worldwide, when customers move from on-premises datacentres to the cloud, there are two approaches: they can take the application as it is, or they can rewrite it. If they want to rewrite an application, the return on investment (ROI) must justify pulling together a new development team. That applies to a small set of applications, but in most cases, there is no ROI in rewriting an application just to run it on the cloud.

For example, Freddie Mac, a large US mortgage company, is moving 10,000 virtual machines (VMs) running 600 to 700 applications to VMware Cloud on AWS. Their CTO stood on stage at VMworld in San Francisco earlier this year and said there was no way they were going to rewrite a hundred million lines of code. So, unless there is a real need, what do you do in those cases?


The conventional approach without VMware has been to hire consultants, repackage the applications, and find the right cloud infrastructure to run and retest them. Then, they need to get the IT operations team to learn all the new cloud tools. That’s an enormous amount of work, at a cost of roughly $2,000 per VM, or about $20m for an estate the size of Freddie Mac’s. With VMware Cloud on AWS, by contrast, you can literally move the application while it’s running, using a combination of VMware HCX and vMotion.

And because it’s the same environment in both places, there’s no need for retesting or a change in operational tools if you’re already using vCenter or third-party tools that work with vCenter. It allows the business to move fast and build or refactor applications while moving to the cloud. In the case of Freddie Mac, they wanted to get out of their datacentre, which is a common use case. Disaster recovery and running virtual desktops in the cloud are other common use cases.
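That continuity extends to automation. Anything scripted against the vCenter API keeps working after the move, because the endpoint is still vCenter. The sketch below uses pyVmomi, the open-source Python SDK for the vSphere API, with a hypothetical vCenter hostname and credentials; the same code can target an on-premises vCenter or the one inside a VMware Cloud on AWS SDDC.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical endpoint and credentials; lab-only TLS settings
ssl_ctx = ssl._create_unverified_context()  # verify certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="***",
                  sslContext=ssl_ctx)
try:
    content = si.RetrieveContent()
    # Walk the inventory and print every VM and its power state
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        print(vm.name, vm.runtime.powerState)
    view.Destroy()
finally:
    Disconnect(si)
```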

And the beauty of the VMware Cloud on AWS architecture is that we can connect to AWS services. So, if you want to get out of the business of operating databases, you can connect to Amazon RDS or Aurora. Next year, when we have Project Pacific in there, you can run modern Kubernetes applications as well.
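Consuming those native services is then ordinary application plumbing once the network path between the SDDC and the VPC exists. A minimal sketch, assuming a PostgreSQL-compatible Aurora cluster with a hypothetical endpoint, using the psycopg2 driver:

```python
import psycopg2

# Hypothetical Aurora PostgreSQL endpoint reachable from the SDDC
# over the connected Amazon VPC
conn = psycopg2.connect(
    host="appdb.cluster-abc123.ap-southeast-1.rds.amazonaws.com",
    port=5432,
    dbname="appdb",
    user="app_user",
    password="***",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])
conn.close()
```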

How does Tanzu support the internet of things (IoT) and edge computing use cases? In some cases, developers still find themselves recompiling an edge application whenever they add a new sensor to a sensor node.

Raghuram: There are different types of edge use cases, from sensors and gateways to branch offices, and the compute capabilities vary widely across them. But what’s emerging is that people want to run containers at the edge, and they will want to use Kubernetes. We’re able to run Tanzu on native clouds or in places where vSphere is being used as an edge solution. It will have Project Pacific and we can run containers there. And one of the design goals of Tanzu Mission Control is to be able to manage these edges. We’re very excited by IoT applications, exactly for the problem you’ve described. I’d much rather ship a container to the edge than compile the application from scratch.
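In practice, “shipping a container” to an edge site can be as small as pointing a Deployment at a new image tag and letting the site pull it, instead of rebuilding anything on the device. A minimal sketch with the Kubernetes Python client, assuming hypothetical context, namespace and image names for the edge cluster:

```python
from kubernetes import client, config

# Hypothetical kubeconfig context and workload names for an edge cluster
config.load_kube_config(context="edge-site-01")
apps = client.AppsV1Api()

# Roll out a new sensor-handling image; edge nodes pull the container
# rather than recompiling the application on site
apps.patch_namespaced_deployment(
    name="sensor-gateway",
    namespace="iot",
    body={"spec": {"template": {"spec": {"containers": [
        {"name": "sensor-gateway",
         "image": "registry.example.com/sensor-gateway:2.1.0"}
    ]}}}},
)
```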
