Billed as Linux for the cloud, Google’s Anthos multi-cloud management platform promises to make it easier for enterprises to manage their applications regardless of where they choose to run them – on different public cloud services or on-premise.
Previously known as Google Cloud Services Platform, Anthos was announced at the recent Google Cloud Next’19 event, bringing open source technologies such as Istio and Kubernetes into a managed cloud technology stack.
In an exclusive interview with Computer Weekly, Urs Hölzle, senior vice-president for technical infrastructure at Google, talks up the rationale behind Anthos, the platform’s similarity with Linux and the future of cloud.
What’s your thinking around Anthos and cloud infrastructure stacks?
Urs Hölzle: The way we look at Anthos is in the evolution of software stacks. The last time things fundamentally changed was in the mid-90s when we had Linux, Windows Server, Java, Ethernet networks and the World Wide Web, which all came together to push IT in a different way.
At that time, two stacks emerged – one was the LAMP stack and the other was around Windows Server. But they were focused on a single node, whereas the cloud is a sea of systems out of which you manage services.
Today, there’s no common stack for cloud. Amazon Web Services and Google Cloud Platform are not stacks. It’s silly that something as simple as starting a container is different everywhere, so we’re saying let’s standardise that through open source, which is what we’ve done with Kubernetes.
The software ecosystem behind Anthos is much bigger, with two to three dozen different things that do service management, service discovery, security and so on. Most of these have been done before, but they’re done in many different ways. With Anthos, we have an open source way of doing it with different adaptors that connect to the underlying environment. So as a user, you just learn one way of, say, configuring a service and you’re done.
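The "one way of configuring a service" idea can be illustrated with a standard Kubernetes manifest. The sketch below is a hypothetical Deployment, not taken from the interview; the service name and image are placeholders. The point is that the same declarative file can be applied unchanged to GKE, an on-premise cluster, or any other conformant Kubernetes environment.

```yaml
# Hypothetical example: the same Deployment manifest works on any
# conformant Kubernetes cluster, regardless of where it runs.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-service            # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-service
  template:
    metadata:
      labels:
        app: demo-service
    spec:
      containers:
      - name: demo
        image: gcr.io/example/demo:1.0   # placeholder image
        ports:
        - containerPort: 8080
```

Applied with `kubectl apply -f`, this file behaves the same on any cluster that implements the Kubernetes API, which is the portability Hölzle describes.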
I often compare Anthos to Linux for the cloud. Although it is not an operating system, it has the same properties as Linux. Today, if you choose Linux, you can choose what runs underneath and on top of it. Our proposition is if you choose Anthos, it’s like choosing Linux. It’s open, high quality and you can run it anywhere. It’s the first cloud stack that’s actually a cloud stack – it’s an extension of Linux and Kubernetes, and all the open source systems fit in nicely.
I also expect a second cloud stack to emerge around Windows, because there are so many Windows workloads that need to go somewhere. Maybe they’ll go into Anthos through containerised Windows, with applications, mostly around security and compliance, provided by Google running on top of it.
You likened Anthos to Linux for cloud – do you expect forking to emerge at some point?
Hölzle: If you look at the underlying open source ecosystem which doesn’t have a name – I think we’ll probably get one at some point – I’d say around half of the projects are already supported by the Linux Foundation and the CNCF (Cloud Native Computing Foundation).
The other half are not yet – Istio, for example, is still a collaboration between Google, Red Hat, IBM and Lyft. While it’s not yet in the CNCF, it’s a matter of maturity and it will eventually end up in an open source foundation. Anthos is to these projects what GKE (Google Kubernetes Engine) is to Kubernetes – it manages open source Kubernetes. All the packages are open source and managed by Anthos, but you can run them yourself if you want to. And it’s fully open, with contributions from other companies, not just Google.
How is Anthos different from what Red Hat is doing with OpenShift and OpenStack?
Hölzle: What we’re doing with Anthos overlaps with Red Hat by only 10%, mostly in container management. We’re natural partners, and Anthos needs to sit on OpenStack or VMware. It needs something that turns bare metal into a cluster where you can run virtual machines (VMs) and containers.
Also, 70% of the functionality we deliver is based on open source and can run on OpenStack. The same goes for VMware. We’re not replacing VMs – you can still use them without needing to containerise applications.
We’re partner-friendly and we don’t see Anthos conflicting with what our partners are doing. For instance, OpenShift doesn’t auto-update the things that run on top of it, and it has applications that we don’t have. It sounds similar because one of the things we promised is cross-system portability, but that’s only because open source is inherently portable.
At Next’19, Google demonstrated how legacy applications can be migrated to the cloud with Anthos Migrate. It appears to address the lift-and-shift migration strategy at first glance. What about enterprises that wish to decompose and rewrite those applications for the cloud?
Hölzle: We didn’t have time to show everything, but it’s not just lift-and-shift. It’s really more about modernising or containerising applications. This demo will work with GKE On-Prem – you can take an on-premise VM, containerise it and keep it on-premise. It’s really about moving it into the Anthos ecosystem and being able to configure it and see the service graphs.
The power of Anthos Migrate is that the destination can be anywhere that runs Anthos. The key thing is that it’s about modernising applications either in place or while you migrate, and moving them into a container ecosystem along with the security benefits.
The choice of where to run the containerised application is yours. It’s not about taking a VM and moving it to the cloud; it’s about moving it to an Anthos-managed cloud. When you standardise on Anthos, it’s a modernisation move, not just a migration move – a modernisation tool, not a migration tool per se.
There have been debates on VMs and containers – do you think in the fullness of time, we will all go with containers rather than VMs?
Hölzle: To me, containers are really about service management and bundling services in a release mechanism. You can run one container per VM, with much stronger security boundaries around a workload.
In the future, this will just be a configuration question: do you deploy a service with a VM boundary around each instance, or do you auto-scale instances? You can change your mind about the implementation, whether it’s built around containers or VMs, but you don’t change your code and nobody will know – that’s the key thing.
It is hidden pretty deep in the configuration, and it does not influence your programming model or service discovery. There are advantages and disadvantages of containers and VMs that are complementary, so I don’t think it’s going to be all one or the other. There’s no conflict and you’ll still need both.
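The "configuration question" Hölzle describes already exists in Kubernetes in embryonic form: a RuntimeClass lets operators run a pod's containers inside a stronger sandbox, such as a lightweight VM, without touching the application. The sketch below is hypothetical; the handler name and image are assumptions that depend on what the cluster's nodes have installed.

```yaml
# Hypothetical sketch: choosing a VM-backed isolation boundary per
# workload via Kubernetes RuntimeClass, without changing application code.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: vm-sandbox
handler: kata                    # assumed handler; depends on node setup
---
apiVersion: v1
kind: Pod
metadata:
  name: isolated-demo
spec:
  runtimeClassName: vm-sandbox   # one line decides the isolation boundary
  containers:
  - name: demo
    image: gcr.io/example/demo:1.0   # placeholder image
```

Removing the `runtimeClassName` line puts the same pod back in an ordinary container, which is exactly the "hidden pretty deep in the configuration" property described above.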
There has been much buzz in the industry on hybrid cloud but do you think all workloads will eventually move to the public cloud?
Hölzle: I won’t say all, but most workloads will move to the public cloud. It will take a long time because the economically rational thing is for it to take a long time. Meanwhile, we’re trying to support hybrid cloud in a natural state, because for many large companies hybrid could take five to 10 years.
That said, some workloads will remain on-premise, such as factory controls that require millisecond responses. The cloud is just too far away. But it does not mean you should use different programming, deployment and security models just because it’s in the factory. Our answer with Anthos is you can use the same models, even if you change your mind about where to run your workloads.
What does that mean from a skills perspective? Does that mean companies need to unlearn what they’ve learned?
Hölzle: Most organisations have not learned enough. This is a big opportunity, because the best talent today is being trained on multiple environments. A security person, for example, would need to understand not just on-premise security but also security in different public clouds.
With this open source layer plus Anthos, you only have one skills target. It’s no longer HP-UX versus Solaris – it is Linux, and you get more ROI [return on investment] for your training because you’re training to remain relevant for a longer time. It’s a huge opportunity to get up the curve, because now you don’t need separate teams for cloud and on-premise.