
Interview: IBM on breaking hyperscaler lock-in

Big Blue’s general manager for Asia-Pacific, Hans Dekkers, explains why APAC enterprises are rethinking their reliance on public cloud, discusses the resurgence of the mainframe via LinuxOne, and reveals how IBM used its own AI tools to cut internal costs

For a long time, the adage in the IT world was that “nobody gets fired for buying IBM”. Today, the technology giant is trying to carve a new reputation as a neutral broker in a fragmented, multi-cloud world.

With rising geopolitical tensions and data sovereignty laws in the Asia-Pacific region, CIOs are increasingly wary of being tethered too tightly to a handful of major cloud hyperscalers. Hans Dekkers, IBM’s general manager for Asia-Pacific (APAC), believes this anxiety presents opportunities for the company’s open hybrid cloud and domain-specific AI strategy.

Unlike Amazon Web Services (AWS), Microsoft or Google, IBM is not vying to be a general-purpose public cloud supplier. Instead, it is pitching itself as the connective tissue – using Red Hat’s open-source software to allow applications to move freely between on-premise servers and various public clouds.

Computer Weekly spoke with Dekkers about the challenges facing APAC boardrooms, why he compares Red Hat to blood types, and the company’s client-zero approach to proving the returns from investments in artificial intelligence (AI).

Editor’s note: This interview was edited for clarity and brevity.

What are the most critical issues keeping CIOs awake at night right now?

I see five foundational challenges everywhere. The first is that every big enterprise, and even government, now manages a very heterogeneous environment – on-premises, off-premises, and multiple hyperscalers.

The second is the explosion of data; very few companies have actually monetised that data or extracted the right value from it. The third is automation – if data is exploding and your infrastructure is everywhere, how do you contain it without automation? The fourth is a lack of skills, especially regarding emerging technologies.

These challenges compound. If you don’t organise these four well, you open yourself up for the fifth challenge: security. The Covid-19 pandemic showed us that everything is connected; if something breaks, the whole ecosystem breaks.

Now, with geopolitical pressures, companies really need to rethink how they construct their architectures in light of that great tech risk. When you look at data sovereignty, AI models and compliance with local laws, reliance on a single external ecosystem becomes risky. Across the region, there is a deep belief that independence is an increasing priority on the boardroom agenda.

How would you advise clients to minimise that great tech risk?

There’s an analogy I like to use regarding humans. We have different blood types – A, B, and so on – but there is one blood type that is universal: O negative. If you are rushed to the hospital, doctors give you O negative because it is compatible with everyone.

Red Hat is the O negative in enterprise IT. That’s the reason IBM bought it. It is open source and can be deployed on AWS, Google, Microsoft, on-premises, on the mainframe, or on an x86 blade.

If you create your applications and data on Red Hat OpenShift (Kubernetes), you aren’t tied to the underlying “blood type” of the vendor. From a hyperscaler’s perspective, they want you on their native architecture because it is very sticky. I advise clients to deploy on something that gives them flexibility and control. If the cost suddenly becomes too high, or new data legislation passes, you want the flexibility to move. This O negative layer gives you that.


Some organisations are also warming to sovereign cloud to minimise that risk, but they may not have the skills to do so. Is this a barrier to gaining that independence?

You don’t need to run your own sovereign cloud to have independence. It goes back to control and flexibility. You have to use the innovation of the vendors. But if you don’t want a certain vendor anymore, or a vendor does something you don’t like, as long as you have thought through your architecture correctly, you are in good shape. That could mean running on-premises or using a different hyperscaler. If the underlying architecture allows you to do that, then life’s good. You’ve basically secured the next decade of operations.

The same goes for AI. You can use the AI models from the big vendors, but we believe in small domain-specific models that are created and owned by the enterprise. And if you want to run those models on-premises and not in a hyperscaler’s cloud in a different country, we can guide you in those decisions.

IBM advocates for open source, yet it still sells proprietary systems like the Z mainframe. Isn’t that a contradiction for customers worried about lock-in?

I have to keep telling this story because IBM is often known for our past, not our future. The Z mainframe is a proprietary system for good reasons – it’s hyper-resilient and runs critical infrastructure for banks and airlines.

However, we also have LinuxOne, which is basically mainframe hardware running Linux instead of proprietary Z software. You can deploy thousands of Red Hat containers on a LinuxOne machine.

Imagine you have thousands of containers in a hyperscaler’s cloud, and the cost of ingress and bandwidth becomes too expensive. If you deployed on Kubernetes, you can move that workload to a cost-effective LinuxOne platform. You get the hardware resilience, encryption and performance of a mainframe, but you are running it on an open Linux platform.

So, everything we do – AI, our software and even our hardware – will be compatible with this open world. There are a few areas where we believe in proprietary technology because of the nature of the workload, but I would say that 90% of what we do is completely open.

What is IBM’s strategy when it comes to public cloud? It appears that IBM’s public cloud footprint is limited to a handful of locations in the APAC region.

We are not a hyperscaler like AWS or Microsoft – we’re not in that game. However, we are in the specialised game of providing cloud services for regulated industries and bringing sovereign cloud capabilities to local markets.

A great example is our partnership with Bharti Airtel in India. They are a massive telco with phenomenal infrastructure, but they don’t have the enterprise stack to service their clients or governments. We’ve partnered with them to bring the IBM open hybrid cloud stack to their infrastructure.

We are seeing similar partnerships, such as with Telkom in Indonesia. This resonates because we aren’t going with the lens of “you need to move your data and compute to my cloud.” We meet the client where they are.

IBM reported a strong third quarter, and the stock market has responded positively. What’s driving that growth, specifically in the APAC region?

The company has been doing really well under [CEO] Arvind Krishna’s leadership. While it’s great to see the stock market react, I’m more interested in the voice of the client.

What clients are telling us is that they understand IBM much better today than they did in the past. We’ve lost our way a little bit in the last decade or so, but we are coming back and being seen again as a deep technology company.

The other factor, which is very different from 10 years ago, is that we are independent. We don’t have a public cloud in the same way AWS or Microsoft do; we have no interest in pumping your data and compute into a certain cloud. Our interest is serving the enterprise.

We are one of the few tech suppliers that can provide independent advice because we don’t have a vested interest in what you use. That proposition is being understood by the market and rewarded by investors.

There is a lot of noise around AI right now. Everybody has a strategy slide, but few seem to be executing at scale. What are you seeing on the ground?

There is a lot of noise and confusion. The skill sets to truly understand this technology, even at the board level, are often missing.

The problem is that organisations are still organised by the GE management model from 1946 – siloed by product or geography. AI offers the most benefit when it cuts across multiple silos and integrates workflows.

We have applied this to ourselves in a programme we call client zero. We started about three years ago, applying AI to our own supply chain, HR, customer service and client interactions. By integrating over 70 workflows with this technology, we have been able to reduce our internal costs by $4.5bn. That is remarkable, and it wouldn’t be possible without taking a workflow approach rather than a siloed organisational approach.

Is AI increasingly driving IBM’s engagements with customers in the region?

IBM is a pretty simple company today. We are focused on three emerging technologies: hybrid cloud, AI for enterprises, and quantum computing. These three are deeply connected, because if you have your hybrid cloud strategy in order, you can implement your AI strategy much better and faster. And in the future, quantum will be a natural extension to that.

In terms of AI, our customers all have a different problem statement. Some come at it from a sovereignty perspective. Others come in through cost, saying, “I want to use AI, but my IT budget is burned because certain vendors are raising prices.”

Essentially, they have lost their ability to be agile. We are helping clients use AI to rethink workflows, but sometimes that also means halving their hyperscaler bill by moving workloads back on-premises. Other times, it could be about replacing old technology or automating processes. There’s a lot we do depending on the pain point of our clients, but all of them have to do with regaining flexibility and control. 

Do you see AI fundamentally changing enterprise software, perhaps diminishing the role of applications like SAP?

I think all companies will create their own agents to automate workflows. IBM has created an orchestration layer called Watsonx Orchestrate to govern these agents from different software vendors – we support about 100 out of the box now – to ensure they work together.

Five to 10 years from now, I think a lot more interaction will be machine-to-machine rather than human-to-software. There will be an interface in between, but the governance layer will be through these agents. It’s an interesting philosophical question of where that goes, but for now, the focus is on integrating these workflows to regain agility.
