
Red Hat CEO on AI moves and source code kerfuffle

Matt Hicks talks up Red Hat’s efforts to support generative AI adoption through OpenShift AI and weighs in on the issues surrounding the company’s decision to limit access to RHEL source code


At the Red Hat Summit earlier this year, Red Hat deepened its platform capabilities with OpenShift AI to address the needs of organisations that are set to add more artificial intelligence (AI) workloads into the mix of applications that run on OpenShift.

The move is a natural extension of the company’s goal to be the platform of choice for application developers and infrastructure operators to build and run applications in a distributed IT environment that spans public and private clouds, as well as the edge of the network.

With OpenShift AI, Red Hat is providing a standardised foundation for creating production AI and machine learning models. It has also teamed up with IBM on Ansible Lightspeed, with Big Blue training its Watson Code Assistant to write Ansible automation playbooks.

Red Hat’s AI moves, however, were somewhat overshadowed by the reaction from the open source community over its decision to limit access to the source code of Red Hat Enterprise Linux (RHEL) to its customers. The decision, announced about a month after the summit, was aimed at preventing rebuilders from profiting from RHEL code without adding value to the software.

In an interview with Computer Weekly, Red Hat CEO Matt Hicks talks up the company’s efforts to support the use of generative AI across the hybrid cloud environment and the competitive landscape for machine learning operations (MLOps) tooling. He also weighs in on the RHEL source code kerfuffle, and how Red Hat is addressing community concerns over the decision.

Could you unpack some of the key announcements at the recent Red Hat Summit and what they mean for the company moving forward?

Hicks: I’ll start with AI and go backwards because I think it has become pretty clear that AI, by nature, is going to be a hybrid workload. You’re probably going to train models in large environments, and then you’re going to run those models as close to your users as you can. We’ve believed in open hybrid cloud for a long time, and that’s an exciting workload that gets customers in the hybrid architecture mentality.

Most enterprise customers, because of things like ChatGPT, are trying to figure out the impact of AI on their business. It gets them to think about how they can do hybrid well, and the bulk of our summit announcements is about setting the foundation for hybrid, whether it’s for traditional apps, cloud-native apps or AI workloads.

We do this in a couple of different ways, starting with the secure supply chain work we’re doing. As you know, technology stacks are changing quickly, and so when you’re delivering a foundation, whether it’s on-premise, in public cloud or eventually towards the edge, understanding the provenance of that foundation and knowing that it’s secure is critical, especially as stuff moves out of your datacentre.


Service Interconnect is the second piece we announced. It makes it easier for applications to connect to components across the hybrid cloud without having to stand up SSH [secure shell protocol] tunnels and VPNs [virtual private networks]. We’re really excited about that because we believe that AI is not going to exist by itself – it’ll be running next to applications which have to interconnect from training environments to where your business runs today.

The third piece is the developer hub. We’ve seen many enterprises that use OpenShift build their own portals to collect their assets and point their developers to where they should start, such as the images and services to use. That work is so common that, if you have a secure foundation and you build applications that span multiple locations, being able to publish and consume those assets to enable broader development teams is equally critical.

What about addressing some of the challenges with AI, such as explainability, particularly with large language models that have billions, even trillions, of parameters?

Hicks: There are two parts to that – there’s one part we do, and the other part depends on the model creator. I’ll talk about the work we did on Ansible Lightspeed with IBM to deliver domain-specific AI generation capabilities where you can ask for a playbook, and we’ll generate it for you. While ChatGPT is very broad, this is very specific and suits Ansible really well.

And to your point, one of the things we highlighted was sourcing, specifically where the AI recommendations came from, because we’re in the business of open source. Licensing, copyright and trademark rules are important – you just can’t take any code you want and put it in any other code. We wanted to make sure we demonstrated what could be possible.
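To give a flavour of what a tool like Lightspeed produces, below is a hand-written sketch of the kind of Ansible playbook a natural-language prompt such as “install and start nginx on my web servers” might yield. The prompt, host group and module choices are illustrative assumptions, not captured Watson Code Assistant output.

```yaml
# Illustrative only: a hand-written playbook of the kind a prompt
# such as "install and start nginx on my web servers" might yield.
# Not actual Watson Code Assistant output.
- name: Install and start nginx
  hosts: webservers        # hypothetical inventory group
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```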

Where this breaks down is actually across the two stacks. With OpenShift, we help to support a whole class of work in DevOps – source code management, peer review, tagging, knowing your release modules, running pipelines and then publishing code. That’s what we do really well in OpenShift. We can take this whole collection of stuff and move code from laptops to production.


AI models are not all that different. In terms of the discipline required, you need to know the model you started from. If it’s generative AI, you need to know exactly what data you brought in and how you trained or did refinement training or prompt engineering. You need to be able to track the output and test against it before you publish it into production, so if a result changes, you know where it came from. This is the tricky part as data changes so quickly that you can’t just publish it and not retrain it. Retraining is going to be as constant as code generation.

So, what we do in OpenShift AI is MLOps – pulling in data, training models, and using very similar pipelines as you would with code. But you need to have a foundation model, and this leads to how the model was trained in the first place, which is something Red Hat does not do. It’s done by the likes of IBM, Meta, OpenAI and other model generators, and within Hugging Face, there’s also a lot of open source model generation.
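As an illustration of that “pipelines for models” discipline, here is a minimal sketch in the style of Tekton, the engine behind OpenShift Pipelines. All task names and parameters are hypothetical; it simply shows the fetch-data, retrain, test-against-baseline, publish sequence Hicks describes, with the dataset version tracked as a pipeline parameter so a changed result can be traced back to the data that produced it.

```yaml
# Illustrative only: a Tekton-style pipeline sketch showing the
# MLOps discipline described above. All task names are hypothetical.
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: model-retrain
spec:
  params:
    - name: dataset-version     # recorded so results are traceable
      type: string
  tasks:
    - name: fetch-data
      taskRef:
        name: fetch-dataset             # hypothetical task
      params:
        - name: version
          value: $(params.dataset-version)
    - name: train
      runAfter: ["fetch-data"]
      taskRef:
        name: train-model               # hypothetical task
    - name: evaluate
      runAfter: ["train"]
      taskRef:
        name: test-against-baseline     # hypothetical task
    - name: publish
      runAfter: ["evaluate"]
      taskRef:
        name: push-to-registry          # hypothetical task
```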

In the case of IBM, they tightly control their model because it was domain-specific to Ansible. They tightly controlled what they trained against so they could drive that core attribution at the end. There are two different camps – some train on everything publicly available, giving you those massive parameter models where attribution will always be a challenge. Then there’s Hugging Face, which has a lot of specialised models that may start with a foundation model but are bounded to domains.

Our goal is to make sure we can add that discipline to what you started with. What did you change in terms of data? How did you retrain? What were the results and where was it published? There’s a lot of training right now, but in the next year or two, we think we’ll move more into the inference space and how you iterate becomes critical.

Are there plans to work with other players in the market apart from IBM? Also, Red Hat has deep relationships with hyperscalers which also have MLOps capabilities – what are your thoughts on the competitive landscape?

Hicks: One of the reasons why we don’t do models is that we want to make sure we’re a platform company. Our job is to run the best models in the best way possible. How can we use RHEL and OpenShift to bridge a model – whichever one it is – to Nvidia, Intel or AMD hardware to drive training and inference? Not being in the model space makes us a natural partner with everybody and it really becomes a hardware statement. How can we get the most out of the training environment on OpenShift distributed computing, and then inference, which a lot of times comes closer to core RHEL or maybe a smaller OpenShift instance? So, that’s the first layer.

The second layer when we look at OpenShift AI is that we partner with a lot of other companies today that add specialised capabilities, whether it’s Starburst with its federated data queries, or others. It’s exciting to see the work that IBM is doing on watsonx. They’ve utilised OpenShift AI heavily, but they were comfortable with OpenShift to start with. Our goal is to make sure that as a platform company, we have that neutrality and independence. I’m glad we can serve IBM, but there will be other partners as well because there’s just so much specialisation and so many niche offerings in this space.


I met with SUSE’s CEO recently and we spoke about the recent decision by Red Hat to limit access to RHEL’s source code to its customers. A lot has been written by Red Hat executives to explain the rationale, but how are you framing the issue for customers and addressing community concerns over the decision?

Hicks: I’ll tackle that in two ways. On the community concerns, I think half of it is people just starting to realise how we have broadened access to RHEL, whether it’s RHEL for teams, multiple instances available for non-production use, or free RHEL available to individuals and hobbyists. Our goal first is, if you are a contributor to Linux, we never want to stand in your way of using our products. And I think we probably removed a lot of those barriers almost a year ago. Is there room to improve as people use RHEL more? Absolutely, and that’s one part of making sure RHEL is available to that audience.

When we get to communities that want to build a specialised Linux or start from some of the work that we’ve done but take it in a different direction, our argument would be that CentOS Stream, in terms of the next version of RHEL, provides you with everything you need. If you want to make more aggressive changes to it, Fedora provides you with everything you would need. Your contributions there can then flow into RHEL, if we choose.

The bit-for-bit rebuilding of RHEL just doesn’t serve a use for us. Now, someone can always go into CentOS and all the code is there to recompose, but our preference for Linux distributions is to add something novel or specialised that makes a distribution better in ways that were not possible before, rather than reproducing our operating system as closely as you can.

As for our customers, most of them don’t live in the same world as the community builders. Our source policy with RHEL covers our customer base extremely well, because if you need the source code as a customer, you’ll get it. We have customers that have used RHEL and CentOS, and that’s certainly a decision point for them. But Linux is the most available operating system on the planet, and so they have plenty of options to choose from. We’ll always want to make sure we can serve them with RHEL, but it hasn’t really been a customer challenge.

I’d say the challenge is communities feeling like we took something away from them. Half of that is just not being super familiar with CentOS Stream and not being familiar with the ways that RHEL is available to them. We’ve been in open source for a while and any change you make in open source tends to get really strong reactions. We still hold true to open source – we still open source everything we do, and we still contribute back a tremendous amount for every dollar we make.

In your letter to Red Hat employees about the recent layoffs, you mentioned the importance of focusing on things that Red Hat does really well. Can you elaborate on what those things are and what you hope to achieve?

Hicks: It’s a great question and I start almost every company meeting saying, ‘Let’s be comfortable that we are a platform company’. We’re going to sit above hardware, and in the world of edge, on new consolidated boxes outside the datacentre. And we’re going to connect that to applications, whether they are traditional applications or new cloud-native apps. And then you’re going to have AI workloads going forward.

Our job, from the RHEL days to middleware with JBoss to OpenShift and distributed computing, is to make sure that developers who want to build with our platform have the widest reach possible. That’s important because there are so many things changing right now. When you look at just the intersection of edge and AI, to be a platform company, we have to serve that market and that class of workload, which means we have to invest in engineering and sales.

We’ve had this incredible run and opportunity of making Linux, OpenShift and Ansible successful in the enterprise. But the walls of the datacentre are shifting, and new technologies are changing how enterprises build things. That’s our next opportunity to tackle, and there’s still plenty of work going from datacentres to cloud. But we have to keep that relentless focus on being a platform and serving those use cases.

That’s what we want to do very well, and some of that work is in the operating system space in areas like securing the software supply chain, PyTorch optimisations or Nvidia integration. Some work will go into distributed computing, which is what we’re doing with OpenShift, and there’s a whole lot of work in orchestration around Ansible.

We will certainly invest in areas outside of those three, but if I do my job right, you’ll never see us invest in an area that you can’t pin back to that platform use case. I think that’s a pretty big market for us right now. We know the dynamics of this market, we know how to sell into this market, and we have the talent in this market. There’s enough opportunity in these evolving areas to focus on.
