
Interview: GitLab CTO on freeing developers for innovation with AI

Sabrina Farmer explains how GitLab’s platform for the software development lifecycle is using artificial intelligence to help eliminate developer toil and drive innovation

Since becoming chief technology officer (CTO) of GitLab in 2024, Sabrina Farmer has been focused on harnessing artificial intelligence (AI) to help developers automate mundane tasks, enabling them to focus more on innovative projects to take their businesses forward.

And in doing so, Farmer, who spent 19 years at Google where she last served as vice-president of engineering for core infrastructure, believes GitLab has a unique advantage with its DevSecOps platform, where all data – from code and continuous integration/continuous delivery (CI/CD) pipelines to issues and merge requests – resides in one place. This, she contends, enables GitLab to build AI tools that can reason across the entire software development lifecycle (SDLC).

In a recent interview with Computer Weekly on the sidelines of the GitLab Epic conference in Singapore, Farmer discussed GitLab’s AI strategy, the challenges of AI adoption even within her own teams, why leaders should be sceptical of AI-generated answers, and how the metrics of business growth might be about to change.

Editor’s note: This interview was edited for clarity and brevity.

You’ve taken the helm as CTO at a time when AI is fundamentally reshaping software development. What’s your core philosophy on the role of AI, and how is that shaping GitLab’s strategy across the SDLC?

Our goal with all our customers is to help them create their AI strategy. I spend a lot of time helping them understand that it’s not just about the entire SDLC, but it’s so much more than that. Ultimately, what I want is to help companies have more time for the next generation of innovation. That’s how I think about AI.

My teams are spread across 58 countries, so we are thinking about this at a global scale. We make decisions asynchronously so they can happen wherever in the world people are working. But how do you truly leverage that, especially when technology is evolving so quickly? It’s even challenging us to get everyone on the same page. So, I’m spending a lot of time on change management, making sure people are following along on the journey.

What’s good about the way we work is that we always record our meetings, and our team members are rapidly prototyping and creating videos of their work as they go along. We have demos coming in every day from around the world. We’re also creating technology to amplify that. We have a GitLab Knowledge Graph, which takes your entire code base and creates a graph showing all the dependencies, enabling us to speed up onboarding of developers.

Think about how people used to onboard new team members and get them familiar with the code base. This could take six months, even a year or two, for a reasonably sized code base. We’ve created a researcher agent that can answer almost any question about your entire ecosystem. People can onboard so much faster today. My job now is to challenge my own assumptions about how people can adopt this technology and ensure they’re plugged in to absorb the change we’re producing.
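The deterministic questions a dependency graph can settle without a model can be sketched roughly as follows. This is not GitLab’s actual Knowledge Graph implementation, and the module names and edges are invented for illustration; the point is that “who depends on this?” is a plain graph traversal:

```python
from collections import defaultdict, deque

# Hypothetical dependency edges: module -> modules it imports.
DEPS = {
    "billing": ["payments", "auth"],
    "payments": ["http_client"],
    "auth": ["http_client", "crypto"],
    "reports": ["billing"],
}

def dependents_of(module: str) -> set:
    """Every module that depends on `module`, directly or transitively."""
    reverse = defaultdict(set)
    for mod, imports in DEPS.items():
        for imported in imports:
            reverse[imported].add(mod)
    seen, queue = set(), deque([module])
    while queue:
        for dep in reverse[queue.popleft()]:
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

print(sorted(dependents_of("http_client")))  # ['auth', 'billing', 'payments', 'reports']
```

A new joiner asking “what breaks if I change `http_client`?” gets an exact answer from the graph itself; no LLM, and no six months of tribal knowledge, is required for that class of question.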

How is the adoption of AI maturing among your customers, given concerns around governance and the way some of these AI tools work?

We have an advantage over other providers because our customers’ data doesn’t have to leave the platform. Because the AI tools are built into the platform, we can implement policies on top of it without losing context. For example, with the Knowledge Graph, you don’t really need a large language model (LLM) to answer questions about something that’s already deterministic. That alone is a huge, powerful tool.

You have to be sceptical of AI in the same way you are with a growing workforce. You will always have team members who are pleasers, who just try to tell you “yes” whether or not they know what you’re talking about
Sabrina Farmer

But to your point, adoption is hard for people. Interestingly, I watched how slow adoption was internally at GitLab, even though we were building the tools and invested in the success of GitLab Duo. We saw hesitation because it feels threatening. The press makes my job harder because they talk about how this will replace software engineers. I try to tell them I want to take away the day-to-day operational burden. If you look at how software engineers spend their time, I’m not going after the 20% of their time when they get to write code. I’m going after that 80% – the meetings, writing tests, and documentation that I didn’t love when I was an engineer.

I’m trying to give them more time for creative, visionary thinking. What should they build next? Some companies talk about freeing up developer time and then recouping that on the bottom line, which, to me, is about maintaining the status quo. The leaders who say, “I’m freeing up this time and reinvesting it in my business,” are the ones who will truly benefit.

It may change the growth curve investors are used to seeing, which might be the increase in the number of engineers year over year. I’d be more impressed with companies that have a slower growth curve but are producing more value and expanding their portfolio. That’s how you know they’re really leveraging AI and reinvesting in not just status quo, but what comes next for their business.

Do you think the technology is where it needs to be, even for maintaining the status quo? We’ve heard about AI models being sycophantic, acting like a cheerleader for developers and hiding errors. What are your thoughts on that?

Honestly, this happens with people too. People ask about what to do about AI hallucinations. You have to be sceptical of AI in the same way you are with a growing workforce. You will always have team members who are pleasers, who just try to tell you “yes” whether or not they know what you’re talking about.

The idea that you should automatically trust what the AI is doing is where people should be far more cautious than they are today. You should treat AI as something that’s just trying to be helpful. If it’s too easy to get the answer you wanted, maybe you should be a little sceptical and spend more time analysing it, considering what’s motivating it and the prompts that you gave it.

The phrase “be careful what you ask for” is very relevant when you’re talking to an LLM. I always tell my team to never accept the first answer. Do a follow-up. When I interviewed people at Google, it wasn’t about whether they gave me the right answer; it was about whether they truly understood their answer and identified problems with it. It’s the same thing in the AI world. Ask it a question to challenge whether it’s right, because I have found LLMs give a better answer after multiple questions than on the first try.

You mentioned challenging an LLM – we’ve heard of the idea of using one LLM to judge the output of another. Is that a sustainable approach, given that it could double your compute costs?

There are multiple ways to address it. To your point, you can’t do that for every single answer, which is why you’re never going to get rid of all the people. I tell people all the time – don’t try to apply AI to everything, especially when the answers are deterministic. An LLM is valuable when you need to reason about a problem with many different inputs, but if the answer is a simple yes or no, don’t ask an LLM that question. You’re paying way more than you need to for the answer.
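The routing Farmer describes can be illustrated with a minimal sketch, assuming a hypothetical application with a table of known facts and a stubbed-out model call (none of these names are GitLab APIs): deterministic questions are answered from state, and only open-ended ones pay for an LLM.

```python
# Hypothetical sketch: answer deterministic questions from known state,
# and reserve the expensive LLM call for questions that need reasoning.
FACTS = {
    "is_pipeline_green": True,   # known directly from CI state, no model needed
    "default_branch": "main",
}

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call (e.g. an HTTP request to a provider).
    return f"[LLM reasoning about: {prompt}]"

def answer(question: str) -> str:
    if question in FACTS:            # deterministic: a plain lookup
        return str(FACTS[question])
    return call_llm(question)        # open-ended: pay for the model

print(answer("is_pipeline_green"))   # "True" – no tokens spent
print(answer("why did throughput drop last sprint?"))
```

The yes/no pipeline question costs nothing; the “why” question, which genuinely needs many inputs weighed together, is the one worth sending to a model.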

We all appreciate the role of software bills of materials (SBOMs) for code provenance. As AI models become integral to applications, are you looking at supporting something like an ML-BOM for tracking the provenance of what goes into a model?

We’ve talked about this a lot. A year or two ago, everyone thought they needed to build and train their own models on their specific code base. In reality, we’ve learned you don’t need a model trained on that; you just need to be able to give context to a generalised LLM in a very efficient way. Because of that, I don’t think there’s as much demand for an ML-BOM from a general user perspective as we might have thought.

Don’t try to apply AI to everything, especially when the answers are deterministic. An LLM is valuable when you need to reason about a problem with many different inputs, but if the answer is a simple yes or no, don’t ask an LLM that question
Sabrina Farmer

That said, if you are a model provider and one of the top companies building foundational models, you absolutely need to worry about this. You need to know what inputs and data you’re training on. But I think that’s a super hard problem for a fairly small set of users. It’s not necessarily where we’re spending a lot of our time today.

What we are trying to do is give you the same power that we provide with an SBOM for your code. So, when you check in a model or related components, how can we give you the same inputs and controls to understand what you’re shipping, rather than treating it as something entirely different?

Looking at the GitLab platform, can you give us a glimpse of some of the gaps you are looking to plug in terms of capabilities and features?

What I found when I got here was that while we are comprehensive, there’s an opportunity in artifact management because things can change very quickly between your environments, especially with the use of open source. We’re putting more controls in place to ensure that what you build for development and staging is exactly what you ship to production, with revision control and policies associated with each stage.

Another area is the complexity of pipeline configurations. Customers often copy a configuration from another team and edit it, but they leave in a lot of stuff they don’t need because they don’t have time to understand all the variables. We’re creating a catalogue so you can adopt from a standard pipeline instead of a random team’s configuration.

More importantly, we’re using agents to help debug those complex pipelines. We have a customer with a central platform team that gets inundated with requests from hundreds of development teams asking, “What’s wrong with my pipeline?”. Instead of the central team trying to debug it, we can push this back to the development team and let the researcher agent debug it for the developer.

The agent will have access not just to the code base via the Knowledge Graph, but also to all the changes across the full stack, including dependencies of dependencies. So, when you ask, “What changed between my last build?”, it can identify a dependency which can be three layers away – something that only your most senior principal engineers could figure out. Now, a junior person on call can solve it.
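That kind of diagnosis amounts to flattening and diffing the dependency trees of two builds. A rough sketch, with invented package names and versions (and nothing resembling GitLab’s internal representation), shows how a change three layers down surfaces mechanically:

```python
def flatten(tree: dict) -> dict:
    """Collapse a nested dependency tree into {package: version}."""
    flat = {}
    for pkg, (version, children) in tree.items():
        flat[pkg] = version
        flat.update(flatten(children))
    return flat

# Hypothetical builds: the only difference is three layers down, in libssl.
LAST_GOOD = {"app": ("1.0", {"requests": ("2.31", {"urllib3": ("2.0", {"libssl": ("3.0.1", {})})})})}
BROKEN    = {"app": ("1.0", {"requests": ("2.31", {"urllib3": ("2.0", {"libssl": ("3.0.2", {})})})})}

old, new = flatten(LAST_GOOD), flatten(BROKEN)
changed = {pkg: (old.get(pkg), ver) for pkg, ver in new.items() if old.get(pkg) != ver}
print(changed)  # {'libssl': ('3.0.1', '3.0.2')}
```

The senior engineer’s intuition here is really a traversal plus a diff; once the graph and build metadata live in one place, an agent can run it for whoever happens to be on call.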

That changes what people think they need. For example, a customer recently asked for improved analytics and observability dashboards. But in the future, it’s not about having an analytical dashboard; it’s about having something that can answer any question you have. The dashboard itself was never the powerful part; it was the human insight needed to interpret the data.

Today, AI can provide that insight directly. You don’t need a bunch of dashboards – you need something that can take all the event data and answer the ultimate question of why something is happening. So, when a customer asks for dashboards, I’d say, “Let me give you an analyst agent” instead. This agent can look at all your data and tell you not just that you’re doing better or worse, but why. It can see a slowdown in merge request submissions and correlate it with the team’s calendars to find out that they were at a summit, which is an insight a dashboard could never provide.

Microsoft has absorbed GitHub into its CoreAI division. Have you seen GitHub customers moving over to GitLab because they don’t trust Microsoft to maintain the openness of the platform?

We’ve had a lot more customer calls talking about this, asking if we can do what GitHub used to do for them. I’m not sure that I’ve seen a mass exodus yet, but we’re having a lot more conversations than we were having before. Interestingly, it’s the bigger companies, the big tech titans, that are knocking on our door today. Before, they might have thought, “Oh, GitLab is so small”. But now, they want to have conversations and partner with us. They like that we’re very focused on the SDLC and they don’t feel we compete with them. The only question we have to answer is, can we scale to their demands? We can, of course, as we already support 50 million developers around the world.
