Pascal Brier, Capgemini: AI will prove its enterprise truth this year
Capgemini’s chief innovation officer says the 2025 rise and deployment of artificial intelligence agents put enterprise AI progress on hold, but laid the ground for acceleration to come
Capgemini has said that this year will be the one when artificial intelligence (AI) finally proves its business value.
Pascal Brier, group chief innovation officer at the consultancy and systems integrator, says similar viewpoints advanced at the start of 2024 and 2025 proved premature. The rise and deployment of AI agents both put progress on hold and laid the ground for an acceleration to come, he says.
Brier has been chief innovation officer at Capgemini since 2021. Before that, he spent more than 15 years at Altran, the French consulting firm acquired by Capgemini in 2019, following a decade at Microsoft in senior marketing roles.
In this interview with Computer Weekly, Brier discusses the trends and difficulties of enterprise AI adoption, emphasising a significant gap between industry advances and user implementation. He notes that more than 90% of companies have started AI journeys, but many struggle with scaling and return on investment.
His firm advocates a three-step framework for enterprise AI adoption: AI essentials; AI readiness; and human-AI chemistry. He says that only 15% of companies have achieved advanced AI integration.
Brier also talks about the impact of AI agents on business processes, predicting a shift towards self-healing and self-monitoring systems. And he touches on a trend of multicloud strategies for risk mitigation and business continuity in the context of asserting digital sovereignty, particularly for European companies.
Many commentators thought 2024 would be the breakout year for proving the business value of AI, especially generative AI. Then they thought 2025 would be. And yet AI adoption at companies and organisations is behind what the industry is producing. Do you have thoughts about what the blockers are?
First, I agree that there is a gap. There is a definitive gap between the amount of focus, money and time that has been spent on AI over the past three years and what we see in terms of adoption – and especially adoption at scale, because adoption, as such, is not a problem.
More than 90% of companies have started the AI journey, so adoption has started now. But we see a gap both in terms of scaling that to the enterprise level and getting the return on investment on the time and money spent on the technology. That’s where we currently have a gap, but it’s not so different from what we had with cloud or some other technologies when they came in.
The big difference was the way AI appeared. GenAI [generative AI] appeared in our lives all of a sudden on 30 November 2022, with ChatGPT, and it was like the world was starting over. Everybody was caught by surprise. That’s maybe the second reason why many companies have walked before they could crawl, and they’ve run before they could walk.
What I mean by that is that there has been an underestimation of the complexity and the change that was required to take the opportunity of AI, and what it could bring. I do not blame the [user] companies for that, because I think that, on the other side, the vendors have created some hype, and they’ve oversold and maybe underdelivered.
“More than 90% of companies have started the AI journey, so adoption has started now. But we see a gap both in terms of scaling that to the enterprise level and getting the return on investment on the time and money spent on the technology”
Pascal Brier, Capgemini
As a vendor, when you say “this is a deterministic model” or “it’s a probabilistic technology”, you understand what it means. As a client, I’m not sure you understand what it means and the effect it’s going to have.
And then there are the hallucinations, despite all the data quality [good practices], data access, avoiding data silos, and so on. I think that’s what companies discovered, and that’s what prevented many of them from going from proof of concept to the enterprise level.
We see that with many of our clients, and that’s the reason why at Capgemini we’ve come in with a framework – and we are not the only ones. We identify three steps for clients to take. The first we call “AI essentials”. Do you understand what it means to be probabilistic and non-deterministic? Do you understand how you can build in guardrails? Do you understand how you can use small and big models and the implications of each? Do you understand what it means to train your own models rather than take a public one?
If you understand all of this, then you can go to what we call the “AI readiness” [stage]. Do you have the right infrastructure? Do you have the right level of data and access to data? Do you have the right level of governance around the system in terms of, for example, defining ethics? Otherwise, you’re going to build something like 100 proofs of concept. They’re going to have different guardrails, different ethics models, different ways of being governed, and it’s going to be impossible to manage. So that’s the second level.
Once you’ve done that, there’s a third level that we call “human-AI chemistry”, which is where, at some point, AI is just a technology, and you realise the technology alone doesn’t bring any return on investment. Return on investment is brought by the application of technology to an organisational process, which is mainly a human process.
If you put AI into that, you go from people using AI to people working with AI. Using AI as just a tool – so, when you need it, you go to the AI, you ask a question, you prompt it, you get a result, and you do something with it. That is a way to augment, to some extent, what you can do, but it’s not really bringing any large return on investment.
We estimate at Capgemini that doing that kind of thing will bring maybe a 3% to 8% increase in productivity, which is good, but it’s not what some people claim when they talk about 30% to 50% – something you will never do by just using AI. But if you put AI at the centre of some of the things you do, and you have people working with AI and trusting it to do things, that’s a different matter. But very few companies have reached that level where they have the essentials, they have the readiness, and they have worked on the human-AI chemistry.
So when we talk about our top five trends*, the reason we say the first one – that this is the “year of truth for AI” – is because we think we are starting a new cycle after three years.
How does that go down with CIOs and CEOs at your clients? You said very few companies have reached that third stage, where they’ve got what you call human-AI chemistry going on, and there are real synergies, and so on – how many do?
We estimate it to be around 15%. Many companies are jumping to the conclusion: “We want to use AI to make some productivity gains.” That is too vague. What did you call productivity [before]? Is it the time it takes to perform one task? Is it the number of people doing that task? Or is it the number of items that you can process in a day? Some people want to do more, some people want to do more with less, and some people just want to spend less.
It needs more iteration on the way you define productivity, because in many cases, if you didn’t do that in the very beginning, it’s very difficult to measure in the end. Where people tend to be disappointed is in how much they get in terms of return. But we see nobody wanting to go back and stop doing it [AI projects]. No company tells us: “It’s not working. We’re giving up.”
What changed in 2025 is that this new technology of AI agents blurred the whole landscape. The dust was starting to settle on the models, the small and the big, the open source, and all those things we started to sort out. People started to understand that it was not a one-size-fits-all type of world where you had to be full ChatGPT, but you could also tailor some smaller models to your own needs. And all of a sudden, you have these AI agents coming on top. And people say models don’t matter anymore, it’s all about agents now. So we had another strike in terms of technology push, to some extent, which made the situation a bit more complex. I guess this year, in 2026, we have all the pieces of the puzzle, but that doesn’t mean we solve the puzzle.
Do you think 2026 is the year when we stop talking about different types of AI?
That’s what I would say. We speak about business, and then we’ll apply the technology that fits into that business. At Capgemini, from the very beginning, we were adamant about the fact that GenAI would not solve all the problems of the world.
There are some classical domains of engineering where truth matters. You cannot go with a probabilistic model. You cannot say: “This is the best possible option, and that’s going to be enough.” When you’re designing planes, when you’re designing bridges, when you’re building semiconductors, you have to have a proof of truth.
I can see why, in customer service scenarios and for customer experience (CX), where you’ve got chatbot agents and human agents working together, agentic AI makes sense. But outside of customer relationship management (CRM), it seems less relevant – or, to the extent that it is relevant, it looks like a revisiting of robotic process automation (RPA).
First, there are different types of agents. You can build a personal agent in 15 minutes that would do something for you, like automating searches on the internet, and that’s fine. To some extent, that is RPA on steroids – you could do that 15 years ago if you could code. Now you can do it without coding.
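As a rough, hypothetical illustration of that “RPA on steroids” point – not a Capgemini example – the sketch below shows how little code such a personal search agent can involve: it polls a couple of placeholder pages for a keyword and reports any matches.

```python
# Minimal sketch of a "personal agent" that automates a repetitive web search.
# Hypothetical illustration only: the URLs and keyword are placeholders, and a
# production agent would add scheduling, error handling and notifications.
import requests

PAGES = [
    "https://example.com/news",      # placeholder pages to watch
    "https://example.org/releases",
]
KEYWORD = "agentic AI"               # term the agent looks for on each page


def check_pages(pages, keyword):
    """Fetch each page and report whether the keyword appears in it."""
    hits = []
    for url in pages:
        try:
            response = requests.get(url, timeout=10)
            if keyword.lower() in response.text.lower():
                hits.append(url)
        except requests.RequestException as exc:
            print(f"Could not fetch {url}: {exc}")
    return hits


if __name__ == "__main__":
    for url in check_pages(PAGES, KEYWORD):
        print(f"'{KEYWORD}' found on {url}")
```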
“Over the past 25 years, processes have been captured by the applications and put into the applications. So, companies are not defined by their processes, they are defined by their applications. What these agentic systems will change in the future is to put the process outside of the application and use the application as a peripheral”
Pascal Brier, Capgemini
But when it comes to an agentic system – which is agents working with agents, agents working with humans, humans using agents on their behalf or together with them – that’s a bit different. It’s more complex. At the same time, it’s far more powerful. What we see in the value of that technology is that it’s going to be applicable to any process. So not only customer experience or CRM, but any process which is currently defined within an organisation can be augmented or replaced by an agentic system – [although] maybe not today.
That’s what we called our “intelligent operations” trend. What we mean by that is that over the past 25 years, processes have been captured by the applications and put into the applications. So, companies are not defined by their processes, they are defined by their applications. What these agentic systems will change in the future is to put the process outside of the application and use the application as a peripheral.
Today, it is the other way round.
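To make that inversion concrete, here is a hypothetical sketch – the class, adapter and step names are invented, not Capgemini’s – in which the business process is defined outside any single system and each application is called as an interchangeable “peripheral”.

```python
# Hypothetical sketch of the inversion Brier describes: the process is defined
# outside the applications, which are invoked as interchangeable "peripherals".
# All names here are invented for illustration.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Step:
    name: str            # human-readable step in the business process
    application: str     # which application "peripheral" performs it
    action: str          # operation requested from that application


class ProcessOrchestrator:
    """Runs a process definition by delegating each step to an application adapter."""

    def __init__(self, adapters: Dict[str, Callable[[str, dict], dict]]):
        self.adapters = adapters   # e.g. {"crm": crm_adapter, "erp": erp_adapter}

    def run(self, steps: List[Step], context: dict) -> dict:
        for step in steps:
            adapter = self.adapters[step.application]
            context = adapter(step.action, context)   # application acts as a peripheral
        return context


# Stub adapters standing in for real systems of record.
def crm_adapter(action: str, context: dict) -> dict:
    context[f"crm:{action}"] = "done"
    return context


def erp_adapter(action: str, context: dict) -> dict:
    context[f"erp:{action}"] = "done"
    return context


if __name__ == "__main__":
    process = [
        Step("Capture order", "crm", "create_opportunity"),
        Step("Raise invoice", "erp", "create_invoice"),
    ]
    orchestrator = ProcessOrchestrator({"crm": crm_adapter, "erp": erp_adapter})
    print(orchestrator.run(process, {"customer": "Acme"}))
```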
Is this the meaning of your metaphor of the Copernican Revolution – where the future break is to have applications revolve around processes rather than the other way round – as with planets correctly being understood, after Nicolaus Copernicus, as revolving around the sun?
Yes, but it’s going to take some time again. We signal 2026 as the beginning of it. It is going to change the way we can build processes, but it’s also going to change the way we’re going to build applications, and that’s why it’s going to have an impact on software engineering. The way you build applications will change dramatically as the AI agents mature.
Another of Capgemini’s trends for the year pertains to, let’s say, European digital sovereignty and what you call the “borderless paradox of tech sovereignty”. You say, “since full tech autonomy does not exist, organisations will focus on risk mitigation and selective control over key layers”. What do you mean by that?
What we mean is that we see a lot of clients taking sovereignty as almost a philosophical or geopolitical thing, which it is, in a sense. But as a CIO, that’s not the way you should approach it. If you approach it as a philosophical matter, you will never get any answer to your questions, because the more you dig into it, the more you will see that it’s impossible to build something that would be fully sovereign. There will always be a layer where you can’t have an alternative.
Today, if you want to train your own models, you have to go through Nvidia using their GPUs [graphics processing units]. And if it’s not Nvidia, it’s going to be someone else who is also not sovereign. Even if you can do that, then you will reach a level where the hardware on top will be provided by Dell or HP or whoever.
If you are trying to be sovereign on all stacks, either you will make a decision not to buy anything for the next 15 years, or you will be disappointed, or you will spend your time on something which has no solution. So rather than focusing on the what, which is the sovereignty, focus on why you’re doing it. Why are you willing to get some sovereignty? When we talk with clients, it always ends up with two things: risk mitigation and/or business continuity.
Leave, to some extent, geopolitical things to the politics of Europe and the EU and how they build that. As a CIO, if you want to make progress, focus on the things that really matter.
We see more and more clients going multicloud, which is new. Most of our clients, four or five years ago, would make a choice, and they would go all-in with one provider.
There is more diversification now, but that is not for technology reasons. It’s either for business continuity or risk mitigation.
*Capgemini’s five technology trends to watch are: the year of truth for AI; AI is eating software; Cloud 3.0: all flavours of cloud; the rise of intelligent ops; and the borderless paradox of tech sovereignty.
Read more about business AI innovation and adoption
What is enterprise AI? A complete guide for businesses: Enterprise AI tools are transforming how work is done, but companies must overcome various challenges to derive value from this powerful and rapidly evolving technology.
Steps to implement AI in your business: AI technologies can enable and support essential business functions. But organisations must have a solid foundation in place to bring value to their business strategy and planning.