Zendesk once pushed its AI vision, but now customers are leading the charge. Its CTO, Jason Maynard, explains how this reversal is creating roles like the ‘bot manager’ and shaping the future of customer experience
Zendesk, a stalwart in the customer experience (CX) space, has been integrating artificial intelligence (AI) into its platform for over a decade. But where it once pushed its vision of how AI can help with customer service, its customers – now armed with board-level initiatives to deploy the technology – are the ones demanding AI capabilities.
This demand is forcing a rethink of not just the technology, but Zendesk’s business model. Moving away from traditional per-seat licensing, the company has introduced a resolution-based pricing model, charging customers for each issue resolved by its AI agents. This aligns the company’s incentives directly with customer outcomes, but also requires a deeper, more consultative partnership with its customers to ensure the technology delivers on its promise.
In a recent interview with Computer Weekly in Singapore, Jason Maynard, Zendesk’s chief technology officer for Asia-Pacific and North America, discusses this shift, the complex architecture required to power intelligent CX, the emergence of new roles to manage AI agents, and how the CX industry is being reshaped by automation.
Editor’s note: This interview was edited for clarity and brevity.
Tell us more about the work that Zendesk is doing around AI.
We’ve been looking at algorithms and AI in customer service for just over 10 years now, making our first big investment in a data science team back in 2014. But the thing that really changed in the last two years is that the market push has totally flipped. Back in 2018, we were pushing the vision of where AI was going. Today, our customers are coming to us saying, ‘We have a board-level initiative to go do this’.
AI is not a line-of-business initiative anymore. Executive teams and boards are thinking about it in two ways. One, how can AI improve our customer experience? Waiting 15 minutes or an hour for a response is a key source of customer dissatisfaction. For real-time businesses, like a streaming service for a live event, a two-hour response time is a horrible experience. So, customers are looking at how they can deploy automation to give customers real-time resolution without waiting in a queue.
The other is the CFO [chief financial officer] view, which is about taking significant cost out of the system through automation and reducing labour spend. Those two things are interrelated, but that demand from customers to improve CX and reduce cost using AI has really shifted. We now have over 10,000 customers using our AI products.
My focus is on what we’re delivering with AI, because just saying ‘AI’ doesn’t mean anything. We have algorithms integrated at every part of our product stack, each solving a different business challenge. This could be end-to-end automation with our AI agents, improving agent efficiency with the Zendesk copilot, revamping quality assurance [QA] by automatically scoring tickets, or using predictive algorithms for workforce management. The focus has to be on the outcomes, and hopefully, the technology eventually fades into the background.
How has the outcomes-based pricing model shifted the way Zendesk works internally to ensure it is focused on delivering value?
When your pricing model is directly aligned with customers – where the more success and value we build for them, the more that’s reflected in our top line – our incentives are aligned at the highest level. The biggest shift I’ve seen is that we have to be much more engaged on the ground in delivering value within a customer’s operation.
For example, a service engagement for AI agents starts with a pilot where our classification models analyse the customer’s historical conversations. We can identify the contact reasons that are driving the highest volume, lowest customer satisfaction (CSAT), or longest handle times. Based on the outcome the customer cares about most, we prioritise where to start. Some of those contact reasons are great candidates for full automation. For others with compliance risks, like trust and safety issues, the customer will want a human in the loop, making it a great use case for a copilot.
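To make that pilot analysis concrete, here is a minimal sketch in Python of prioritising contact reasons from classified historical tickets. The field names, sample records and ranking criteria are illustrative assumptions, not Zendesk’s actual data model.

```python
from collections import defaultdict

# Hypothetical export of classified tickets: each record carries the contact
# reason assigned by a classification model, a CSAT score (1-5) and handle time.
tickets = [
    {"contact_reason": "refund_status", "csat": 2, "handle_minutes": 14},
    {"contact_reason": "refund_status", "csat": 3, "handle_minutes": 11},
    {"contact_reason": "password_reset", "csat": 4, "handle_minutes": 3},
    {"contact_reason": "account_takeover", "csat": 1, "handle_minutes": 42},
]

def summarise(tickets):
    """Aggregate volume, average CSAT and average handle time per contact reason."""
    stats = defaultdict(lambda: {"volume": 0, "csat_sum": 0, "minutes_sum": 0})
    for t in tickets:
        s = stats[t["contact_reason"]]
        s["volume"] += 1
        s["csat_sum"] += t["csat"]
        s["minutes_sum"] += t["handle_minutes"]
    return {
        reason: {
            "volume": s["volume"],
            "avg_csat": s["csat_sum"] / s["volume"],
            "avg_handle_minutes": s["minutes_sum"] / s["volume"],
        }
        for reason, s in stats.items()
    }

# Prioritise by whichever outcome the customer cares about most,
# for example highest volume first, or lowest CSAT first.
by_volume = sorted(summarise(tickets).items(), key=lambda kv: kv[1]["volume"], reverse=True)
for reason, s in by_volume:
    print(reason, s)
```

In practice, a compliance-sensitive reason such as account takeover would likely be routed to a human-in-the-loop copilot rather than full automation, as Maynard notes above.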
We’re having much deeper conversations to help customers on their automation journey. It means bringing more expertise and consultation, which is different from Zendesk’s early days of being almost completely self-service. Today, we’re not just selling a piece of software; we’re helping a customer apply that software to achieve a specific outcome.
With the greater adoption of AI, what changes are you seeing in the skillsets and roles in your customers’ CX teams?
There’s a new persona emerging in CX teams. We call it the ‘bot manager’, but customers might call it an ‘AI architect’ or ‘automation architect’. It’s this new role that acts as the technical product manager behind the AI agent experience. You have to work with the content team to build out the relevant knowledge, and you have to work with the engineering team to expose an endpoint so the AI agent can take an action in a back-end system. This is a role that coordinates all of those activities.
And for 20 years, customer support teams have scaled through people. The best support leaders were experts at hiring, training and managing people. Now, those same leaders are being asked to scale a technology solution, which is a very different skillset. We’re seeing a reskilling of the whole industry, very similar to what happened in marketing between 2000 and 2010, when it became a much more technical, data-driven function. We’re seeing that same inflection point in CX right now.
The AI space is moving incredibly fast. How is Zendesk keeping up with developments like agent-to-agent (A2A) communication and the model context protocol (MCP)?
Our core philosophy is to abstract away the technology shifts from our customers, allowing them to define their service operations regardless of the underlying technology. A great example is foundation models. Every month there’s a new, more powerful model. We’ve architected our product to be agnostic of the underlying models we use.
Our AI agent, for instance, is a constellation of about five different LLMs [large language models], each performing a different task, from creating a task ontology to planning the conversation and generating dialogue. This is a complex architecture we don’t expose to customers. As new models are released, we benchmark them and put the best-performing ones in place without our customers having to re-engineer their setup.
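As a rough illustration of that model-agnostic design, the sketch below wires task-specific stages to whichever model currently benchmarks best. The stage names, registry and model identifiers are assumptions for illustration only, not Zendesk’s internal architecture.

```python
from typing import Dict

# Hypothetical registry mapping each pipeline stage to the model that currently
# benchmarks best for that task. Swapping a model here does not change the
# pipeline itself, which is the point of the abstraction.
MODEL_REGISTRY: Dict[str, str] = {
    "task_ontology": "provider-a/large-2025-05",
    "conversation_planner": "provider-b/reasoning-v3",
    "dialogue_generation": "provider-a/fast-2025-04",
}

def call_model(model_id: str, prompt: str) -> str:
    """Placeholder for a provider-agnostic inference call."""
    return f"[{model_id}] response to: {prompt[:40]}"

def run_stage(stage: str, prompt: str) -> str:
    """Run one stage of the agent against whichever model is registered for it."""
    return call_model(MODEL_REGISTRY[stage], prompt)

def handle_enquiry(enquiry: str) -> str:
    """A constellation of task-specific models composed into one AI agent."""
    ontology = run_stage("task_ontology", f"Map this enquiry to a task: {enquiry}")
    plan = run_stage("conversation_planner", f"Plan next steps for: {ontology}")
    return run_stage("dialogue_generation", f"Write a reply following: {plan}")

print(handle_enquiry("My refund hasn't arrived after 10 days"))
```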
Regarding A2A and MCP, it’s a bit of the Wild West right now. MCP is designed to solve the integration problem, allowing an AI agent to interact with an API [application programming interface] in a less brittle, more conversational way. We’ve already done proof-of-concepts around this.
A2A addresses how our service agent will work alongside a customer’s homegrown booking agent or another vendor’s sales agent, sharing context to create a seamless experience. It’s a challenge, and a lot of the plumbing is still manual and brittle. I don’t think a standardised protocol can come fast enough, honestly, to enable that ecosystem to evolve.
You mentioned using a constellation of different models. What is the thinking behind that, as opposed to building a single, proprietary LLM for CX?
We do a lot of fine-tuning on top of base models. Training a foundation model from scratch is extremely expensive, but the cost to use them has come down precipitously due to market competition. The industry has really coalesced around the major model providers. What we do is fine-tune these models for very specific tasks. For example, our task ontology agent is fine-tuned to create very high-fidelity, reliable task descriptions from procedures, which is critical for all downstream tasks.
The other main consideration is cost. We can optimise a lot of the compute and inference cost of solving an issue and make multiple model calls to get more robust and reliable results, because we have an agreed-upon price per resolution.
We don’t directly monetise other applications, such as generative search on a customer’s help centre. Generative search can be used millions of times, so we have to drive down the inference cost to serve search results in a more cost-effective way. Each application has a different value and cost profile. We’re working with different providers to bring down our inference costs for applications that need to scale to millions of model calls without necessarily increasing a customer’s cost.
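To make that cost argument concrete, here is a minimal sketch of per-application model routing. The model names, call limits and prices are invented for illustration; the point is simply that a high-volume application such as generative search is served by a cheaper model, while a workflow priced per resolution can afford multiple calls to a stronger one.

```python
# Hypothetical per-application routing policy. Prices and model names are
# illustrative; each application has its own value and cost profile, so they
# are not all served by the same model in the same way.
ROUTING_POLICY = {
    # application: (model, max_calls_per_request, cost_per_1k_tokens_usd)
    "generative_search": ("small-distilled-model", 1, 0.0002),
    "ai_agent_resolution": ("frontier-model", 5, 0.0100),
}

def estimated_cost(application: str, tokens_per_call: int = 1_000) -> float:
    """Rough ceiling on inference spend for one request in a given application."""
    model, max_calls, price_per_1k = ROUTING_POLICY[application]
    return max_calls * (tokens_per_call / 1_000) * price_per_1k

for app in ROUTING_POLICY:
    print(app, f"~${estimated_cost(app):.4f} per request")
```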
With so many moving parts, governance must be a major concern. What does the lifecycle for deploying and managing an AI agent look like?
On the product development side, we have evaluations and golden data sets that we use to test new models we’re deploying into production. We also do a lot of testing behind the scenes before we change a model or run a smaller version of a model.
On the customer side, the lifecycle goes from design to testing, deployment, and then the feedback loop. We start by analysing the top intent drivers to build a roadmap of automations. Then, we work with the customer to define the automation procedures to handle tasks, which fall into three tiers of complexity: simple Q&A from a knowledge source; lookups that require reading from another system like an ERP [enterprise resource planning]; and actions that require writing to a downstream system, such as creating a ticket for a bug in Jira.
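A minimal sketch of those three tiers follows, assuming hypothetical procedure and endpoint names. It only illustrates how an automation procedure might be classified and dispatched by complexity, not Zendesk’s actual implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Tier(Enum):
    QA = auto()       # simple Q&A answered from a knowledge source
    LOOKUP = auto()   # read-only call to another system, e.g. an ERP
    ACTION = auto()   # write to a downstream system, e.g. create a Jira ticket

@dataclass
class Procedure:
    name: str
    tier: Tier
    endpoint: Optional[str] = None  # only lookups and actions need a back-end endpoint

PROCEDURES = [
    Procedure("shipping_policy_question", Tier.QA),
    Procedure("order_status_lookup", Tier.LOOKUP, endpoint="erp.orders.get"),
    Procedure("file_bug_report", Tier.ACTION, endpoint="jira.issues.create"),
]

def dispatch(procedure: Procedure) -> str:
    """Route a procedure according to its complexity tier."""
    if procedure.tier is Tier.QA:
        return "answer from knowledge base"
    if procedure.tier is Tier.LOOKUP:
        return f"read from {procedure.endpoint}"
    return f"write to {procedure.endpoint} (may require human approval)"

for p in PROCEDURES:
    print(p.name, "->", dispatch(p))
```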
After deployment, the feedback loop begins. A key differentiator for us is our AI-based QA tool. Traditionally, QA teams would manually sample 5-10% of customer interactions. Now, the tool can score 100% of interactions – both human and AI – across categories like completeness, tone of voice and compliance risk. The scores go to the human bot manager to identify areas for improvement, whether it’s missing content or a flawed procedure. It’s an AI product in and of itself, helping to improve other AIs. We’re in this circular place where the solution to every AI problem seems to be more AI.
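As an illustration of scoring every interaction rather than a sample, here is a minimal sketch. The scoring function is a stub standing in for an LLM-based scorer, and the categories, thresholds and sample transcripts are assumptions, not the behaviour of Zendesk’s QA product.

```python
# QA categories mentioned in the interview: completeness, tone of voice, compliance risk.
CATEGORIES = ("completeness", "tone_of_voice", "compliance_risk")

def score_interaction(transcript: str) -> dict:
    """Stub for an LLM-based scorer; returns a 0-1 score per category."""
    # A real system would call a model here; this stub just flags one keyword.
    risky = "chargeback" in transcript.lower()
    return {
        "completeness": 0.9,
        "tone_of_voice": 0.8,
        "compliance_risk": 0.4 if risky else 0.9,
    }

def review_queue(interactions, threshold=0.7):
    """Score 100% of interactions and flag low scorers for the bot manager."""
    flagged = []
    for interaction in interactions:
        scores = score_interaction(interaction["transcript"])
        if min(scores.values()) < threshold:
            flagged.append({"id": interaction["id"], "scores": scores})
    return flagged

interactions = [
    {"id": 1, "transcript": "Customer asked about delivery times..."},
    {"id": 2, "transcript": "Customer threatened a chargeback over a refund..."},
]
print(review_queue(interactions))
```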
How is the adoption of agentic AI affecting the wider CX ecosystem, particularly for companies that rely on business process outsourcing (BPO) firms?
That’s a great question. A BPO firm is typically handling your tier-one, high-scale, low-complexity issues, which AI is really good at handling right out of the gate. I’ve talked to customers who are wondering, ‘Is my BPO vendor using AI to reduce their costs and increase their margins, or am I getting those cost savings?’ There’s the question of where the value from this technology accrues.
Many of our customers are considering winding down their BPO contracts and having AI agents handle their tier-one support. If you’ve gone down the BPO route, you’ve already clarified your cost structure in a way that makes it easy to see the value of replacing that labour cost with a technology solution.
It can be a harder decision for companies with large in-house teams, where you are dealing with internal dynamics rather than just ending a vendor contract. We see a lot more in-house teams leaning into tools like AI copilots to get the most from their existing teams.