
Salesforce shifts focus from AI models to agentic AI

Rather than being preoccupied with large language models, Salesforce is now focused on building AI agents, with an eye on achieving what it calls ‘enterprise general intelligence’

Salesforce is moving away from deploying large language models (LLMs) and towards developing specialised, efficient and trustworthy artificial intelligence (AI) agents to solve specific business challenges.

In an interview with Computer Weekly, Silvio Savarese, executive vice-president and chief scientist at Salesforce, said the true value of AI for organisations is not in the underlying model, but in the capability of the agent built on top of it.

“The whole company is moving away from being obsessed with models and focusing more on agents,” he said.

“What really matters for customers is not the model, but the agent. The agent is what’s transforming the way we do business.”

To achieve this, Salesforce’s research team is breaking agents into four key components: memory, a reasoning “brain”, the user interface and function-calling capabilities. That work involves creating proprietary vector embeddings to improve how data is retrieved and organised (memory), enhancing the Atlas reasoning engine so it plans complex tasks more reliably and accurately (brain), and building advanced voice models for more natural conversations (interface).
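To make those components concrete, the snippet below is a minimal, generic Python sketch of how memory (embedding-based retrieval), a reasoning “brain” and a function-calling registry might be wired together in an agent. The class and function names are illustrative assumptions, not Salesforce code.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Memory:
    """Stores text alongside vector embeddings for later retrieval."""
    embed: Callable[[str], list[float]]               # assumed embedding function
    store: list[tuple[list[float], str]] = field(default_factory=list)

    def add(self, text: str) -> None:
        self.store.append((self.embed(text), text))

    def recall(self, query: str, k: int = 3) -> list[str]:
        qv = self.embed(query)
        def cos(a, b):                                # naive cosine similarity
            dot = sum(x * y for x, y in zip(a, b))
            na = sum(x * x for x in a) ** 0.5
            nb = sum(y * y for y in b) ** 0.5
            return dot / ((na * nb) or 1.0)
        ranked = sorted(self.store, key=lambda e: cos(qv, e[0]), reverse=True)
        return [text for _, text in ranked[:k]]

@dataclass
class Agent:
    memory: Memory
    brain: Callable[[str, list[str]], dict]           # reasoning engine: plans one step
    tools: dict[str, Callable]                        # function-calling registry

    def handle(self, user_message: str) -> str:
        context = self.memory.recall(user_message)
        plan = self.brain(user_message, context)      # e.g. {"tool": "lookup_order", "args": {...}}
        result = self.tools[plan["tool"]](**plan["args"])
        self.memory.add(f"{user_message} -> {result}")
        return str(result)                            # an interface layer would render this

A real interface layer would sit in front of handle(), turning the returned result into text or voice for the user.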

As for function-calling, Savarese, a computer scientist who joined Salesforce from a tenured professorship at Stanford University, said the research team has created its own large action models (LAMs) that are better at making application programming interface (API) calls and carrying out actions than LLMs are.

“LLMs are not trained to perform actions; they are trained to auto-complete text,” he said. “The difference with a LAM is that in the training process, we explicitly incorporate feedback with respect to how the environment reacts to a certain action. Because of this, the ability to do function calls is much more accurate.”
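The distinction Savarese draws can be sketched in pseudocode-style Python: an LLM’s training step only rewards predicting the next tokens, while a large action model’s training step also folds in how the environment reacted to the call it proposed. The model and environment methods below are assumed placeholders, not a real training framework.

def train_step_llm(model, text):
    # standard auto-complete objective: predict the next tokens of the text
    loss = model.next_token_loss(text)
    model.update(loss)

def train_step_lam(model, task, environment):
    # the model proposes a structured function call for the task
    call = model.propose_api_call(task)      # e.g. {"name": "create_return", "args": {...}}
    outcome = environment.execute(call)      # sandboxed execution returns success or an error
    # the environment's reaction becomes part of the training signal,
    # so well-formed, accurate calls are reinforced over time
    loss = model.action_loss(task, call, outcome)
    model.update(loss)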


Salesforce’s focus on agentic AI also addresses enterprise concerns about the cost and efficiency of deploying massive, general-purpose AI models, said Savarese. He argued that the right model should be matched to the right use case, with smaller, specialised models powering dedicated agents for narrower tasks, such as processing a product return or managing a password.

“We have seen that smaller models can achieve performance on par with the big models, as long as the use case is well specified,” he said, adding that smaller models are also less resource- and energy-intensive. That said, Savarese noted that larger models remain useful in complex workflows that require an agent orchestrator to coordinate the work of multiple agents.
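In practice, that matching can come down to a simple routing decision, as in the generic sketch below: narrow, well-specified tasks go to a small specialised model, while multi-step workflows fall to a larger orchestrator model that coordinates other agents. The task names and heuristic are illustrative assumptions.

NARROW_TASKS = {"product_return", "password_reset", "order_status"}

def route(task_type: str, subtasks: list[str]) -> str:
    if task_type in NARROW_TASKS and len(subtasks) <= 1:
        return "small-specialised-model"      # cheaper and less energy-intensive
    return "large-orchestrator-model"         # plans and delegates to multiple agents

print(route("password_reset", []))                                    # small-specialised-model
print(route("marketing_campaign", ["brief", "audience", "budget"]))   # large-orchestrator-model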

With agentic AI, Salesforce is gearing up for what it calls enterprise general intelligence (EGI), which is about pushing the boundaries of agentic capabilities in areas deemed critical to business, such as performing deep research instead of a single search or enabling long-horizon planning for complex marketing campaigns. This differs from the more consumer-focused goal of artificial general intelligence (AGI).

“The community has been obsessed with AGI, but for the enterprise, we don’t need to solve math problems or pass college examinations,” said Savarese. “What we need to do is to ensure we achieve a high level of accuracy and consistency for tasks that are useful to the enterprise. We need to make sure the customer agent gives you the right information at the right time.”

Underpinning all of Salesforce’s work in agentic AI is the Einstein Trust Layer, which integrates enterprise-grade guardrails such as data masking, zero data retention and audit trails. Savarese also stressed the importance of building agents that are aware of their own limitations, both to prevent hallucinations and to allow for human oversight.

“It’s critical that agents reach this level of awareness,” said Savarese. “If an agent is super confident in coming up with an answer, there’s a risk that it can hallucinate and bypass humans. And so, it’s important to find some way of enabling it to assess its confidence, where decisions that it’s less confident of can be checked by a human.”
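The snippet below is a hedged, generic illustration of the two safeguards described above: trust-layer-style guardrails that mask sensitive data and keep an audit trail, and confidence-aware escalation that routes answers the agent is less sure about to a human reviewer. It is not the Einstein Trust Layer API; the regular expression, the 0.8 threshold and the assumption that the agent returns an answer with a confidence score are all illustrative.

import re
import time

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask(text: str) -> str:
    # mask personal data before it reaches the model
    return EMAIL.sub("[MASKED_EMAIL]", text)

def guarded_respond(agent_fn, prompt: str, audit_log: list, threshold: float = 0.8) -> dict:
    masked = mask(prompt)
    answer, confidence = agent_fn(masked)    # assumed: agent returns (text, confidence score)
    audit_log.append({"ts": time.time(), "prompt": masked, "answer": answer, "confidence": confidence})
    if confidence < threshold:
        return {"action": "escalate_to_human", "draft": answer}   # a person checks before sending
    return {"action": "send", "answer": answer}

# example with a stand-in agent function: a low-confidence answer is flagged for review
log = []
print(guarded_respond(lambda p: ("Your case has been updated.", 0.55), "Contact me at jo@example.com", log))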
