What it takes to secure agentic commerce
With AI agents increasingly acting as digital concierges for shoppers, verifying bot identities, securing the APIs they rely on, and detecting anomalous behaviour will be key to safeguarding automated transactions, according to Akamai
As more consumers prepare to use personal artificial intelligence (AI) agents to research and purchase goods on their behalf, ensuring these digital assistants don’t go rogue and embark on a shopping spree has become a security concern in the e-commerce industry.
According to data from Akamai, traffic from AI agents and bots has nearly tripled over the past year. During a two-month period, the commerce industry saw over 25 billion AI bot requests, with China, India and Singapore accounting for most of the traffic.
Against this backdrop, online merchants that have focused on fending off malicious bot traffic will need to rethink their security strategies, said Reuben Koh, director of security technology and strategy at Akamai.
“Merchants have been so used to the old thinking that bots and automation are the enemy,” said Koh. “Now, they are in a split-brain scenario – they can’t block everything anymore because fewer humans are visiting their sites, but they also can’t open the floodgates.”
To mitigate security risks for merchants looking to accommodate more shopping agents, and to prevent those agents from being compromised by threat actors, Akamai has partnered with Visa to secure agentic transactions. The tie-up combines Visa’s Trusted Agent Protocol (TAP) with Akamai’s behavioural intelligence.
“Think of Visa as the passport provider and Akamai as the immigration officer,” Koh said, adding that Visa will verify the identity of the agent and the human behind it, and cryptographically sign the intent of the transaction. Akamai then acts as border control by analysing the agent’s origin and behaviour in real time.
“A good agent verified by Visa that comes from a bad location, such as a compromised server, will be blocked by us,” Koh explained. “And if a verified agent enters with the stated intent to browse but suddenly begins scraping data or testing credit card numbers, Akamai’s behavioural intelligence will pick that up and stop it.”
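In pseudocode terms, the layered model Koh describes could be sketched as follows. This is purely illustrative: the class, field names and thresholds are invented for the example, and do not reflect Visa’s or Akamai’s actual systems.

```python
# Illustrative sketch of the "passport plus border control" model described
# above: verify identity first, then origin, then whether behaviour matches
# the declared intent. All names and thresholds here are invented.
from dataclasses import dataclass, field


@dataclass
class AgentRequest:
    agent_id: str
    signature_valid: bool      # stands in for the cryptographic intent check
    origin_reputation: float   # 0.0 (known-bad host) to 1.0 (clean)
    declared_intent: str       # e.g. "browse"
    observed_actions: list = field(default_factory=list)


def admit(req: AgentRequest) -> bool:
    # Stage 1: an invalid "passport" is rejected outright.
    if not req.signature_valid:
        return False
    # Stage 2: a verified agent arriving from a bad origin is still blocked.
    if req.origin_reputation < 0.5:
        return False
    # Stage 3: behaviour must match the stated intent - an agent that said
    # it would browse but starts scraping or testing cards is stopped.
    suspicious = {"scrape_data", "test_card_numbers"}
    if req.declared_intent == "browse" and suspicious & set(req.observed_actions):
        return False
    return True
```

The point of the ordering is that each stage only runs if the previous one passed, so a forged identity never reaches the behavioural checks.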
While Visa verifies identity and Akamai secures transport and behaviour, model creators must do their part to ensure agents do not deviate from user instructions, Koh said.
“The agent’s ability to execute accurately comes from the agent provider,” he said. “Their responsibility is to ensure the agent doesn’t hallucinate – for example, if I ask it to buy a pair of sneakers, it doesn’t buy two sacks of rice because it feels I need them.”
Open standards key in agentic commerce
Akamai’s partnership with Visa was made possible by its early adoption of the Web Bot Authentication (WBA) standard, which, along with the Universal Commerce Protocol developed by Google and Shopify, serves as the foundation for the emerging agentic economy.
Because WBA is an open standard – likely to be ratified by the Internet Engineering Task Force – it prevents vendor lock-in and allows Akamai to support multiple payment ecosystems simultaneously. While proprietary layers like Visa’s TAP add specific identity features, WBA serves as the anchor standard for authenticating bots and providing additional information about their operators to websites.
“If there’s another payment provider that comes up with their own standard that’s based on WBA, then it makes it easy and quick for us to integrate our stuff with them,” he explained.
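Conceptually, header-based bot authentication of this kind works by having the bot operator sign request metadata, which the receiving site verifies against the operator’s published key. The sketch below illustrates the shape of that exchange only; real proposals in this space use asymmetric HTTP message signatures, and HMAC is used here purely as a self-contained stand-in.

```python
# Conceptual sketch of signed bot requests: the operator signs request
# metadata, and the site re-derives and compares the signature. HMAC with
# a shared demo key stands in for the asymmetric signatures real standards
# use; nothing here is the actual WBA wire format.
import hashlib
import hmac

OPERATOR_KEY = b"shared-demo-key"  # stand-in for real operator key material


def sign_request(method: str, path: str, agent_name: str) -> str:
    msg = f"{method} {path} agent={agent_name}".encode()
    return hmac.new(OPERATOR_KEY, msg, hashlib.sha256).hexdigest()


def verify_request(method: str, path: str, agent_name: str, sig: str) -> bool:
    expected = sign_request(method, path, agent_name)
    # Constant-time comparison avoids leaking how many characters matched.
    return hmac.compare_digest(expected, sig)
```

Because the signature covers the request itself, a signature minted for one request cannot be replayed against a different path or agent name.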
Koh noted that this architectural decision enables Akamai to expand its partnerships beyond its current arrangement with Visa, adding that the company is in active negotiations with other major payment providers and fintech startups.
Beyond preventing errors, agent providers must also implement guardrails to prevent financial damage, such as an agent going on a spending spree or maxing out a user’s credit card. “We need to ensure the agent doesn’t go on a rampage,” Koh added. “All three of us – Visa, Akamai and the agent provider – need to work in tandem.”
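One simple form such a guardrail could take is a pair of user-set caps, per transaction and per session, with anything over either limit refused. The limits and class below are invented for illustration, not any provider’s actual implementation.

```python
# Illustrative spending guardrail of the kind an agent provider might
# enforce: a per-transaction cap and a cumulative session cap, both set
# by the user. All figures are invented.
class SpendingGuardrail:
    def __init__(self, per_txn_limit: float, session_limit: float):
        self.per_txn_limit = per_txn_limit
        self.session_limit = session_limit
        self.spent = 0.0

    def authorise(self, amount: float) -> bool:
        # Refuse any single purchase over the per-transaction cap.
        if amount > self.per_txn_limit:
            return False
        # Refuse anything that would push total spend past the session cap.
        if self.spent + amount > self.session_limit:
            return False
        self.spent += amount
        return True
```

A denied purchase leaves the running total untouched, so one oversized request cannot eat into the budget for legitimate ones.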
That includes securing the substrate that underpins agentic AI: the application programming interfaces (APIs) that agents rely on to gain access to systems and data.
“For AI to interact with the real world – buying things, creating documents, or booking airline tickets – it requires APIs,” Koh said. “But from an attacker’s perspective, I don’t need to attack the agent; I just need to target the APIs executing the instructions.”
He warned of scenarios involving goal hijacking, where an attacker manipulates the API data pipeline to alter an agent’s objectives – for example, changing an instruction to buy $200 worth of groceries into a transaction for a $5,000 luxury item.
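A basic defence against that kind of hijacking is to pin the user’s original intent and check every executed order against it. The sketch below uses a plain hash digest as a stand-in for the cryptographic intent-signing described earlier; field names are illustrative.

```python
# Sketch of pinning an agent's stated intent so tampering further down the
# API pipeline is detectable. A SHA-256 digest stands in for a real
# cryptographic signature; the field names are invented for the example.
import hashlib
import json


def intent_digest(intent: dict) -> str:
    # Canonical serialisation so signer and verifier hash identical bytes.
    payload = json.dumps(intent, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


def order_within_intent(order: dict, intent: dict, pinned: str) -> bool:
    # Reject if the intent record itself was altered in transit...
    if intent_digest(intent) != pinned:
        return False
    # ...or if the executed order breaches the user's stated constraints.
    return (order["category"] == intent["category"]
            and order["amount"] <= intent["max_amount"])
```

Under this check, swapping a $200 grocery order for a $5,000 luxury item fails on the constraints, and rewriting the intent record to permit it fails on the digest.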
With API transactions set to balloon as automated agents begin shopping at scale, Koh believes API security will become the weakest link. “We’re already seeing API attacks on traditional applications; the growing number of agentic transactions is only going to compound the problem.”
Making matters worse is excessive agency – an entry in the Open Web Application Security Project’s (OWASP) top 10 list of large language model (LLM) security vulnerabilities – where developers, under pressure to ship products quickly, grant AI agents more permissions than necessary.
“We are dealing with non-human identities at scale – not 20 employees but 500,000 agents,” Koh noted. “If an agent has too many permissions and suffers from hallucination or bias, the guardrails may no longer be effective.”
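The standard countermeasure is least privilege: every agent identity carries an explicit allow-list of actions, and anything outside it is denied by default. A minimal sketch, with invented agent and action names:

```python
# Minimal least-privilege sketch for agent permissions: each agent gets an
# explicit allow-list of actions, and everything else is denied by default.
# Agent and action names are invented for illustration.
AGENT_SCOPES = {
    "shopper-agent": {"search_products", "create_order"},
}


def is_allowed(agent: str, action: str) -> bool:
    # Unknown agents get an empty scope, so the default answer is "no".
    return action in AGENT_SCOPES.get(agent, set())
```

Default-deny matters at the scale Koh describes: with hundreds of thousands of non-human identities, any permission not explicitly granted should simply not exist.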
Read more about AI governance and security in APAC
- Computer Weekly speaks to Keeper Security’s leadership on how identity and access management systems are becoming unified identity platforms capable of securing both human and machine identities.
- Singapore has launched a governance framework for agentic AI systems, which are capable of independent reasoning and action, to address the growing security and operational risks posed by AI agents.
- Dataiku’s field chief data officer for Asia-Pacific and Japan discusses how implementing AI governance can accelerate innovation while mitigating the risks of shadow AI.
- As AI agents are given more power inside organisations, Exabeam’s chief AI officer argues they must be monitored for rogue behaviour just like their human counterparts.
