
Why organisations must block AI browsers – for now

AI browsers can improve employee productivity through autonomous workflows, but their security flaws and data leakage risks can expose enterprises to critical cyber threats

Artificial intelligence (AI) browsers promise to reshape web browsing experiences, making them faster and more intuitive with broader and deeper reach. Yet in their current form they remain too risky for general adoption by most organisations.

AI browsers such as Perplexity Comet and OpenAI’s ChatGPT Atlas have the potential to transform how employees work, shifting from manual navigation to autonomous transactions with the web. By integrating agentic features directly into the web browsing experience, these tools can summarise content, draft emails and autonomously research and purchase products across online stores.

Unlike traditional browsers, AI browsers are beginning to leverage AI agents that not only interpret user intent but can also orchestrate end-to-end workflows. This redefines how individuals and enterprises interact with digital touchpoints – across the web, cloud services, and connected devices.

Despite these capabilities, AI browsers can introduce critical cyber security risks. They perform autonomous web navigation and transactions that can bypass traditional controls, which could result in significant data leakage, rogue or inaccurate agent actions, or credential abuse. Even more concerning is the potential for hidden vulnerabilities yet to be discovered in this nascent technology.

Until these tools mature and robust safeguards are proven, Gartner recommends organisations regard AI browsers as high-risk emerging technology that is still functionally flawed: block for now, then pilot only when the technology becomes more secure and reliable.

Sensitive data leakage to third-party AI services

An AI browser’s summarisation, autonomous navigation, interaction and task completion functions rely on cloud-based back ends. This creates a significant risk of sensitive data leaking beyond organisational boundaries – sometimes without user awareness or consent.

It’s important to conduct a security review of any back-end service powering an AI browser under consideration. If the service is deemed insecure for organisational use, block users from downloading or installing the browser.
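As a compensating control while central management and formal application control policies catch up, a simple endpoint check can flag AI browser processes. The sketch below is illustrative Python using the psutil library; the executable names are assumptions, not vendor-confirmed values.

```python
# Minimal endpoint check that flags running AI browser processes.
# The executable names below are illustrative assumptions; maintain
# the list from vendor documentation and your own software inventory.
import psutil

BLOCKED_EXECUTABLES = {
    "comet.exe",        # assumed name for Perplexity Comet on Windows
    "chatgpt-atlas",    # assumed name for OpenAI's ChatGPT Atlas
}

def find_blocked_browsers() -> list[str]:
    """Return the names of any blocklisted processes currently running."""
    hits = []
    for proc in psutil.process_iter(["name"]):
        name = (proc.info["name"] or "").lower()
        if name in BLOCKED_EXECUTABLES:
            hits.append(name)
    return hits

if __name__ == "__main__":
    for name in find_blocked_browsers():
        print(f"Blocked AI browser detected: {name}")
```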

If usage is approved, educate employees that any content they view could be sent to the cloud back end. Ensure they avoid keeping highly sensitive information open in browser tabs while using sidebar features such as summarisation or autonomous actions.

Where possible, experiment only with those tools the organisation already trusts. For example, if Microsoft Edge introduces more agentic functions, it will be an attractive option for enterprises that already store and process sensitive data using Microsoft 365 services.

Erroneous agentic transactions from employees

AI browsers tempt employees to offload repetitive and less interesting tasks, but this comes at a cost. Browsers may fill out forms incorrectly, complete mandatory training without true engagement, or even book incorrect flights due to inaccurate reasoning by large language models.

More critically, AI browsers can be deceived into navigating to phishing websites, potentially leading to the loss of corporate credentials and their subsequent abuse for initial access.

Mitigate these risks by piloting with users who demonstrate high AI literacy and only on low-risk tasks that don’t involve sensitive data or critical applications. It is important to remind them to closely monitor autonomous navigation by the AI browser. If it behaves unexpectedly or erratically, they should stop the task immediately.
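One practical guardrail for such pilots is to constrain agent-driven navigation to an explicit allowlist of approved, low-risk domains. The following Python sketch illustrates the idea; the domain names are placeholders, and how the check would be wired into a given AI browser’s agent loop will vary by product.

```python
# Guardrail that rejects agent navigation outside an approved allowlist.
# The allowlist below is a placeholder; populate it with the low-risk
# sites approved for your pilot.
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"intranet.example.com", "docs.example.com"}

def is_navigation_allowed(url: str) -> bool:
    """Allow only http(s) URLs whose host is on the pilot allowlist."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    host = (parsed.hostname or "").lower()
    # Match the exact host or any subdomain of an allowed domain.
    return any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS)

# Example: a lookalike phishing domain is rejected before the agent acts.
assert is_navigation_allowed("https://docs.example.com/policy")
assert not is_navigation_allowed("https://docs.example.com.evil.site/login")
```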

Additionally, update acceptable use policies and provide clear education on prohibited uses for AI browsers, such as those related to completing annual cyber security awareness training.

Default settings prioritise user experience over cyber security

AI browsers are typically designed with default settings that optimise for end-user experience, rather than cyber security best practices. For example, many AI browsers retain usage data by default and may store user information for extended periods.

Develop clear security specifications for any AI browser under consideration and educate pilot users about updating settings to align with enterprise security requirements until centralised management becomes available.

For organisations already piloting an AI browser, disable any data retention features so the provider cannot use searches and interactions to improve its models. Instruct users to regularly delete stored histories or memories to minimise the risk of data leakage.
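Where vendors do not yet offer automatic retention limits, a scheduled clean-up script can approximate them. The Python sketch below assumes hypothetical profile paths; substitute the locations each vendor actually documents.

```python
# Sketch of a scheduled clean-up that deletes AI browser history and
# "memory" stores older than a retention window. The profile paths are
# hypothetical; replace them with the vendor-documented locations.
import time
from pathlib import Path

RETENTION_DAYS = 7
PROFILE_DIRS = [
    Path.home() / ".example-ai-browser" / "history",   # hypothetical path
    Path.home() / ".example-ai-browser" / "memories",  # hypothetical path
]

def purge_old_files(directory: Path, max_age_days: int) -> int:
    """Delete files older than max_age_days; return how many were removed."""
    cutoff = time.time() - max_age_days * 86400
    removed = 0
    for path in directory.rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            removed += 1
    return removed

for directory in PROFILE_DIRS:
    if directory.exists():
        count = purge_old_files(directory, RETENTION_DAYS)
        print(f"{directory}: removed {count} files")
```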

Additionally, review and make informed decisions about whether features such as email assistants or workflow automation tools may connect to organisational email systems (such as Google Gmail or Microsoft Outlook) or integrate with other enterprise applications.
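In Microsoft 365 environments, one way to support such a review is to enumerate existing delegated OAuth consent grants through the Microsoft Graph API and flag unfamiliar client applications. The Python sketch below uses the Graph v1.0 oauth2PermissionGrants endpoint; token acquisition and the interpretation of which clients are "unfamiliar" are left as assumptions.

```python
# Sketch that lists delegated OAuth consent grants in a Microsoft 365
# tenant so unfamiliar AI-browser integrations can be reviewed.
# ACCESS_TOKEN acquisition (e.g. via client credentials) is assumed and
# out of scope here; it must carry suitable directory read permissions.
import requests

ACCESS_TOKEN = "..."  # assumed: obtained with suitable Graph permissions
GRAPH_URL = "https://graph.microsoft.com/v1.0/oauth2PermissionGrants"

def list_consent_grants() -> list[dict]:
    """Fetch all delegated permission grants, following pagination."""
    grants, url = [], GRAPH_URL
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        grants.extend(data.get("value", []))
        url = data.get("@odata.nextLink")  # pagination cursor
    return grants

for grant in list_consent_grants():
    # 'scope' lists the delegated permissions (e.g. Mail.Read) held by
    # the client application identified by 'clientId'.
    print(grant["clientId"], grant.get("consentType"), grant.get("scope"))
```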

Critical design flaws and vulnerabilities

As with any emerging technology, AI browsers are susceptible to design flaws and critical vulnerabilities.

For example, a critical vulnerability in OpenAI’s ChatGPT Atlas was found days after its launch, potentially allowing unauthorised access to user accounts. Users were advised to restrict usage to low-risk scenarios only and avoid any experimentation involving sensitive or high-risk data, such as personal finance information. Organisations had to enforce rapid patching and prohibit new downloads or installations until the vulnerability was resolved.

The rapid evolution of AI browsers brings both exciting potential and significant risks to enterprises. Until these tools mature and governance frameworks are in place, organisations should block AI browsers to protect sensitive data and maintain control over digital workflows.

Prioritising security now will enable responsible adoption when the technology is ready for enterprise use.

Dennis Xu is vice-president analyst at Gartner focusing on AI and cloud security topics. He will be speaking at the upcoming Gartner Security & Risk Management Summit in Sydney on 16-17 March 2026.
