AI workflows – Nightfall: Shadow AI workflows eat security from the inside

This is a guest post for the Computer Weekly Developer Network written by Rohan Sathe in his capacity as CEO and co-founder at Nightfall.

Nightfall AI is known for its agentic data loss prevention platform that reduces both human and AI risk. The company aims to provide comprehensive visibility into data exfiltration across AI applications, SaaS, endpoints and email. The software helps organisations identify and prevent sensitive data exposure, implement automated governance policies and maintain compliance across their digital infrastructure.

Sathe writes in full as follows…

IT departments are conducting business as usual. They are rolling out new AI applications to keep employees productive. They are testing managed pilots with the newest cutting-edge platforms. They are doing all of this with extra care to make sure that whatever new system they introduce is secure.

But despite best efforts to control who uses what and when, employees are often adopting AI on their own without letting IT know.

With 75% of knowledge workers already using AI at work today, and nearly half having started in just the last six months, ‘shadow AI workflows’ are being created at massive scale – and most organisations have no idea.

A developer starts using ChatGPT to debug code. A marketing manager uploads customer data to Claude for campaign analysis. An analyst builds a forecasting process around an AI spreadsheet tool. None of this is malicious; people are just trying to get their jobs done faster. But each workflow creates data exposure risks that exist completely outside normal security controls.

Here’s what most security teams miss: when that developer copies database connection strings from Slack and pastes them into ChatGPT, traditional DLP (data loss prevention) sees nothing – the data never touches the network perimeter. When the marketing manager drags a customer spreadsheet directly from Google Drive into Claude’s interface, legacy systems only detect the initial download, not the AI upload. 

Nightfall’s platform with API integrations, endpoint agents and browser extensions catches both scenarios before submission, redacting credentials while preserving debugging context and blocking file uploads that contain PII while offering sanitised alternatives.
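To make that pre-submission step concrete, here is a minimal sketch of what such a check might look like – purely illustrative, not Nightfall's detection engine, and using simple regular expressions where a production system would use trained detectors; every name and pattern below is hypothetical:

```typescript
// Illustrative pre-submission redaction (hypothetical patterns, not a
// vendor's detectors): scan outbound text for obvious secrets and PII,
// substitute placeholders and report what was found.

type Finding = { kind: string; match: string };

const PATTERNS: Record<string, RegExp> = {
  // e.g. postgres://user:password@host:5432/db
  connectionString: /\b\w+:\/\/[^\s:@]+:[^\s@]+@[^\s/]+\/\S+/g,
  // e.g. AWS access key IDs
  awsAccessKey: /\bAKIA[0-9A-Z]{16}\b/g,
  email: /\b[\w.+-]+@[\w-]+\.[\w.]+\b/g,
};

function redactBeforeSubmit(text: string): { sanitised: string; findings: Finding[] } {
  const findings: Finding[] = [];
  let sanitised = text;
  for (const [kind, pattern] of Object.entries(PATTERNS)) {
    sanitised = sanitised.replace(pattern, (match) => {
      findings.push({ kind, match });
      return `[REDACTED:${kind}]`; // the surrounding debugging context is untouched
    });
  }
  return { sanitised, findings };
}

// A prompt containing a live connection string is sanitised before it is sent.
const { sanitised, findings } = redactBeforeSubmit(
  "Why does this query fail? conn = postgres://svc:Hunter2@db.internal:5432/orders"
);
console.log(sanitised); // "... conn = [REDACTED:connectionString]"
console.log(findings);  // [{ kind: "connectionString", match: "postgres://..." }]
```

The placeholder substitution makes the same point as above: the credential disappears, but the debugging context around it survives.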

The issue is the lack of visibility into these shadow AI workflows, which account for more data leaks than most people realise.

Platforms miss shadow AI 

Shadow AI workflows are dangerous because of how they break traditional security assumptions.

  • First, AI tools are designed to be frictionless – employees can build entire workflows around them without involving IT at all.
  • Second, existing DLP systems weren’t built for this – they can’t understand how AI workflows process and transform data across multiple touchpoints.
  • Third, employees don’t realise the risk – using AI tools feels as safe as using Google Search.

The attack surface is expanding in ways we haven’t seen before.

Bad actors are creating AI tools specifically designed to integrate into common business workflows, then marketing them through social channels and developer communities. Employees adopt these tools and build workflows around them, unknowingly creating direct data pipelines to malicious actors.

Consider the most sophisticated threat: a developer clones your valuable codebase to their personal GitHub account through command-line git operations. Legacy DLP misses this completely – there’s no browser activity, no copy-paste, just process-level commands. Advanced solutions monitor git operations and detect when proprietary repositories are being transferred outside corporate boundaries, regardless of the technical method used.
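As a rough illustration (not a description of any vendor's implementation), the remote-host check an endpoint agent might apply before allowing a push is small – the allowlisted hosts below are hypothetical:

```typescript
// Illustrative check an endpoint agent might run before allowing a push:
// flag any git remote whose host is not on the corporate allowlist.
// Hypothetical hosts – replace with your own.

const CORPORATE_GIT_HOSTS = new Set(["github.example-corp.com", "gitlab.internal"]);

function remoteHost(remoteUrl: string): string | null {
  // Handles both https://host/org/repo.git and git@host:org/repo.git forms.
  const ssh = remoteUrl.match(/^git@([^:]+):/);
  if (ssh) return ssh[1];
  try {
    return new URL(remoteUrl).hostname;
  } catch {
    return null;
  }
}

function isRiskyPush(remoteUrl: string): boolean {
  const host = remoteHost(remoteUrl);
  return host === null || !CORPORATE_GIT_HOSTS.has(host);
}

console.log(isRiskyPush("git@github.com:someone/personal-copy.git"));     // true – personal GitHub
console.log(isRiskyPush("https://github.example-corp.com/org/repo.git")); // false – corporate host
```

The hard part in practice is not this check but seeing the git activity at all, since it happens entirely at the process level rather than in the browser.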

The landscape has shifted dramatically since we started tracking shadow AI workflows. We began with SaaS scanning around 2019-2020, then generative AI guardrails became critical starting in 2023.

Now we’re seeing this urgent need for autonomous, intelligent threat prevention that can scale with employees rapidly building shadow AI workflows across every department.

Stop playing whack-a-mole


You can’t solve this by banning AI tools. That just drives workflows underground. We learned this lesson with cloud adoption; blocking tools led to more shadow IT, not less.

From working with enterprises on this problem, two technical approaches make the biggest difference. First is catching sensitive content before it gets sent to AI tools. Second is AI-native detection that understands data context, not just patterns.

The most effective deployments use browser extensions and endpoint agents that scan everything before it leaves the organisation. 

Prompts, clipboard activity, file uploads – all of it gets checked in real time. So when someone tries to paste source code into ChatGPT, the system can block or redact it before it’s sent. We also trace where files came from, so security teams know if something originated in a corporate system. This typically means deploying on macOS and Windows with extensions for Chrome, Safari and Firefox. The extensions handle the before-you-send checks, while endpoint agents monitor clipboard activity and file uploads across all shadow AI workflows.
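A stripped-down sketch of that before-you-send behaviour in a content script might look like the following – hypothetical code, reusing the `redactBeforeSubmit` helper from the earlier sketch rather than any real extension API:

```typescript
// Sketch of a browser-extension content script (hypothetical, not a real
// extension): intercept paste events on an AI chat page, run the text
// through a local redaction check and substitute the sanitised version
// before it reaches the prompt box.

document.addEventListener(
  "paste",
  (event: ClipboardEvent) => {
    const pasted = event.clipboardData?.getData("text/plain") ?? "";
    const { sanitised, findings } = redactBeforeSubmit(pasted); // helper from the earlier sketch
    if (findings.length === 0) return; // nothing sensitive – let the paste through untouched

    // Swap in the sanitised text instead of the original clipboard content.
    event.preventDefault();
    const target = event.target as HTMLElement;
    if (target instanceof HTMLTextAreaElement || target instanceof HTMLInputElement) {
      target.setRangeText(sanitised, target.selectionStart ?? 0, target.selectionEnd ?? 0, "end");
    }
    // A real extension would also handle contenteditable prompt boxes and
    // report the findings for lineage tracking and triage.
    console.warn(`Redacted ${findings.length} sensitive item(s) before submission`);
  },
  true // capture phase, so the check runs before the page's own handlers
);
```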

What separates effective systems from noise generators is continuous learning. The best solutions understand content context, learn from security team decisions and identify safe workflow patterns. This cuts false positives dramatically compared to legacy DLP, which alerts on everything.

How to secure shadow AI workflows 

These 7 technical capabilities define the difference between comprehensive shadow AI protection and legacy solutions that leave critical gaps:

  1. Foolproof interception. Solutions must intercept clipboard operations and file uploads before submission, not simply rely on network monitoring that misses encrypted HTTPS traffic to AI platforms.
  2. Comprehensive exfiltration vector coverage. Monitor browser file uploads, clipboard operations, git commands, USB transfers, cloud sync activity, outgoing emails and desktop application data flows – not just web browser activity.
  3. AI-native content classification. Pre-trained LLMs and computer vision models that understand data context and intent, distinguishing personal from corporate AI tool usage and test data from real customer information, and achieving 95% precision without months of tuning.
  4. Complete data lineage tracking. Trace sensitive content from corporate SaaS apps to its final destination, whether that is a shadow AI app or any other risky domain, maintaining visibility across the entire data movement chain.
  5. Intelligent SOC agent. Built-in AI SOC agent for natural language incident investigation, automated risk scoring, continuous learning from security team decisions and persistent memory of complex cases – reducing triage time from days to minutes.
  6. Lightweight deployment architecture. Browser extensions and endpoint agents that deploy via MDM in minutes, consume minimal system resources and require zero network infrastructure changes or certificate management.
  7. Real-time prevention over detection. Block data exposure before transmission occurs, rather than alerting after sensitive information has already been processed by external AI platforms (a minimal sketch of this decision step follows this list).
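To illustrate that last point, a minimal decision step might look like this – hypothetical code, reusing the `Finding` type from the first sketch and mirroring the behaviour described earlier (redact pasted credentials in place, block risky file uploads and offer a sanitised copy):

```typescript
// Hypothetical decision step: given the findings from a pre-submission scan,
// choose what happens before anything leaves the endpoint.

type Action = "allow" | "redact" | "block";

function decide(channel: "prompt" | "fileUpload", findings: Finding[]): Action {
  if (findings.length === 0) return "allow";
  // File uploads containing sensitive data are blocked outright; a sanitised
  // copy can be offered instead.
  if (channel === "fileUpload") return "block";
  // Pasted prompts are redacted in place so the working context survives.
  return "redact";
}

console.log(decide("prompt", [{ kind: "connectionString", match: "postgres://…" }])); // "redact"
console.log(decide("fileUpload", [{ kind: "email", match: "jane@example.com" }]));    // "block"
```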

Getting this right means companies can carry out their AI workflows safely. Getting it wrong means flying blind while employees build critical business processes around unsecured AI tools.

The question isn’t whether your organisation has shadow AI workflows – everyone does. The question is whether you know about them and can control them. While IT focuses on managed AI pilots, employees are already building critical business processes around AI tools they found online. You can either secure these workflows or ignore them until something bad happens.