The ‘human exception’ in AI governance: Are we serious or just ticking boxes?

AI regulations heavily scrutinise algorithms while blindly trusting the humans in the loop. To achieve systemic safety, we must subject both human and AI decision-makers to the same rigorous standards

In the rush to govern the age of agentic artificial intelligence (AI), a strange consensus has taken hold. Nearly every framework, policy paper, and regulatory discussion treats AI agents as the primary source of risk. They must be observable, controllable, auditable, and constrained. Humans, by contrast, are quietly ushered into the role of the wise observer: the final backstop, the moral anchor, the one element we can safely assume will keep the system honest.

This is not a minor design choice. It is the unspoken foundation of almost all current AI governance. And it is, increasingly, a fiction.

We have arrived at this place through a perfectly understandable combination of history, psychology, and institutional convenience. For centuries, human decision-making has been governed through layers of law, professional norms, corporate hierarchies, and social accountability. These systems are mature, politically settled, and emotionally reassuring. When a new technology arrives that can act at superhuman speed and scale, it is natural to focus regulatory energy on the newcomer. AI is the unknown variable. Humans are the known quantity. Why reopen settled questions about human fallibility when there is a shiny new risk demanding attention?

Layered on top is a deeper philosophical habit. We still place humans at the moral centre of the universe. Consciousness, free will, and intrinsic dignity are attributes we grant ourselves by default. AI is cast as a tool, an extension of human intent, not a peer controller. As long as a human remains in the loop or ultimately accountable, we reassure ourselves that the system is ethically sound. The evidence of our own limitations (cognitive biases, fatigue, emotional contagion, groupthink, conflicts of interest) is acknowledged in psychology textbooks but rarely permitted to disturb the hierarchy. Humans are treated as the gold standard. Everything else must prove it is not dangerous.

This mindset is reinforced by practical comfort. It feels cleaner to have a named person to blame when something goes wrong. It feels scalable. One human supervisor overseeing dozens of AI agents preserves the comforting illusion of control. And economically, it is convenient. Organisations deploy AI to cut costs and accelerate operations. Governance becomes the regulatory checkbox required to get the system live. Human processes are already governed or grandfathered in, so they largely escape the same quantitative scrutiny. Regulators, facing limited bandwidth and public pressure to be seen controlling the robots, naturally prioritise the novel threat.

The result is a subtle but profound inconsistency. In hybrid systems, the very systems that will dominate the next decade, we apply rigorous, measurable controls to silicon-based controllers while treating carbon-based ones as inherently trustworthy. We demand full transcripts, real-time drift detection, and kill switches from AI voice agents in call centres, yet the human operators handling the same calls are still evaluated largely through periodic quality assurance (QA) sampling and subjective supervisor judgment. When AI pushes too many escalations and humans become fatigued or inconsistent, the system-level instability is treated as an operational hiccup rather than a governance failure.

Let us call this what it is: in many cases, performative governance. Tick-the-box compliance that looks impressive on paper but fails to address the actual dynamics of hybrid decision-making. The uncomfortable truth is that both humans and AI are fallible controllers operating inside the same feedback loops. Safety does not reside in the material the controller is made of, carbon or silicon, but in the observability, controllability, stability, robustness, and performance of the loop itself.

Moving to the next level of systemic adoption will require us to do something genuinely difficult: dethrone the human exception. We must design governance that is entity-agnostic, judging every decision-maker (human, AI, or hybrid fleet) by the same measurable properties of the control loop. This means retrofitting legacy human processes with the same telemetry and quantitative risk scoring we now demand of AI. It means accepting real-time fatigue monitoring for humans, similar to what modern cars already do when they detect driver drowsiness and recommend breaks. It means accepting bias audits and authority limits that feel invasive and dehumanising to some. It means updating liability, insurance, and regulatory frameworks to price human controller risk symmetrically with AI risk, even when that slows deployment or raises labour costs.

None of this will be easy, politically or culturally. Many employees will resist the idea of being measured with the same rigour as AI systems. Unions will push back. Executives will worry about morale and optics. Regulators will be reluctant to revisit established labour laws. Yet the alternative is worse: scaling hybrid agentic systems under the illusion of safety, only to discover too late that the weakest link was the one we refused to examine.

The path forward is not to fear technology or to diminish human dignity. It is to extend the same rigorous, evidence-based discipline we are building for AI to every controller in the system, including ourselves. Only then can we move beyond performative governance to genuine systemic safety.

The question is not whether we will eventually make this shift. The question is how many avoidable failures we will tolerate before we admit that the material of the controller has never been the point. The loop is what matters. It is time we started governing it as such.

David R. Hardoon is a senior AI industry expert, former chief data officer of the Monetary Authority of Singapore, and author of the FEAT principles (fairness, ethics, accountability and transparency) on responsible AI