The rise (or not) of AI ethics officers

Job titles vary, but the principles shouldn’t – put AI ethics in the org chart, fund it and give it authority, so that good intentions turn into trust, accountability and good business

Four years after the World Economic Forum (WEF) called for chief artificial intelligence (AI) ethics officers, 79% of executives say AI ethics is important to their enterprise-wide AI approach.

Outside of large technology vendors, however, the role hasn’t taken off. Is centralising that responsibility the best approach, or should organisations look to other governance models? And if you do have someone leading AI ethics, what will they actually be doing? 

For one thing, enterprises tend to shy away from calling it ethics, says Forrester vice-president Brandon Purcell: “Ethics can connote a certain morality, a certain set of norms, and multinational companies are often dealing with many different cultures. Ethics can become a fraught term even within the same country where you have polarised views on what is right, what is fair.”

Salesforce may be urging non-profits to create ethical AI strategies, but enterprises talk about responsible AI and hire for that: in the US, as of April 2025, job postings on LinkedIn for “responsible AI use architects” are up 10% year on year (YoY).

But what most organisations are looking for is AI governance, Purcell says, adding: “Some companies are creating a role for an AI governance lead; others are rightfully looking at it as a team effort, a shared responsibility across everyone who touches the AI value chain.”

Organisations want a person or a team in charge of managing AI risks, agrees Phaedra Boiniodiris, global leader for trustworthy AI at IBM Consulting, “making sure employees and vendors are held accountable for AI solutions they’re buying, using or building”. She sees roles such as AI governance lead or risk officer ensuring “accountability for AI outputs and their impact”.

Whatever the title, Bola Rotibi, chief of enterprise research at CCS Insight, says: “The role is steeped in the latest regulations, the latest insight, the latest trends – they’re going to industry discussions, they’re the home of all of that knowledge around AI ethics.”

But Gartner Fellow and digital ethicist Frank Buytendijk cautions against siloing what should be a management responsibility: “The result should not be the rest of the organisation thinking the AI ethics officer is responsible for making the right ethical decisions.

“AI is only one topic: data ethics are important too. Moving forward, the ethics of spatial computing may be even more impactful than AI ethics, so if you appoint a person with a strategic role, a broader umbrella – digital ethics – may be more interesting.”

Protecting more than data

EfficientEther founder Ryan Mangan believes that, so far, a dedicated AI ethics officer remains a unicorn: “Even cyber security still struggles for a routine board-level seat, so unless an ethics lead lands squarely in the C-suite with real veto power, the title risks being just another mid-tier badge, more myth than mandate.”

A recent survey for Dastra suggests many organisations (51%) view AI compliance as the purview of the data protection officer (DPO), although Dastra co-founder Jérôme de Mercey suggests the role needs to expand. “The most important question with AI is ‘What is the purpose and how do I manage risk?’, and that’s the same question for data processing.”

Both roles involve regulatory and technical questions, communication across the organisation, and the delivery of strong governance. For de Mercey, the General Data Protection Regulation’s (GDPR) concepts of fundamental rights are also key for AI ethics: “The economic and societal risk is always [pertinent] because there are people with personal data and DPOs are used to assessing this kind of risk.”

A standalone AI ethics officer isn’t feasible for smaller businesses, says Isabella Grandi, associate director of data strategy and governance at NTT Data. “In most places, responsibility for ethical oversight is still added to someone else’s job, often in data governance or legal, with limited influence. That’s fine up to a point, but as AI deployments scale, the risks get harder to manage on the side.”

DPOs, however, are unlikely to have enough expertise in AI and data science, Purcell argues. “Of course, there is no AI without data. But at the same time, today’s AI models are pre-trained on vast corpuses of data that don’t reside within a company’s four walls. [They may not know the right questions to ask] about the data that was sourced to use those models, about how those models were evaluated, about intended uses, and limitations and vulnerabilities of the models.”

Data science expertise isn’t enough either, he notes. “If we define fairness in terms of ‘the most qualified candidate gets the job’, that’s great, but we also know that there are all sorts of problems with the data used to determine who is most qualified. Maybe we have to look at the distribution of different types of applicants and acceptance rates given an algorithm. Your rank-and-file data scientist doesn’t necessarily know to ask those sorts of questions, whereas somebody who has been trained in ethics does, and can help to find the right balance for your organisation.”
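To make that concrete, here is a minimal sketch, in Python, of the kind of acceptance-rate comparison Purcell describes. The group labels, decisions and 0.8 threshold (the US “four-fifths” rule of thumb, one fairness heuristic among many) are illustrative assumptions, not anything prescribed in the article.

    # Hypothetical sketch: compare acceptance rates across applicant groups.
    # Group labels and decision data are made-up illustrations.
    from collections import defaultdict

    def acceptance_rates(decisions):
        """decisions: iterable of (group, accepted) pairs -> rate per group."""
        totals, accepted = defaultdict(int), defaultdict(int)
        for group, was_accepted in decisions:
            totals[group] += 1
            accepted[group] += int(was_accepted)
        return {g: accepted[g] / totals[g] for g in totals}

    def disparate_impact(rates):
        """Ratio of lowest to highest group acceptance rate; the US
        'four-fifths' heuristic flags ratios below 0.8 for scrutiny."""
        return min(rates.values()) / max(rates.values())

    decisions = [("group_a", True), ("group_a", False), ("group_a", True),
                 ("group_b", False), ("group_b", False), ("group_b", True)]
    rates = acceptance_rates(decisions)
    print(rates, disparate_impact(rates))  # ratio of 0.5 -> flag for review

A ratio of 0.5 here would prompt exactly the question Purcell raises: whether the data used to decide who is “most qualified”, rather than the candidates themselves, is driving the gap.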

The responsible AI team very often does not have somebody who is certified in AI ethics
Marisa Zalabak, Global Alliance for Digital Education and Sustainability

The remit for this role is distinct from the concerns of the DPO – or the CIO or CISO, says Gopinath Polavarapu, chief digital and AI officer at Jaggaer: “Those leaders safeguard uptime, cyber defence and lawful data use. The AI ethics lead wrestles with deeper questions – is this decision fair? Is it explainable? Does it reinforce or reduce inequality?”

Boiniodiris adds more questions: “Does this application of AI align with our company values? Who could be adversely affected? Do we fully understand the context of the data being used for this AI and was it gathered with consent? Have we communicated how this AI should be used? Are we being transparent?”

Asking what human values AI should reflect is a reminder that the role needs legal, social science, data science and ethics expertise.

“Responsible AI teams are lawyers, sometimes they’re researchers or psychologists – the responsible AI team very often does not have somebody who is certified in AI ethics,” says Marisa Zalabak, co-founder of the Global Alliance for Digital Education and Sustainability.

With more than 250 standards for ethical AI and another 750 in progress, they will need training – Zalabak recommends the Center for AI and Digital Policy while organisations build their own resources – that covers more than “the two things people think about when they think of AI ethics – bias and data privacy – because there’s a huge range of things, including multiple psychosocial impacts”.

The power to say no

While they have access to decision-makers, neither architects nor DPOs are senior enough to have sufficient impact or to have visibility of new projects early enough. AI ethics needs to be involved at the design stage.

“The role must sit with executive leadership – reporting to the CEO, the risk committee, or directly to the board – to pause or recalibrate any model that jeopardises fairness or safety,” Polavarapu adds.

A responsible AI lead should be at least at the level of vice-president, Purcell agrees: “Typically, if there’s a chief data officer, they sit within that organisation. If data, analytics and AI are owned by the CIO, they sit within that organisation.”

As well as visibility, they need authority. “From the very start of when an AI project is conceived, that person is involved to elucidate what should be the responsibility requirements for this, in some cases, highly consequential, high-risk use case,” says Purcell.

“They are responsible for bringing in additional stakeholders who will be impacted, to identify where potential harms might occur. They help to create and ensure adherence to best practices in the development of the system, including monitoring and observability. And then, finally, they have a say in the go/no-go evaluation of the system: does it meet the requirements we’ve set out in the beginning?”

That will involve bringing in additional stakeholders with diverse perspectives and backgrounds to stress-test the concept of the AI system, identify where it could go wrong, and red-team those edge cases.
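As a simple illustration of the final gate Purcell describes – a sketch only, with hypothetical requirement names – the go/no-go decision amounts to re-checking, before release, the requirements agreed when the project was conceived:

    # Hypothetical go/no-go gate: requirements set at project conception
    # are re-evaluated before the system ships. Names are illustrative.
    requirements = {
        "bias_audit_passed": True,
        "monitoring_in_place": True,
        "impacted_stakeholders_consulted": False,
    }

    failed = [name for name, met in requirements.items() if not met]
    decision = "GO" if not failed else "NO-GO"
    print(f"{decision}: unmet requirements -> {failed}")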

“To a certain extent, it’s no different to what we’ve had with other new officers like ESG officers or heads of sustainability who keep up with specific regulations surrounding that capability,” says Rotibi. “The AI ethical officer, like any other officer, should be part of a governing body that looks overall at the company’s posture, whether that be around data privacy, or whether that be around AI, and asks ‘What’s the exposure? What are the vulnerabilities for an organisation?’”

The value of an AI ethics officer lies not just in their expertise and their ability to communicate, but also in the authority they’re given. Rotibi believes that needs to be structural: “You give them governance authority and escalation channels, you give them the ability to do decision impact assessments, so that there is a level of explainability in whatever they say. And you have consequences – because if you don’t have those structures in place, it becomes wishy-washy advisory.”

Boiniodiris agrees: “AI governance teams can pull together committees, but if no one shows up to the meetings, then progress is impossible. The message that this work matters has to come from the enterprise’s highest levels, communicated not just once, but consistently, until it’s embedded in the company culture.” 

Ethics needs to be cross-functional, warns Polavarapu: “Steering committees that span compliance, data science, HR, product and engineering ensure every release is stress-tested for unintended consequences before it ships.”

But Buytendijk maintains that an AI ethics officer should chair a digital ethics advisory board that doesn’t act as a steering committee: “There should be no barrier for line or project managers to hand in their ethical dilemmas. If it is a steering committee, line and project managers lose control over their project, and that is a barrier.”

In practice, he suggests creating advisory boards with sufficient authority: “We asked the advisory boards we have been talking with about how much it happens that their recommendations are not followed, and that essentially never happens.”

Doing well by doing good

Even so, AI ethics officers are unlikely to have the power to block widespread trends with ethical impacts, such as agentic AI that automates workflows and may reduce the number of staff required.

A recent NTT Data survey shows the tension: 75% of leaders say the organisation’s AI ambitions conflict with its corporate sustainability goals. A third of executives say responsibility matters more than innovation, another third rate innovation higher than safety, and the remaining third assign equal importance to both.

The solution may be to view AI ethics and governance not as the necessary cost of avoiding loss (of trust, reputation, customers or even money, if fines are incurred), but as proactively generating longer term value – whether that’s recognition of industry leadership or simply doing what the business does better.

“Responsible AI isn’t a barrier to profit, it’s actually an accelerator for innovation,” Boiniodiris says. She compares it to guardrails on a racetrack that let you go fast safely. “If you embed strong governance from the start, you create the kind of framework that lets you scale responsibly and with confidence.”

AI ethics isn’t just about compliance or even good customer relations: it’s good business and competitive differentiation. Companies embracing AI ethics audits report more than double the ROI of those who don’t demonstrate that kind of rigour. And the Center for Democracy & Technology’s report on Assessing AI is a comprehensive look at how to evaluate projects to achieve those kinds of returns.

If you embed strong governance from the start, you create the kind of framework that lets you scale responsibly and with confidence
Phaedra Boiniodiris, IBM Consulting

The recent ROI of AI ethics paper from the Digital Economist builds on tools such as the Holistic Return On Ethics Framework developed by IBM and Notre Dame, and Rolls-Royce’s Aletheia Framework AI ethics checklist, adding metrics for an ethical AI ROI calculator. Rather than treating ethical AI as a cost, “it’s a sophisticated financial risk management and revenue generation strategy with measurable, substantial economic returns”.
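As an illustration of the arithmetic only – the categories and figures below are hypothetical assumptions, not the paper’s actual metrics or methodology – such a calculator ultimately nets avoided losses and retained revenue against the cost of the governance programme:

    # Hypothetical ethical AI ROI sketch; not the Digital Economist's,
    # IBM/Notre Dame's or Rolls-Royce's actual methodology.
    def ethics_roi(fines_avoided, incidents_avoided, revenue_retained, cost):
        """Return net benefit as a multiple of programme cost."""
        return (fines_avoided + incidents_avoided + revenue_retained - cost) / cost

    # e.g. a £500k governance programme credited with avoiding a £400k fine,
    # £300k of incident-response costs and £600k of churned revenue
    print(f"{ethics_roi(400_000, 300_000, 600_000, 500_000):.1f}x")  # 1.6x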

Lead author Zalabak describes it as “the right information for somebody who could not care less about ethics – ultimately, what’s the business case?”, and she describes AI ethics as “a huge opportunity for people to be amazed by the exponential potential of good”.

A clear ethical AI framework makes a company a more attractive, less risky investment, adds JMAN Group CEO Anush Newman: “When we’re looking at potential portfolio companies, their approach to AI governance and ethics is becoming a serious consideration. A robust data strategy, which inherently includes ethical considerations, isn’t just ‘nice to have’ anymore, it’s fast becoming a necessity.”

Organisations will almost certainly need to adopt a more holistic approach to evaluating risks and harms rather than marking their own homework. AI regulations remain a patchwork, but standards can help. Many enterprise customers now require verifiable controls such as ISO/IEC 42001, which attests that an Artificial Intelligence Management System (AIMS) is operating effectively, Polavarapu notes.

The conversation has moved on from staying on the right side of regulation such as the EU AI Act to embedding AI governance throughout product lifecycles. Grandi adds that UK firms look to the AI Opportunities Action Plan and the AI Playbook for guidance – but still need the internal clarity an AI ethics officer could bring.

Purcell recommends starting by aligning AI systems with their intended outcomes – and with company values. “AI alignment doesn’t just mean doing the right thing, it means, ‘Are we meeting our objectives with AI?’, and that has a material impact on a business’s profitability. A good AI ethics officer is someone who can show where alignment with business objectives also means being responsible, doing the right thing and setting appropriate guardrails, mechanisms and practices in place.”

Effective AI governance requires principles such as fairness, transparency and safety, plus policies and practices that ensure systems follow those policies and deliver on those principles. The problem is that many companies have never set down what their principles are.

“One of the things we’ve found in research is that if you haven’t articulated your values as a company, AI will do it for you,” warns Purcell. “That’s why you need a chief AI ethics officer to codify your values and principles as a company.”

And if you need an incentive for the kind of cross-functional collaboration he admits most large enterprises are terrible at, Purcell predicts at least one organisation will suffer a major negative business outcome, such as considerably increased costs, within the next 12 months – probably from “an agentic system that has some degree of autonomy that goes off the rails”.
