The intersection of AI Governance and Innovation
This is a guest blog post by Michelle Eisenberg, General Counsel, Unit4.
Companies often battle with two conflicting pressures: (i) the push for innovation – technology, demos, market expectations – and (ii) the human reality – gaps in skills, entrenched routines, limited trust. Any workflow that adds steps instead of removing them is likely to meet resistance, so a slowing adoption rate is sometimes not an obstruction but a necessary friction. This conundrum captures the challenge at the heart of one of the most important tasks in adopting AI: building the AI Governance framework. It is the foundation for embracing AI, yet it must balance supporting innovation with protecting the organisation. When do you know the governance framework is ready? What if the architecture cannot support the integration of AI and the effective extraction of data? What if the security posture slows down the use of AI, or conversely opens the company up to too much risk?
Successful adoption of AI requires a balance that ensures governance but allows for innovation. Tools should fit naturally into the way we work, and the underlying systems and processes should be built to let innovation flow.
However, that does not mean the AI Governance framework must be perfect. On the contrary, companies must be willing to learn on the go, course correct, retain agility, be comfortable with the unexpected and accept that full compliance is a moving target.
That said, organisations can take practical steps to ensure they chart a path to ethical AI adoption which minimises the potential bumps in the road.
Learning as you go
The idea of learning as you go is uncomfortable, but the constant evolution of AI and its many moving parts make it very difficult to build a policy framework that can keep pace. This is a key challenge for governance teams tasked with providing guidance to employees on AI best practice, safeguarding sensitive information and ensuring compliance.
This is nothing new, because technology has always been one step ahead of policy. AI, though, is forcing organisations into a continuous learning mode as they adjust policy in real time. This challenge should not be underestimated. Even seasoned IT and data professionals face situations every day where they do not immediately have the answers and need to upskill themselves. The very ‘experts’ employees are used to turning to for help need expert help themselves.
The concern for leadership is that if you are learning as you go, it may either increase the company’s risk exposure or slow innovation down.
Expect the unexpected
So how can organisations avoid such dangers and establish governance policies that are workable, pragmatic and facilitate innovation? Embracing a mantra for AI ethics and policy of “Expect the unexpected” is a good place to start.
AI is also posing new questions that do not fit neatly into the traditional governance and compliance boxes in areas like data security and data transparency. For example, take a mindfulness app made available to all employees, which includes an AI agent that can act as a companion and provide some forms of counselling. What if an employee becomes dependent on the app (the ‘therapist’) but then leaves the company and no longer has access to the support? What responsibility, moral or otherwise, does the company have?
This hypothetical scenario also raises questions about where responsibility for such decisions should sit. Should it lie with the Chief People Officer or the General Counsel? Or should responsibility be shared by the executive leadership team? It is important to establish this clearly from the outset.
Accept that you will never be fully compliant
It may sound unconventional, but in truth the ever-changing nature of technology means companies may never be fully compliant. Senior leadership teams willing to accept this, rather than be paralysed by worry, will be better positioned to cope with the unpredictable nature of AI’s impact.
Companies accept various risks every day, and this is no different. The key is transparency at the right levels, prioritising the areas employees and customers truly care about, and making reasonable efforts to maintain and strengthen compliance. As we have seen in other areas, regulators are much more sympathetic when a company can show it is on the right journey.
The first crucial step is establishing an AI Governance Committee to define the organisation’s AI principles and overarching governance framework. This committee serves as the foundation for responsible AI adoption, and its effectiveness depends on adhering to these core tenets:
Define non-negotiable principles. These are the core values, particularly around data security, transparency, and ethical use. They are the foundational guardrails for all AI initiatives.
Ensure diverse representation. Pull together a cross-section of decision-makers across the business, not just technology leaders, but also representatives from legal, HR, operations, and other key functions.
Translate principles into practice. Abstract ‘rules’ are not effective if employees don’t understand how to apply them. The committee should develop clear, practical policies, such as an AI Acceptable Use Policy, that guide staff on how, when, and why they can use AI in their daily work.
Map AI use cases across the organisation. Different scenarios require different approaches. The committee should categorise how AI is being used throughout the business. At Unit4, we identified three distinct cohorts, which proved valuable because governance policies can be tailored to each category’s specific risks and opportunities.
These categories are:
Internal productivity: AI as an internal productivity tool.
Customer facing services: AI tools used to improve service to customers.
Product & Technology: AI embedded in products such as Unit4’s Advanced Virtual Agent (Ava).
Finally, as we stand at the intersection of unprecedented technological capability and human need, I feel it is important to remind ourselves about empathy, which must be a crucial part of any AI Governance framework.
Do not underestimate the human impact
The team leading governance must not assume a given level of understanding or openness, either among the broader employee base or among the experts. Even those experienced in technology are learning on the job. This can lead to uncertainty and anxiety, so it is incumbent on leaders to show empathy. Some employees are reluctant to use AI because it is so different from how they usually work, while others lack the confidence to use it because they do not have the right skills or even because they feel they are ‘cheating’. This is where clear identification of the practical uses of AI, and the relevant upskilling to support them, should come in.
Similarly, customers may have concerns and questions about the use of AI in products and services, so it is important to maintain empathy and educate them on the organisation’s approach to governance. Transparency is key.
Ultimately, successful AI transformation comes down to people, not just technology. It requires assembling diverse voices at the table, approaching uncertainty with both courage and humility, accepting that iteration is part of innovation, and maintaining the flexibility to adjust course as we learn. When we anchor our efforts in these principles and keep the human element at the centre, we don’t just implement AI; we shape a future where technology genuinely serves humanity’s best interests.
