AI everywhere all at once

Artificial intelligence became mainstream in 2023. Advances in technology and accessibility led to increased awareness and use of AI.

The rapid deployment of artificial intelligence (AI) is outpacing current legislative and ethical frameworks, with politicians worldwide struggling to keep up. As a result, big tech companies are being accused of "marking their own AI homework" as they can effectively make rapid developments in the application and use of AI without too much restraint.

The US responded by getting tech giants - including Amazon, Google and Microsoft - to sign a “landmark” voluntary agreement that allowed independent security experts to test their latest algorithms and models. In the UK, the Prime Minister announced his intention to set up the world’s first AI safety institute, which will assess national security risks associated with the technology. And at the end of last year, the US, UK and over a dozen other countries unveiled an international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are “secure by design”.

2023 also saw governments worldwide racing to establish themselves at the forefront of AI regulation. The UK hosted the inaugural global AI Safety Summit, inviting selected international researchers, governments and NGOs. The more concerning elements of the technology were discussed at length, with the Prime Minister saying, “AI could pose a risk to humanity on the scale of a nuclear war”.

For some, however, there are more clear and present risks attached to the rapid adoption of AI, including the impact on jobs and the wide-scale deployment of algorithmic decision-making in areas such as the justice system, the welfare state and financial services. Along with many others, the Open Data Institute (ODI) co-signed an open letter to the Prime Minister on this issue.

The UK and EU approaches to AI regulation also diverged over the course of 2023, with the EU AI Act being more prescriptive and static than the UK’s proposed principles-based approach. The UK’s approach is likely to allow more innovation and potentially attract AI entrepreneurs to the UK; however, there is a risk that some elements will fall through the gaps between different regulators, with each assuming the other is responsible for enforcement.

The Prime Minister explained the challenges, saying lawmakers are trying to “write laws that make sense for something we don’t yet fully understand”. 

In many ways, we are in uncharted waters, but it is clear that technology is evolving at pace. Its benefits and risks need to be understood - at least to some degree - by all of us, and the need for digital and data literacy has never been greater. Policymakers and regulators should be properly equipped for the work they have to do.

AI and the models that underlie it are immensely complex, and we must ensure that those in a position to legislate on AI understand it. Open Access Government highlighted the UK’s digital and data skills shortages as an increasingly urgent problem, and according to consultancy Global Resourcing, the civil service has a particularly significant skills gap.

Data literacy

At the ODI, we were pleased to see the media and other civil society organisations focus on the need for increased levels of data literacy and other essential workplace skills. These are, of course, core to managing and working with AI; however, a report from the Alan Turing Institute found that only 27% of UK business leaders thought their non-technical workforce was sufficiently technologically literate to leverage new technology.

Like all technology, AI needs to be properly scrutinised by people who understand it. Existing UK regulation in different sectors already protects us from some harmful effects - for example, when AI-based decisions are discriminatory or adversely affect someone’s health - but the UK’s regulatory approach will need to evolve continually to remain “fit for purpose”. The Post Office Horizon scandal has demonstrated the grave and tragic impact that can occur when technology goes wrong or is inadequately understood by those commissioning or implementing it.

When it comes to AI, public concern has continued, especially around the potential impact on jobs. The creative industries feel especially vulnerable, with creatives concerned about derivative works; the use of human creations as source material without compensation for the originators; and AI potentially replacing models and actors in TV and film. These fears helped fuel the 148-day strike by the Writers Guild of America and a Getty Images lawsuit against Stability AI. Demonstrating the tension between differing attitudes to AI within sectors, parts of the creative industry had already adopted generative AI to produce outstanding imagery. Generative AI has even been used to produce award-winning fine art, in part to stimulate debate about its use in the creative field.

As the use of generative AI grew, new tools emerged designed specifically to protect originators’ interests. These tools, such as Nightshade, "poison" data connected with human work and confuse AI art models - eroding their ability to generate meaningful images based on the unlicensed work of human artists.
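
To give a flavour of the mechanics, the short Python sketch below shows the simplest possible form of pixel-level perturbation: adding noise too faint for a person to notice, but enough to shift the statistics a model learns from an image. It is a toy illustration only - Nightshade itself relies on carefully optimised, targeted perturbations rather than random noise - and the file names are hypothetical.

# Toy illustration only - NOT Nightshade's actual algorithm.
# The idea: nudge pixel values by an amount too small for a person
# to notice, so the image looks unchanged to a human viewer while
# its machine-readable statistics shift.
import numpy as np
from PIL import Image

def perturb_image(src: str, dst: str, epsilon: float = 4.0, seed: int = 0) -> None:
    # Save a copy of the image with bounded random noise (+/- epsilon per channel)
    rng = np.random.default_rng(seed)
    pixels = np.asarray(Image.open(src).convert("RGB"), dtype=np.float32)
    noise = rng.uniform(-epsilon, epsilon, size=pixels.shape)
    Image.fromarray(np.clip(pixels + noise, 0, 255).astype(np.uint8)).save(dst)

# Hypothetical file names, for illustration only
perturb_image("artwork.png", "artwork_perturbed.png")

Real poisoning tools go much further, optimising each perturbation so that models trained on the altered images learn the wrong associations while the images remain visually indistinguishable to people.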

Thoughts on the year ahead

As we’re now in a General Election year, the ODI’s policy team is shaping recommendations that we would like to see adopted by the next government.

We feel that an obsession with new technology has overshadowed the role of data as the feedstock of most of the recent developments. We strongly believe that a commitment to building and maintaining data infrastructure must be at the heart of the UK’s digital and data agenda. We urge each party to strengthen this essential national data infrastructure, to increase data literacy and to ensure that all policy decisions around data consider transparency, safety and equity.

Equally important will be commitments to keeping human decision-makers in the loop when AI is used to make choices. If an algorithm has determined - even if only in part - the outcome of a decision that affects a person, there must always be a human with the expertise and the capacity to hear an appeal against that decision.

The latest wave of AI models has disrupted how we think about certain components of our data infrastructure. It has triggered discussion and debate on the data we choose to share, the value of data we publish openly and the rights we have over it. It has also highlighted the importance of considering the quality, governance and biases of datasets.

The ODI’s data-centric AI work builds on our belief that AI practice must evolve: engineers should carefully consider the data that feeds their models, and users of those models should consider its quality and biases. Generating high-quality data for AI, and working to ensure it is used responsibly and equitably, are key to making sure everyone benefits from recent technological innovations - and to harnessing the potential, and mitigating the risks, of an AI-enabled world.

Resham Kotecha is global head of policy at the Open Data Institute.
