The House of Commons Science and Technology Committee has launched an inquiry into the UK’s governance of artificial intelligence (AI), which will examine how to ensure the technology is used in an ethical and responsible way.
In July 2022, the Department for Digital, Culture, Media and Sport (DCMS) proposed “a pro-innovation framework for regulating AI”, highlighting the need for a clear legal framework to deal with a lack of clarity, overlaps, inconsistency and gaps in the UK’s current approach.
The committee’s inquiry will now examine whether the government’s proposed approach – which will be formalised in an upcoming whitepaper before the end of 2022 – is the right one, with a particular focus on bias in algorithms and the lack of transparency around both public and private sector AI deployments.
The inquiry will also explore how automated decisions can be effectively challenged by ordinary people, as well as how the risks posed by AI systems should be addressed generally.
This includes looking at, for example, which bodies should provide formal regulatory oversight, and how to improve the explainability of AI models to the public.
“AI is already transforming almost every area of research and business,” said Greg Clark, chair of the Science and Technology Committee. “It has extraordinary potential, but there are concerns about how the existing regulatory system is suited to a world of AI.
“With machines making more and more decisions that impact people’s lives, it is crucial we have effective regulation in place. In our inquiry, we look forward to examining the government’s proposals in detail.”
MPs will also look at other countries’ approaches to AI governance, including the European Union’s forthcoming AI Act, which the DCMS proposal criticised for a lack of granularity it said would hinder innovation.
The committee will seek written evidence on the UK’s approach to regulating AI, with submissions open until 25 November 2022.
Read more about AI governance
- Ada Lovelace Institute publishes recommendations on how European institutions can improve the Artificial Intelligence Act by establishing a ‘comprehensive remedies framework’ around those affected by the deployment of AI systems.
- The European Union is rolling out a raft of measures designed to hold technology companies more accountable for how their products and services impact end-users, including new online safety and artificial intelligence liability rules.
- The Alan Turing Institute has launched the AI Standards Hub jointly with the British Standards Institution and the National Physical Laboratory, backed by the Department for Digital, Culture, Media and Sport.
Concerns around the use of AI have already been highlighted by other parliamentary inquiries, as well as unions.
In March 2022, for example, a House of Lords inquiry into the use of advanced algorithmic technologies by UK police – including facial recognition and various crime “prediction” tools – found that these tools were being deployed without a thorough examination of their efficacy or outcomes, with policing bodies essentially “making it up as they go along”.
A report published by the Lords Home Affairs and Justice Committee (HAJC) said: “The use of advanced technologies in the application of the law poses a real and current risk to human rights and to the rule of law. Unless this is acknowledged and addressed, the potential benefits of using advanced technologies may be outweighed by the harm that will occur and the distrust it will create.”
The HAJC further described the situation as “a new Wild West”, characterised by a lack of strategy, accountability and transparency from the top down.
However, the government largely rejected the inquiry’s findings, claiming in July 2022 that there was already “a comprehensive network of checks and balances” in place.
In March 2021, the Trades Union Congress (TUC) warned that huge gaps in British law over the use of AI in the workplace could lead to discrimination and unfair treatment of working people, and called for “urgent legislative changes”.
A year later, in March 2022, the TUC said the intrusive and increasing use of surveillance technology in the workplace – often powered by AI – was “spiralling out of control”, and pushed for workers to be consulted on the implementation of new technologies at work.
A parliamentary inquiry into AI-powered workplace surveillance previously found that AI was being used to monitor and control workers with little accountability or transparency, and called for the creation of an Accountability for Algorithms Act.