Government and industry figures meet to discuss AI regulation

Tech industry figures broadly support the need to regulate artificial intelligence, but despite the growing consensus, there is still disagreement over what effective AI regulation looks like

Despite a growing consensus that artificial intelligence (AI) requires regulation, there is still disagreement over exactly how the technology should be governed, and whether the UK government’s proposed “light-touch” approach will be effective.

On 6 June, tech industry leaders met with government ministers and regulators in London to discuss the UK’s approach to governing AI, as part of trade association TechUK’s second annual Tech Leadership Policy conference.

In a pre-recorded speech, secretary of state for science, innovation and technology Chloe Smith said: “It wasn’t government who kick-started the generative AI revolution… nor will the race to unlock the potential of quantum computing be won in Whitehall – it’s British businesses who are making people’s lives better, driving economic growth, and creating employment opportunities that tap into the potential of communities right across the UK.”

She added the government’s AI whitepaper “sets out an agile approach to regulating emerging tech which encourages innovation”, and that the government is working both internationally and domestically to ensure appropriate guardrails are in place to build public trust and confidence in the tech.

TechUK deputy CEO Antony Walker criticised politicians’ habit of “talking in clichés around technology” and called for “clarity” around how the government’s tech policy ambitions will be delivered. In conversation with Walker, minister for tech and the digital economy Paul Scully said the newly established Department for Science, Innovation and Technology (DSIT) is now in a position to start delivering results, having spent its first 100 days getting all the pieces in place.

Noting this includes establishing the science and technology framework, earmarking investment for various areas of emerging tech and launching a consultation on AI regulation, Scully added: “Now that we’ve got everything there, we are able to turbocharge this. So what we now need to do is to properly work at speed, to deliver, deliver, deliver.”

Regulatory models

In a keynote address, Brad Smith, vice-chair and president of Microsoft – which acted as headline sponsor for the conference and is a major investor in ChatGPT creator OpenAI – said: “We can’t afford to go into this new AI era without our eyes wide open. There are real risks, there are real problems, and we’ll have to figure out how to solve them.”

He added that, from Microsoft’s perspective, any regulatory model should be built around the tech itself, which in the case of AI means building different rules for each layer of the technology stack.

This would involve specific rules at the application level, covering how AI systems are deployed by users; at the model level, which Smith said would likely include some kind of safety review or licensing; and at the infrastructure level, ensuring that datacentres where models are deployed meet certain safety and security standards, especially if they are being used to power critical infrastructure.

Asked about the UK Competition and Markets Authority’s decision to block Microsoft’s acquisition of Activision Blizzard in April 2023, and how Microsoft now views its future in the UK, Smith said: “I’m in search of solutions. If regulators have concerns, we want to address them; if there are problems, we want to solve them. If the UK wants to impose regulatory requirements that go beyond those in the EU, we want to find ways to fulfil them.”

Policy and technology

In a session on how policy can keep pace with technological development, Bea Longworth, of Nvidia’s government affairs team, said the consensus among TechUK members is that the government’s AI whitepaper “is taking a very balanced middle ground” between the EU and the US, and that “it was thoughtful, it was differentiating the UK by creating quite a flexible environment” based on the recommendations of the March 2023 review conducted by former chief scientific adviser Patrick Vallance.

She added the approach would enable the tech to be rolled out in the real world, to “find out how well existing policy is working, where the gaps are and, if necessary, regulate.”

However, Longworth warned there was a danger of “balanced, measured policy-making being overtaken by politics, and the need to be seen to be doing something”, which could limit the UK’s ability to enable innovation while managing the risks of AI, and that there needs to be a “process for ensuring public trust” in how the tech is rolled out.

Felicity Burch, executive director of the Centre for Data Ethics and Innovation (CDEI), also said it was important to consider how the general public feels about the use of AI if regulation is going to be effective, and that research conducted by her organisation into public attitudes shows the different circumstances in which people are willing to trust AI systems.

“It turns out we trust it a lot more and we’re a lot happier with AI and data-driven tech being used when we can see a positive purpose for it,” she said, adding people were generally comfortable around the use of tech to combat the pandemic because they could clearly see the social benefits.

“Unfortunately, and this is tricky for industry, people are less trusting when they think it’s just about profit or turning a quick buck. They want to know there is something that will benefit them.”

She added that trust in AI is also highly conditional on the institution deploying it, noting that while trust in public authorities’ use of the tech is generally much higher than in the private sector’s, this itself depends on different demographics’ experiences with different institutions.

Public good

Asked by an audience member whether corporations are able to effectively act in the public good given the structural impetus to maximise shareholder value, Henry Parker, head of government affairs at AI startup Logically, said: “Nobody is saying there is no risk of shareholder value trumping ethics, and nobody is saying that AI should not be regulated under any circumstance and that there should be a soft approach applied.

“I think what we would say is that those risks need to be managed in an agile way, rather than a blunt way.”

However, while ministers and industry figures were generally welcoming of the UK government’s regulatory proposals for AI, Labour politicians expressed reservations.

Lucy Powell, shadow secretary of state for digital, culture, media and sport, for example, said that Labour would adopt an active, interventionist approach in government, to ensure the benefits of technology are spread more widely across society.

She further described the AI whitepaper as a “damp squib” and “big missed opportunity” for the UK, due to the lack of clarity about what the government will actually be doing.

“I think we missed the point on some of the issues,” she said, adding, for example, that regulators are now scrambling to build capacity “without a clear direction of travel” or division of responsibilities between them.

MP Darren Jones, who chairs the House of Commons business committee, added there also needs to be consultation with workers about AI and other emerging technologies throughout the process of procurement and deployment, to ensure their inclusive and thoughtful roll-out.

“I’ve had businesses before the committee who bring in technology to improve productivity, but they don’t involve workers in the discussion from the very start and it ends up being exploitative – it’s about productivity gains for the bottom line of the profit margin, without really setting out the incentives for the worker,” he said. “You have to have an inclusive roll-out of technology in the workplace that involves workers as well as the business.”
