
House of Lords launches an investigation into generative AI

The government wants the UK to lead on AI regulation, but the technology is developing at a breakneck pace

The House of Lords has put out a call for evidence as it begins an inquiry into the seismic changes brought about by generative AI (artificial intelligence) and large language models.

The speed of development and the lack of understanding about these models’ capabilities have led some experts to warn of a credible and growing risk of harm. For instance, the Center for AI Safety has issued a statement, signed by several tech leaders, urging those involved in AI development and policy to prioritise mitigating the risk of extinction from AI. But there are others, such as former Microsoft CEO Bill Gates, who believe the rise of AI will free people to do work that software can never do, such as teaching, caring for patients and supporting the elderly.

According to figures quoted in a report by Goldman Sachs, generative AI could add roughly £5.5tn to the global economy over 10 years. The investment bank’s report estimated that 300 million jobs could be exposed to automation, although other roles could be created in the process.

Large models can generate contradictory or fictitious answers, meaning their use in some industries could be dangerous without proper safeguards. Training datasets can contain biased or harmful content, and intellectual property rights over the use of training data are uncertain. The ‘black box’ nature of machine learning algorithms makes it difficult to understand why a model follows a course of action, what data were used to generate an output, and what the model might be able to do next, or do without supervision.

Baroness Stowell of Beeston, chair of the committee, said: “The latest large language models present enormous and unprecedented opportunities. But we need to be clear-eyed about the challenges. We have to investigate the risks in detail and work out how best to address them – without stifling innovation in the process. We also need to be clear about who wields power as these models develop and become embedded in daily business and personal lives.”

The areas on which the committee is seeking information and evidence include how large language models are expected to develop over the next three years, the opportunities and risks they present, and whether the UK’s regulators have sufficient expertise and resources to respond to them.

“This thinking needs to happen fast, given the breakneck speed of progress. We mustn’t let the most scary of predictions about the potential future power of AI distract us from understanding and tackling the most pressing concerns early on. Equally we must not jump to conclusions amid the hype,” Stowell said.

“Our inquiry will therefore take a sober look at the evidence across the UK and around the world, and set out proposals to the government and regulators to help ensure the UK can be a leading player in AI development and governance.”

Read more about AI regulations

  • Tech industry figures are broadly supportive of the need for artificial intelligence to be regulated, but despite growing consensus, there is still disagreement over what effective AI regulation looks like.
  • The rapid evolution and adoption of AI tools has policymakers scrambling to craft effective AI regulation and laws. Law professor Michael Bennett analyzes what's afoot in 2023.
