
Ethics key to AI development, says Lords committee

House of Lords Select Committee calls for government to draw up an ethical code of conduct, which organisations developing AI can sign up to

Ethics should be put at the centre of artificial intelligence (AI) adoption to ensure it’s developed “for the common good and benefit of humanity”, according to a report from the House of Lords Select Committee.  

The report, entitled AI in the UK: ready, willing and able, said the UK is in a unique position to lead international development of AI – but for the country to take full advantage of the benefits new technologies can bring, any development must be centred around ethics. 

This is in line with prime minister Theresa May’s January 2018 speech at the World Economic Forum in Davos, where she said the UK is already one of the best in the world when it comes to AI research and development, and the country is prepared to “bring AI into government”, working to become a world leader in “ethical AI”.   

The committee said the UK is well placed to “help shape the ethical development of artificial intelligence and to do so on the global stage”.

“To be able to demonstrate such influence internationally, the government must ensure it is doing everything it can to maximise the benefits of AI for everyone in the country,” said the report.

The committee set out a series of principles it believes AI development should follow, including that AI should “operate on principles of intelligibility and fairness” and not be used to weaken data rights and privacy. It added that “the autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence”.

The committee also called for a code of ethics, and said the government has a huge opportunity to help shape the AI conversation.

“Many organisations are preparing their own ethical codes of conduct for the use of AI,” it said. “This work is to be commended, but it is clear there is a lack of wider awareness and coordination where the government could help.”

The government is already in the process of establishing the world’s first national advisory body for AI – allocating £9m to a Centre for Data Ethics and Innovation, which it hopes will “ensure safe, ethical and ground-breaking innovation in AI and data-driven technologies”.


The committee said the new centre, together with the AI Council and the Alan Turing Institute, could be instrumental in the ethics groundwork.

It said: “We recommend that a cross-sector ethical code of conduct, or ‘AI code’, suitable for implementation across public and private sector organisations which are developing or adopting AI, be drawn up and promoted by the Centre for Data Ethics and Innovation – with input from the AI Council and the Alan Turing Institute – with a degree of urgency.”

Committee chair Lord Clement-Jones said it is hugely important that ethics take “centre stage in AI’s development and use”.

“The UK contains leading AI companies, a dynamic academic research culture and a vigorous startup ecosystem – as well as a host of legal, ethical, financial and linguistic strengths,” he said. “We should make the most of this environment, but it is essential that ethics take centre stage in AI’s development and use.

“An ethical approach ensures the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse.”

The data issue

Access to huge, and constantly growing, amounts of data is a key reason AI is developing rapidly. However, the way data is currently gathered and accessed is not fit for purpose, according to the committee.

It said: “We have heard considerable evidence that the ways in which data is gathered and accessed needs to change, so that innovative companies – big and small – as well as academia, have fair and reasonable access to data, while citizens and consumers can protect their privacy and personal agency in this rapidly evolving world. 

“This means using established concepts, such as open data, ethics advisory boards and data protection legislation, and developing new frameworks and mechanisms, such as data portability and data trusts.”

The committee added that the government must take care to avoid big companies monopolising data, and ensure greater competition in the market.

Also key, it said, is that the data used is not biased. The committee said it was concerned that many of the datasets currently used to train AI systems are not representative of the wider population, meaning those systems could learn to make unfair decisions.

“While many researchers, organisations and companies developing AI are aware of these issues, and are starting to take measures to address them, more needs to be done to ensure that data is truly representative of diverse populations, and does not further perpetuate societal inequalities,” said the committee.

It added that a challenge should be created, as part of the Industrial Strategy Challenge Fund, to develop authoritative tools and systems to audit and test training datasets. 

“This challenge should be established immediately and encourage applications by spring 2019,” said the committee. “Industry must then be encouraged to deploy the tools which are developed and could, in time, be regulated to do so.”

Public perception and skills

According to recent survey research by OpenText, UK citizens appear to be losing their fear of AI technology. However, concerns persist that AI will take over people’s jobs.

The committee said it’s important that retraining schemes and upskilling the public are made priorities.

“We believe that AI will disrupt a wide range of jobs over the coming decades, and both blue- and white-collar jobs which exist today will be put at risk,” it said.

“It will therefore be important to encourage and support workers as they move into the new jobs and professions we believe will be created as a result of new technologies, including AI. It is clear to us that there is a need to improve digital understanding and data literacy across society, as these are the foundations upon which knowledge about AI is built.”

As AI becomes part of daily life, it is also key that the public understands what this means, and is aware of when AI is used to make decisions about them, said the committee.

“This clarity, and greater digital understanding, will help the public experience the advantages of AI, as well as to opt out of using such products should they have concerns,” it said.

“Industry should take the lead in establishing voluntary mechanisms for informing the public when artificial intelligence is being used for significant or sensitive decisions in relation to consumers.”
