Ethical AI requires collaboration and framework development

Countries and companies are attending to the ethics of artificial intelligence use. Such efforts need to be coordinated collectively if they are to result in solid frameworks

Countries are paying increasing attention to the ethics of artificial intelligence (AI). The UK has appointed Roger Taylor, co-founder of healthcare data provider Dr Foster, as the first chair of its new Centre for Data Ethics and Innovation, and started a consultation on the centre’s remit. Singapore is establishing an advisory council on the ethical use of AI and data, chaired by the city-state’s former attorney-general VK Rajah, with representatives of companies and consumers. Australia’s chief scientist has called for more regulation of AI, and the National Institution for Transforming India, a government think tank, has proposed a consortium of ethics councils.

Some companies are doing likewise. Google’s chief executive, Sundar Pichai, recently published a list of AI principles, including that the technology should be socially beneficial, avoid creating or reinforcing unfair bias, be built and tested for safety, be accountable, incorporate privacy by design and uphold high scientific standards.

Earlier this year, more than 3,000 of Google’s staff signed a letter to Pichai arguing against the company’s involvement in Project Maven, a US military programme using AI to target drone strikes. In his list of principles, Pichai responded that Google would not design or deploy AI in weapons, other technologies likely to cause overall harm, “surveillance violating internationally accepted norms” or anything which contravenes international law and human rights.

“We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas,” he added.

Google’s principles are worthy of consideration by IT professionals elsewhere, says Brhmie Balaram, a senior researcher for the RSA (the Royal Society for the encouragement of Arts, Manufactures and Commerce).

“Developers have a lot of power in this field to act ethically, to ensure their technology is being used to benefit society rather than hinder or harm,” she says.

Public ignorance

Research for a recent RSA report on ethical AI, carried out by YouGov and involving 2,000 Britons, found that just 32% were aware that AI is used in decision-making, with only 14% familiar with the use of such systems in recruitment and promotion, and 9% with their use in criminal justice, such as deciding whether to grant bail. In both areas, 60% of respondents opposed such usage, with a lack of empathy and compassion cited as the top reason.

The report recommended that organisations should discuss the use of AI with the public. Balaram says this has a number of potential benefits, not least that the General Data Protection Regulation (GDPR) gives data subjects the right to an explanation of automated decision-making. Working with members of the public can help to develop explanations that are meaningful to them.

“Citizens are really interested in the rationale for a decision,” she says, rather than the exact algorithm. “They want to understand why they’ve received that decision and the processes behind it.”

More generally, Balaram says the public can give technologists advance warning of what kinds of automated decision-making might cause an outcry. Groups of experts can suffer from groupthink, so the RSA has used panels which are both demographically representative of the population, and also represent those who tend to oppose as well as support technology. “Being exposed to that diversity can change thinking sometimes,” she says.

Ethics as oil?

A major ethical problem can derail an entire project, as happened to the English National Health Service’s Care.data patient data-sharing programme. “Ethics used to be perceived as something that was an obstacle,” says Mariarosaria Taddeo, a research fellow at the Oxford Internet Institute and deputy director of its digital ethics laboratory. “We have moved from a moment when ethics was considered grit in the engine, to a moment where ethics is oil in the engine.”

She says IT professionals can implement high ethical standards by being mindful of unintended consequences, including testing systems and examining possible deviations in their design. An important task is to examine training data for bias, as biased data is likely to produce similarly biased outcomes.
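
As a rough illustration of the kind of training-data check Taddeo describes, the sketch below compares positive-outcome rates across groups in a small, made-up dataset. The column names, data and the 0.2 threshold are illustrative assumptions, not part of any particular system or standard.

```python
# A minimal sketch of a training-data bias check (hypothetical column names and data).
import pandas as pd

def outcome_rate_by_group(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Return the rate of positive labels for each value of a protected attribute."""
    return df.groupby(group_col)[label_col].mean()

if __name__ == "__main__":
    # Illustrative training data: 'approved' is the label a model would learn from.
    train = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
        "approved": [1,    1,   0,   0,   0,   1,   0,   1],
    })

    rates = outcome_rate_by_group(train, "group", "approved")
    print(rates)

    # Flag the data for human review if the gap between groups exceeds an agreed threshold.
    if rates.max() - rates.min() > 0.2:  # 0.2 is an arbitrary example threshold
        print("Warning: outcome rates differ across groups - review before training.")
```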

But as it is impossible to predict all unintended consequences, AI should also be supervised when it is in operation. “It’s very difficult to predict AI, very difficult to explain how AI works,” Taddeo says. “We have to have a human on the loop, supervising and possibly intervening when things go wrong.”

Such supervision should also include broader auditing, monitoring and stress-testing of AI systems already in operation, by users and regulators. The biased judgements generated by US criminal justice IT company Northpointe’s Compas algorithm could have been picked up by its law enforcement users, rather than being exposed eventually by public interest publisher ProPublica.
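
As a hedged sketch of the kind of ongoing audit described above, the snippet below compares the rate of adverse decisions a live system logs for each demographic group and uses the “four-fifths” disparate-impact rule of thumb as a trigger for human review. The log format, group labels and threshold are assumptions for illustration, not a description of any real deployment.

```python
# Hypothetical sketch of auditing a deployed decision system's logged outputs.
from collections import defaultdict

def adverse_rate_by_group(decisions):
    """decisions: iterable of (group, is_adverse) pairs logged by the live system."""
    totals, adverse = defaultdict(int), defaultdict(int)
    for group, is_adverse in decisions:
        totals[group] += 1
        adverse[group] += int(is_adverse)
    return {g: adverse[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest favourable-outcome rate across groups."""
    favourable = {g: 1 - r for g, r in rates.items()}
    return min(favourable.values()) / max(favourable.values())

if __name__ == "__main__":
    # Illustrative decision log: (group, adverse_outcome)
    log = [("A", True), ("A", False), ("A", False),
           ("B", True), ("B", True), ("B", False)]
    rates = adverse_rate_by_group(log)
    ratio = disparate_impact_ratio(rates)
    print(rates, ratio)
    if ratio < 0.8:  # the 'four-fifths' rule of thumb, used here only as an example trigger
        print("Escalate to human reviewers for closer scrutiny of the system.")
```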

“It’s stopping to trust blindly in technologies,” says Taddeo, adding that the fact that many AI systems work as “black boxes”, with no way to see how they are reaching decisions, strengthens the case for users to be critical and questioning.

Taddeo adds that it is often hard to identify who is legally responsible when something goes wrong in an AI system. But this means the ethical burden is widened: “It means everyone involved in the design and development shares some of the responsibility.”

Joanna Bryson, a cognitive scientist at the University of Bath and Princeton University, adds that trying to shift responsibility for decisions to algorithms and software is an untenable position. “People mistake computers for maths. There’s nothing in the physical world that’s perfect,” she says, and that includes AI systems built by humans. “Someone is responsible.”

There are pragmatic reasons for assuming responsibility for AI decisions, not least that regulators and courts may allocate it anyway. Germany’s Federal Cartel Office, for example, criticised Lufthansa for trying to “hide behind algorithms” when the airline’s fares rose sharply after rival Air Berlin went out of business.

What’s an IT professional to do?

At a basic level, Bryson says programmers and others making decisions on setting up systems should work carefully – such as by writing clean code, ensuring personal data is stored securely and examining training data for biases – and document what they do.
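
One hypothetical way to “document what they do” is to keep a lightweight, machine-readable record alongside the system, noting the data used, the checks performed and who is accountable. The field names and values below are purely illustrative assumptions, not a prescribed format.

```python
# A hypothetical, lightweight record of decisions made when building a system.
import json
from datetime import date

record = {
    "system": "loan-approval-model",                  # hypothetical system name
    "date": date.today().isoformat(),
    "training_data": "applications_2015_2018.csv",    # hypothetical data source
    "checks_performed": [
        "outcome rates compared across demographic groups",
        "personal data encrypted at rest",
        "code reviewed by a second engineer",
    ],
    "known_limitations": "under-represents applicants under 25",
    "responsible_owner": "team-lead@example.com",
}

# Write the record next to the code so it can be reviewed and audited later.
with open("decision_record.json", "w") as f:
    json.dump(record, f, indent=2)
```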

But IT professionals should also consider the ethics of potential employers. Bryson tells the story of someone who considered a job with Cambridge Analytica, the data analytics company which collapsed after exposure of its use of data gathered through Facebook. Employees at the company warned the person they would hate themselves and would be helping people they would despise.

Bryson recommends discussing potential employers with peers, which was what helped the person decide not to take the Cambridge Analytica job. Specific things to investigate include whether the organisation has clear pathways of accountability, including a manager who will listen and accept information from you; good practice on security, including board-level responsibility and an executive tasked with listening to employees’ concerns; and good general practice, including programmers having access to the code base.

Ethics advisory boards

A recent report from the House of Lords select committee on artificial intelligence recommended that organisations should have an ethics advisory board or group. Its chair, Timothy Francis Clement-Jones, a Liberal Democrat peer, says that such boards could work in a similar way to ethics committees within healthcare providers, which decide on whether research projects go ahead. Some leading AI companies have set up such committees, including Alphabet-owned DeepMind, although its membership is not disclosed.

Clement-Jones adds that a diverse workforce – diverse not only in demographics but also in educational background, as those with a humanities background will consider things differently from scientists – should help. There is also potential in using AI techniques that require less data, and so have less need for the vast archives reaching back many years that are more likely to contain biases.

Clement-Jones, who is also London managing partner of law firm DLA Piper, says that despite the legal sector’s early use of AI, it is not necessarily a model for other sectors. “Trust is already there between client and lawyer,” he says. “We tell our clients we are using AI for a project upfront.” Furthermore, AI is typically used for low-level work, such as looking for useful material in a mass of documents.

Significant ethical problems arise when AI makes major decisions about people, such as assessing disease symptoms or deciding whether or not to offer a service. Clement-Jones says this is more likely in mass-market service industries such as healthcare and finance. He believes that industry regulators such as the UK’s Financial Conduct Authority are best placed to examine this.

“We’re not keen on regulating AI as such,” he says, speaking as a parliamentarian; industry-specific legislation makes more sense, such as the automated and electric vehicles bill currently going through Parliament.

Ethical frameworks

Aside from laws and regulations, academics and companies are collaborating to establish ethical frameworks for AI. The Leverhulme Centre for the Future of Intelligence, which involves several universities and societies, is one of the partners in Trustfactory.ai, an initiative set up at the International Telecommunication Union’s AI for Good summit held in Geneva in May. Huw Price, the academic director of the centre, says the aims of Trustfactory include broadening trust in AI among users – including disadvantaged ones – and building trust across international, organisational and academic disciplinary borders.

Price says there are numerous discussions on ethics in AI, and professionals working in the field can take advantage of these. “There are lots of people starting to think about these things,” he says. “Don’t try to tackle it on your own, connect.”

Several companies and not-for-profit organisations are connecting through the Partnership on AI, which brings together Amazon, DeepMind, Google, IBM, Microsoft and SAP with several universities, and campaigners including Amnesty and the Electronic Frontier Foundation. Price says it is important that such partnerships do not just include Silicon Valley organisations, or those from the English-speaking world.

Price, who is also Bertrand Russell professor of philosophy at the University of Cambridge, says AI’s ethical questions will get increasingly serious. It may be possible to build a drone the size of a bee, equipped with an AI-based facial recognition system and packed full of enough explosive to kill the person the AI system identifies. “It’s the kind of issue where an individual might want to say ‘no’,” he says of someone asked to build one. Some particularly dangerous weapons, such as landmines and biological weapons, have been banned by international agreement, and similar lines could be drawn on what AI should do, perhaps outlawing interference in elections as well as more deadly applications.

Whatever choices are made, building an ethical framework for AI will need the involvement of IT professionals – and is likely to help them in their work, as well as with their consciences.
