
Accountability is the key to ethical artificial intelligence, experts say

The inherent opacity of artificial neural networks means human accountability is needed to keep these systems in check, rather than increased transparency of their inner workings

Artificial intelligence (AI) needs to be more accountable, but ethical considerations are not keeping pace with the technology’s rate of deployment, says a panel of experts.

This is partly due to the “black box” nature of AI, whereby it’s almost impossible to determine how or why an AI makes the decisions it does, as well as the complexities of creating an “unbiased” AI.

However, according to panellists at the Bristol Technology Showcase, transparency is not enough; greater accountability is the key to solving many of the ethical issues surrounding AI.

“Meaningful transparency doesn’t simply follow from doing things like open sourcing the code, that’s not sufficient,” says Eamonn O’Neill, professor of computer science at the University of Bath and director of the UKRI Centre for Doctoral Training in Accountable, Responsible and Transparent AI.

“Code and deep learning networks can be opaque however hard you try to open them to inspection. How does seeing a million lines of code help you understand what your smartphone’s middleware is doing? Probably not a lot.”

O’Neill says that AI needs to be accompanied by a chain of accountability that holds the system’s human operators responsible for the decisions of the algorithm.

“We don’t go to a company and say ‘I can’t tell if you’ve cooked the books because I can’t access the neurons of your accountants’ – nobody cares about accountants’ neurons, and we shouldn’t care about the internal workings of AI neural networks either,” he says.

Focusing on outcomes

Instead, O’Neill says we should be focusing on outcomes.

John Buyers, chair of the AI and Ethics panel and a partner at law firm Osborne Clarke, points to the example of Mount Sinai Hospital using an AI system called Deep Patient, which was built to trawl through thousands of electronic health records.

“Over the course of doing that, Deep Patient became very adept at diagnosing, among other things, adult schizophrenia, which human doctors simply couldn’t do,” he says. “They don’t know how the system got to that, but it was of demonstrable public benefit.”

Accountability and bias

Zara Nanu, CEO of human resources technology company Gapsquare, says: “When we talk about bias, it’s bias in terms of the existing data we have that machines are looking at, but also the bias in the algorithms we then apply to the data.”

She gives the example of Amazon, which gathered a team of data scientists to develop an algorithm that would help it identify top engineers from around the world, who could then be recruited by the company.

“All was going well, except the machines had learnt to exclude women from the candidate pool, so it was down-scoring people who had ‘woman’ on their CV, and actually scoring people higher if they had words like ‘lead’ or ‘manage’,” she says.

“Amazon came under scrutiny and tried to look at how they could make it fairer, but they had to scrap the programme because they couldn’t hand-on-heart say the algorithm wouldn’t end up discriminating against another group.”

Therefore, while accountability did not remove the potential for bias in the first place, it did make Amazon, as the entity operating the AI system, responsible for the negative effects or consequences of that bias.
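To make the mechanism concrete, the toy sketch below (invented data, not Amazon’s actual system) shows how a scikit-learn text classifier trained on historically biased hiring outcomes ends up down-weighting a gendered token, in the way Nanu describes:

```python
# Toy sketch of proxy bias: a model trained on biased historical hiring
# decisions learns to penalise gendered terms. All data is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical historical CVs and past hiring outcomes (1 = hired).
# The bias baked into the labels: CVs mentioning women's organisations
# were historically rejected; CVs with "lead"/"manage" were accepted.
cvs = [
    "lead engineer managed backend team",
    "manage infrastructure lead projects",
    "captain of women's chess club, software engineer",
    "women's coding society organiser, developer",
    "lead developer manage releases",
    "member of women's robotics group, engineer",
]
hired = [1, 1, 0, 0, 1, 0]  # biased historical outcomes

# CountVectorizer drops the apostrophe-s, so "women's" becomes "women".
vec = CountVectorizer()
X = vec.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: the model has encoded the historical
# bias as a negative coefficient on the token "women".
for word, coef in sorted(zip(vec.get_feature_names_out(), model.coef_[0]),
                         key=lambda t: t[1]):
    print(f"{word:15s} {coef:+.2f}")
```

Notably, surfacing the problem here needs only an outcome-level check on the learned weights and scores, not a neuron-by-neuron reading of a “black box” – which is the panel’s broader point about focusing on outcomes rather than internals.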

Investment fund

However, Chris Ford, a Smith & Williamson partner responsible for a $270m AI investment fund, says there’s a critical deficit in the way many corporate entities are approaching the deployment of the technology.

“MIT Sloan and Boston Consulting Group produced an interesting paper earlier this year surveying 3,000 companies globally, most of them outside North America,” he says.

“What was eye-catching was that about half of those who responded said they could see no strategic risk in the deployment of AI platforms within their business, and I find that quite extraordinary.”

Ford says this is partly due to a “fear of missing out” on the latest technological trends, but also because there is not enough emphasis on ethics in education related to AI.

He notes the example of Stuart Russell and Peter Norvig’s book, Artificial Intelligence: A Modern Approach, which has been through numerous editions and is one of the most popular course texts in the world.

“That textbook in its most recent form is up to 1,100 pages,” he says. “It’s extraordinarily comprehensive, but the treatment of ethics is confined to the first 36 pages.”

“So there’s an issue of emphasis here, both in the academic training of data scientists and in what they’re expected to engage with in the commercial world when they leave education.”

Read more about artificial intelligence

  • Stanford University’s AI Index 2019 annual report has found that the speed of AI is outpacing Moore’s Law.
  • UK government organisations are taking the lead in AI investment despite the various challenges around the implementation of the technology, according to a report by Accenture.
  • Research unveiled at the Women in Data UK conference in London today shows that most data scientists are ready to move on because of a lack of management support.

In terms of bias, the panellists also note that what is socially normal or acceptable is itself biased.

“The question then becomes: whose societal norms are we talking about? We are already seeing significant differences in perspective on the adoption of AI in different parts of the world,” says Ford.

Buyers summarises: “A lack of bias is not the introduction of objectivity, but the application of subjectivity in accordance with societal norms, so it’s incredibly difficult.”

The overall argument is that AI, like humans, will always be biased to a point of view, meaning transparency will only go so far in solving the ethical issues around the deployment of AI.

“Using AI in contrast to humans can facilitate transparency – we can fully document the software engineering process, the data, the training, the system performance – these measures can be used to support systematic inspection, and therefore transparency and regulation, but accountability and responsibility must stay with the humans,” says O’Neill.
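As a rough illustration of the kind of documentation O’Neill describes (not something shown at the panel, and every field name below is an assumption), one pattern is to log each automated decision alongside the model version, a fingerprint of its training data, and the human who answers for the outcome:

```python
# Minimal sketch of an accountability record for an automated decision.
# Field names and values are illustrative, not a real system's schema.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_version: str         # which build of the system decided
    training_data_sha256: str  # fingerprint of the data it learned from
    input_summary: str         # what the system was asked to judge
    output: str                # the decision it produced
    accountable_owner: str     # the human who answers for the outcome
    timestamp: str

def fingerprint(data: bytes) -> str:
    """Hash the training set so inspectors can verify what was used."""
    return hashlib.sha256(data).hexdigest()

record = DecisionRecord(
    model_version="screener-1.4.2",
    training_data_sha256=fingerprint(b"<training corpus bytes>"),
    input_summary="candidate CV #8841",
    output="shortlisted",
    accountable_owner="hiring-manager@example.com",
    timestamp=datetime.now(timezone.utc).isoformat(),
)

# Append to an audit trail that reviewers or regulators can replay.
with open("decision_audit.jsonl", "a") as log:
    log.write(json.dumps(asdict(record)) + "\n")
```

The point is not the specific fields but that every decision in the trail names a person: the model’s internals never need to be interpretable for someone to be answerable.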

The Bristol Technology Showcase was held in November 2019, and focused on the impact of emerging technologies on both businesses and wider society.
