
AI Summit 2020: Regulating AI for the common good

Speakers and panellists at the virtual AI Summit 2020 discussed the tensions between cooperation and competition in the development of artificial intelligence

Artificial intelligence requires carefully considered regulation to ensure technologies balance cooperation and competition for the greater good, according to expert speakers at the AI Summit 2020.

As a general purpose technology, artificial intelligence (AI) can be used in a staggering array of contexts, with many advocates framing its rapid development as a cooperative endeavour for the benefit of all humanity.

The United Nations, for example, launched its AI for Good initiative in 2017, while the French and Chinese governments talk of “AI for Humanity” and “AI for the benefit of mankind” respectively – rhetoric echoed by many other governments and supranational bodies across the world.

On the other hand, these same advocates also use language and rhetoric that emphasises the competitive advantages AI could bring in the narrower pursuit of national interest.

“Just as in international politics, there’s a tension between an agreed aspiration to build AI for humanity, and for the common good, and the more selfish and narrow drive to compete to have advantage,” said Allan Dafoe, director of the Centre for the Governance of AI at Oxford University, speaking at the AI Summit, which took place online this week.

Speaking on whether AI can empower non-governmental organisations (NGOs) and benefit social good, Stijn Broecke, a senior economist at the Organisation for Economic Co-operation and Development (OECD), added that the ferocity of competition in AI could lead to a “very unequal future”.

“One of the big risks in AI is that it leads to a winner-takes-all dynamic in competition, where some firms are capable of developing the technologies much faster than others,” he said. “They have access to data, they can invest in the tools, and in the end it leads to increased concentration in the labour market. This concentration in the labour market has the potential for huge negative consequences in terms of inequality, reduced number of jobs, reduced quality of jobs, and reduced pay and working conditions.”

Uneven development and deployment

Broecke added that OECD countries are already experiencing sharp rises in inequality, as well as a concurrent polarisation of their labour markets, something that will only be exacerbated by the uneven development and deployment of AI technologies.

“The emerging evidence on AI also shows that the people who benefit most are high-skilled people, because AI complements them, so their wages increase and it leads to an increase in inequality in the labour market,” he said.

To break these dynamics and prevent a further spread of “techno-nationalism”, Dafoe believes we must collectively define what the responsible governance of AI technologies looks like.

“Unfortunately, being a responsible actor is not going to be easy because governance of AI is not easy,” he said. “AI is a general purpose technology, and general purpose technologies have a set of properties which make them difficult to govern and to achieve certain aims without other by-product consequences.”

He added that the “social and other consequences of AI tend to be fast-changing and dynamic, which makes it hard for policymakers to devise a single solution that works in an ongoing way”.

Embracing meaningful commitments and ‘pro-social’ regulation

For Dafoe, the solution is to create the conditions for an “AI race to the top”, whereby existing incentives such as competition are used in a way that “leads to more pro-social rather than anti-social behaviour”.

“Instead of just talking about responsibility in an abstract sense, which would be easy to just be captured by public relations rhetoric or marketing, we want it to really bite – have meaningful commitments that map directly onto behaviours that are most likely to lead to beneficial outcomes,” he said. “We don’t want to just impose costly behaviour, we want behaviour that is what society needs, so that AI is deployed to achieve maximal benefits and minimise the risks.”

This focus on developing “meaningful commitments” was echoed by a number of other speakers at the AI Summit, including equality lawyer and founder of the AI Law Hub Dee Masters, who spoke of the need for business to embrace regulation that encourages more accountable behaviour.

“I think there’s been this traditional idea that business doesn’t like regulation, business doesn’t like red tape, doesn’t like being told what to do, but actually this is an area where I think we’ve got to move beyond wishy-washy ethics [statements], and we’ve got to be really clear about what businesses can and can’t do,” she said.

“We need very clear legal rules, unambiguous legal rules, but we also need rules that can be creative in the sense that they encourage good behaviour. Under the Equality Act, for example, an employer will be vicariously liable for an employee, but an employer can get around that by showing that it took all reasonable steps to stop discrimination. [By] using those sorts of really interesting models that encourage accountable behaviour, I think we can do it and I think we just have to embrace regulation rather than pretend it’s bad for business.”

She added that existing legal frameworks such as the Equality Act are already “95% there” and would only need minor alterations to make them more suitable, as “nobody was thinking about AI, automated decision-making and algorithms when it was drafted”.

A further benefit of having clear rules governing the use of AI is that it avoids the need for costly and time-consuming litigation down the line, for example against technology companies that believe current legal frameworks have no bearing on their AI operations. “I don’t think litigation is a good way of creating change as it’s after the event, it requires well-resourced individuals to [pursue] matters through the courts, [and] I don’t think we can expect our citizens to police big tech,” said Masters.


Jessica Lennard, the senior director of global data and AI initiatives at Visa, added that, for a company that operates in 200 countries, regulatory divergence is a massive problem as it creates an inconsistent patchwork of rules for enterprises to follow.

“We want to see high standards of consumer protection, and as much regulatory alignment globally as possible, but what we’re in fact seeing is some areas of divergence around the world which cause us concern,” she said. “One of those is ethics, privacy is another, data sharing is a third, and at the end of the day, this really has the potential to undermine those consumer protections and to jeopardise the cross-border data flows which you really need to build good AI.”

“I think one of the biggest issues that we want clarity on, which is not easy at all to apply in practice, is where responsibility lies, and for what,” added Lennard.

“You want to get everyone speaking the same language – especially the technical and non-technical folks who are both involved in different parts of the process – you want everyone to be clear about the process itself, the governance, the law that sits behind it, and that’s not that easy to do.”

According to Charles Radclyffe, an AI governance and ethics specialist at Fidelity International, while the deluge of ethics principles released by enterprises in recent years is a promising step, many are too far removed from practical reality to be helpful.

“What’s needed is some layer of substantive governance, and the substantive governance that you need is going to be something that really directs you towards the right answer more often than not,” he said.

“What you need is direction, what you need is clarity and certainty. I call this ‘pronouncements’ – you need clear guidance in terms of what you should do in this situation, or what you should not do in that situation – and it’s that kind of governance that’s required.”
