A collaborative, multi-stakeholder approach that takes public opinion into account is key to developing ethical frameworks for the use of data and artificial intelligence (AI), according to Roger Taylor, chair of the new Centre for Data Ethics and Innovation.
The centre, unveiled in June, was the first initiative announced by the Department for Digital, Culture, Media and Sport (DCMS) since it took over responsibility for data policy and governance from the Government Digital Service (GDS) last spring.
The DCMS billed the centre as a core component of the government’s Digital Charter, claiming that it will help the UK become a global leader in innovation-friendly regulation.
Now, after the unveiling of its multidisciplinary advisory board at the Open Data Institute (ODI) annual summit on 20 November, the centre is ready to begin “taking inclusive action to harness the potential benefits of data and AI”.
Upcoming research by the BBC about the impact of AI neatly summarises public opinion about the technology, says Taylor, who also founded healthcare data management and analysis firm Dr Foster.
“The first finding was that actually most people understand, or have a reasonable understanding of, what we’re talking about here – that there’s going to be a bunch of stuff done by machines,” he says. “There’s a very broad consensus that this is going to have an enormous impact on their life and on everybody else’s lives.
“Also, there was an almost universal feeling that they had no possibility of influencing it in any way, they had no agency over what’s going to happen, and I think that sums up how a lot of people feel about this.”
Addressing this concern will be a top priority for the Centre for Data Ethics and Innovation, says Taylor. “I would say that if there’s one objective for the centre, it is to make people feel that these technologies are not something that is just going to be done to them, but something they are going to have agency in how they are deployed across society, whether it’s collectively or individually, and how it’s going to impact their lives.”
The individual versus the collective
A major issue in the data governance and ethics space is the emerging tension between the impact of data on the individual and the impact on society as a whole, says Taylor.
“One way of looking at the sum of the problems we’ve got is that data protection legislation is grounded primarily in thinking about how individuals protect themselves,” he adds. “But a lot of what we’re talking about here is that, as you create very powerful datasets and algorithms, it often doesn’t make sense to just think about the individual.
“There are many instances where that is the right route, but there is an aspect to it which is: how do we collectively engage with this, because the power of these technologies can shape our whole society? We need to have mechanisms to think collectively about how we want to use these technologies.”
Taylor points to the imbalance of power between organisations and governments on the one hand, and consumers on the other. “Their knowledge of the customer’s behaviour far exceeds the customer’s knowledge of their behaviour,” he says.
Read more about AI and data ethics
- A new centre for data ethics and innovation will drive UK government policy making regarding data sharing and the use of public data.
- Some believe Theresa May handing over government data policy responsibility to DCMS is “downgrading digital government”, while others see it as a positive move.
- Startups have been using an accelerator’s open data to develop fitness-related products in an effort to encourage people to be more active.
This matters because data about a single individual is not very powerful on its own, but when it is aggregated and combined with other people’s data, much deeper and more complex insights can be gained – something only governments and large organisations are in a position to do, says Taylor.
“The question is, who should control that power? What we’re talking about really is trying to nuance the degree to which that power is either held by particular organisations – in which case, is it properly held accountable for the way it’s using that power – or are there mechanisms that will distribute that power more evenly across people?” he says.
“I think the key to what the centre will be doing is bringing together different disciplines, working with regulatory organisations, with entrepreneurs, with people who want to buy and use these technologies, with the individuals who will be affected by the use of these technologies, and start to build a consensus around a legitimate way of regulating these technologies.
“It’s not simply about the mechanisms we need to put in place to get a grip on what’s going on, it’s also about what mechanisms we need to make sure that decisions we make with this information are legitimate and are supported by the public at large.”
Another part of the Centre for Data Ethics and Innovation’s work will be to engage in international collaboration to address the global issue of data governance, and these efforts will not be hampered by Brexit, says Taylor.
“We will actively be focused on trying to frame international agreements about the right approaches, and so we will absolutely be going out and seeking allies around the world,” he says.
“I think there’s actually an opportunity for both the UK on its own and the UK working within Europe to lead the world in this space, to start to establish the kinds of institutional arrangements, rights and powers that are necessary, and the kinds of structures that would be legitimate in a democratic country to ensure that both individuals and society collectively have sufficient control and agency over these technologies.”