Social media targeting algorithms need regulation, says CDEI

The Centre for Data Ethics and Innovation is recommending regulation of social media algorithms as part of a drive to make user targeting safe and ethical

The government-backed Centre for Data Ethics and Innovation (CDEI) has published its final report on online targeting, following on from its publication, in July 2019, of two linked reports into online targeting and bias in artificial intelligence (AI)-powered decision-making.

It is calling on the government to regulate social media algorithms developed by companies such as Facebook as part of a drive to make user targeting safe and ethical.

The government AI and data ethics adviser identifies the online harms regulator the government created last year, the UK Council for Internet Safety (UKCIS), as a key player, alongside the Information Commissioner’s Office and the Competition and Markets Authority.

“When technology develops very swiftly, there comes a moment when the implications of what is happening start to become clear. We are at such a moment with online targeting. A moment of recognition of the power of these systems and the potential dangers they pose,” wrote Roger Taylor, chair of the CDEI, in the centre’s final report on online targeting, published today.

“Most people do not want targeting stopped. But they do want to know that it is being done safely and ethically. And they want more control. These are very reasonable desires. But that does not mean it is easy – or even possible – to accommodate them. In making our recommendations we are proposing actions that kick-start the process of working out how public expectations can best be met.”

The report is based on research led by David Beer of the University of York, a UK-wide programme of public engagement and a regulatory review covering eight regulators.

The research, conducted with Ipsos Mori, found that only 29% of people trust platforms to target them in a responsible way, and that when they try to change their settings, only one-third (34%) trust these companies to do what they ask. Some 61% of respondents favoured greater regulatory oversight of online targeting, compared with 17% who supported self-regulation.

CDEI’s Taylor said: “To build public trust over the long term, it is vital for the government to ensure that the online harms regulator looks at how platforms recommend content, establishing robust processes to protect vulnerable people.”

Bernadka Dubicka, chair of the Child and Adolescent Faculty at the Royal College of Psychiatrists, said: “We completely agree that there needs to be greater accountability, transparency and control in the online world. It is fantastic to see the Centre for Data Ethics and Innovation join our call for the regulator to be able to compel social media companies to give independent researchers secure access to their data.”

The report recommended that the UKCIS “should have the power to require platforms to give independent researchers secure access to their data where this is needed for research of significant potential importance to public policy”.

“Online targeting systems may have a negative effect on mental health, for example as a possible factor in ‘internet addiction’. They could contribute to societal issues including radicalisation and the polarisation of political views,” the report said.

“Platforms should be required to maintain online advertising archives, to provide transparency for types of personalised advertising that pose particular societal risks. These categories include politics, so that political claims can be seen and contested and to ensure that elections are not only fair but are seen to be fair; employment and other ‘opportunities’, where scrutiny is needed to ensure that online targeting does not lead to unlawful discrimination; and age-restricted products.”

The report acknowledged, however, that personalisation of users’ online experiences increased the usability of many aspects of the internet. “It makes it easier for people to navigate an online world that otherwise contains an overwhelming volume of information. Without automated online targeting systems, many of the online services people have come to rely on would become harder to use,” it said.

Nevertheless, it continued, online targeting has helped to put a handful of global online platform businesses in “positions of enormous power to predict and influence behaviour”, but “current mechanisms to hold them to account are inadequate”.

“We have reviewed the powers of the existing regulators and conclude that enforcement of existing legislation and self-regulation cannot be relied on to meet public expectations of greater accountability,” the report said.

It went on to recommend that “the government strengthen regulatory oversight of organisations’ use of online targeting systems through its proposed online harms regulator”.
