
Antisocial media and AI in play at Oxford Science and Ideas Festival

Facebook and other forms of social media were discussed at two linked events at the Oxford Science and Ideas Festival, alongside artificial intelligence and how it might augment humanity

For a service used by three in every 10 people on the planet, Facebook increasingly seems to lack friends. Governments and campaigners accuse it of paying too little tax, with chancellor Philip Hammond’s last Budget speech targeting the company through a planned tax on digital services. Facebook has accepted criticism that it was used to spread hate speech in Myanmar, contributing to 700,000 Rohingya refugees fleeing the country.

Speakers at two sessions at the recent IF Oxford Science and Ideas Festival, chaired by Computer Weekly’s business applications editor Brian McKenna, focused on Facebook – although the problems they described and solutions they proposed could be applied more broadly.

“Facebook is designed for a better species than ours,” said Siva Vaidhyanathan, professor of modern media studies at the University of Virginia. “We are a vengeful species, a lustful species, a shallow species. We are a species filled with people who are looking out for the short term, for instant gratification.” Most human institutions help to counteract those weaknesses, he added – but Facebook exploits them.

Vaidhyanathan, author of the recently published Antisocial media: how Facebook disconnects us and undermines democracy, pointed out how Facebook’s scale means its flaws are magnified enormously. On 30 October, the company said it had 2.27 billion monthly active users, up 10% from a year ago. “I can’t think of anything else in the world, except oxygen and water, that affects that many people regularly,” he said.

Those users are exposed to algorithmic amplification, where content that generates comments, clicks, shares or likes is brought to more people’s attention through their newsfeeds, said Vaidhyanathan. “What is not favoured on Facebook? Things that are carefully argued, soberly assessed, deeply researched, modestly presented – the kind of material that we depend on to think collectively about our problems.”

He added: “What does fly on Facebook? Things that generate strong emotions. So baby pictures and puppy pictures, but also hate speech, calls to genocide, conspiracy theories.”
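As a rough illustration of the mechanism Vaidhyanathan describes – not Facebook’s actual algorithm, whose details are not public – an engagement-weighted ranking might look something like this sketch, with invented weights and field names:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    comments: int
    shares: int
    likes: int

# Hypothetical weights: interactions that signal strong reactions count for more.
WEIGHTS = {"comments": 3.0, "shares": 2.0, "likes": 1.0}

def engagement_score(post: Post) -> float:
    """Score a post purely by the interactions it provokes,
    regardless of accuracy, tone or intent."""
    return (WEIGHTS["comments"] * post.comments
            + WEIGHTS["shares"] * post.shares
            + WEIGHTS["likes"] * post.likes)

def rank_feed(posts: list[Post]) -> list[Post]:
    # The most reacted-to content rises to the top of the newsfeed.
    return sorted(posts, key=engagement_score, reverse=True)
```

Under a scheme like this, anything that provokes reactions – a puppy picture or a conspiracy theory – scores in exactly the same way.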

Painful paradox of Facebook

In Antisocial media, Vaidhyanathan notes that Facebook has a tolerant culture and a missionary zeal to connect and empower people, but argues that the internet giant has become complicit in the rise of extremists and terrorists around the world. “The painful paradox of Facebook is that the company’s sincere devotion to making the world better has invited nefarious parties to hijack it to spread hatred and confusion,” he writes.

In Oxford, Vaidhyanathan said Facebook damages public debate in other ways. Its own highly successful advertising service undermines journalism that used to be paid for by advertising, and the company has caused further damage by allowing anonymous, targeted political advertising.

The company recently opened an archive of political adverts and now requires buyers to identify themselves, but does not appear to carry out robust checks on those identities. Before the recent US mid-term elections, Vice News applied to advertise on behalf of all 100 senators and was approved every time. An application supposedly on behalf of Islamic State was also approved – although one for Facebook founder Mark Zuckerberg was refused.

“Facebook is designed for a better species than ours. We are a vengeful species, a lustful species, a shallow species”
Siva Vaidhyanathan, University of Virginia

Vidya Narayanan, a researcher for Oxford University’s Oxford Internet Institute, said Facebook and other social media provide a platform to people who have lacked one, particularly in countries such as India. “It is an empowering thing, as maybe their voices have been suppressed in the past,” she said, speaking after the event.

Because of this, Narayanan warned against heavy-handed controls. “It is important not to over-regulate, because I strongly feel these platforms offer people a means of expressing their thoughts and opinions and a way of being connected with the outside world,” she said, such as for older people and others who are physically isolated. If there is to be regulation, it should recognise those benefits and look to maintain them, she added.

But there are problems, and newer social media services – such as the encrypted messaging system WhatsApp, which is owned by Facebook – may coarsen discussions even further. The Oxford Internet Institute tracked political discussions in Brazil on WhatsApp in advance of October’s presidential election, in which far-right candidate Jair Bolsonaro beat left-winger Fernando Haddad.

Researchers joined groups, announced themselves and only stayed if no one objected. They recorded the images, videos and links to news sources shared within the group, rather than any personal data. WhatsApp groups are private and Facebook does not monitor what is shared. “There is a lot of freedom to express yourself in any way that you deem fit, and we’ve seen some pretty dire content in some of these groups,” said Narayanan, adding that the researchers developed new categories for classification – including “hate, gore and porn”.

The ongoing research has found material created exclusively for WhatsApp, including memes and jokes about politicians and ideas, with the same items sometimes appearing in a number of groups. “It is hard to track where these messages originate,” she said.

The institute also looked at data from Twitter accounts in Brazil, and found polarisation between supporters of Bolsonaro and Haddad. “There was almost no middle group,” said Narayanan. Facebook was not included in the research, as it largely refuses to co-operate with academics.

Tools and humans

The profound influences that tools have on their users were discussed by Nigel Shadbolt, principal of Jesus College, Oxford and co-founder of the Open Data Institute, in another IF Oxford event linked to the one on social media.

Recent discoveries have shown that hominids had been making tools for 200,000 generations before Homo sapiens developed, he said. “These tools allowed the species to master its environment; they also changed everything from the fine motor control in hands and fingers to the cortex. It is also thought that it drove other functions, such as sociability and language development.”

Research also suggests that the development of more intricate tools activates more parts of the human brain, said Shadbolt. “We often think that we made our technology. But our technology made us, and is continuing to make us.”

So what might provide answers? In the same Augmented humanity event at which Shadbolt spoke, Helena Webb, a senior researcher at Oxford University’s department of computer science, outlined two research projects. The first, Digital Wildfire, a collaboration between Oxford, De Montfort, Warwick and Cardiff universities that ended in 2016, looked at how social media’s tendency to spread harmful material could be curbed.

“Sometimes the way in which people respond can be just as inflammatory as the original post”
Helena Webb, Oxford University

It found that legal measures and controls by social media platforms were limited in two ways: time-lags between publication and removal gave harmful material time to have an impact, and both focused on individual pieces of content or individual users rather than a wider “digital wildfire”.

The research found greater benefits from self-governance. Webb said social media users may be wary of replying to someone expressing inflammatory views. “A lot of the time, the assumption is that the person is doing it for attention,” she said. “If you reply to them and give them attention, that’s giving them what they want.” However, ignoring material also means it goes unchallenged, she said.

The research analysed data from Twitter on challenges to sexist, homophobic and racist comments. It found that when one person responded, an online discussion developed. “The hateful content is spread as the conversation continues,” said Webb, but something else happens when a group gets involved. “The conversation shuts down more quickly if you have multiple people coming in to disagree,” she said.

Although this did not show any change in users’ opinions, it did represent users self-regulating such content, she pointed out.
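To picture the kind of comparison involved – purely as a hypothetical sketch with dummy numbers, not the institute’s actual analysis or data – the relationship between how many distinct users challenge a comment and how long the thread carries on could be summarised like this:

```python
from statistics import mean

# Hypothetical records: (distinct_users_challenging, further_replies_in_thread).
# Dummy numbers for illustration only, not the Oxford Internet Institute's data.
threads = [(1, 40), (1, 35), (2, 12), (3, 8), (5, 4)]

def compare_continuation(records, group_threshold=2):
    """Average how long a conversation carries on after a lone challenge
    versus a challenge from several users."""
    solo = [replies for challengers, replies in records if challengers < group_threshold]
    group = [replies for challengers, replies in records if challengers >= group_threshold]
    return mean(solo), mean(group)

solo_avg, group_avg = compare_continuation(threads)
print(f"Lone challenge: {solo_avg:.1f} further replies on average")
print(f"Group challenge: {group_avg:.1f} further replies on average")
```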

Webb said platforms could encourage this process, but it is not without problems. “Sometimes the way in which people respond can be just as inflammatory as the original post,” she said. In some cases, multiple negative responses turn into a “pile-on” – a deluge of condemnation that may see the person who made the first comment professionally damaged.

Combating bias in algorithms

Webb is now working on UnBias, a collaboration between Nottingham, Oxford and Edinburgh universities examining how to free users from algorithmic biases. One of the phenomena the project is examining is “filter bubbles”, where algorithms show users material that they tend to agree with, arguably amplifying their existing biases – although Webb said it is very hard to trace their effects in a systematic way.

Even so, she said social media platforms could add functions that help users access alternative views, such as a slider control that increases the proportion of posts from accounts they do not follow.
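A minimal sketch of how such a slider might work, assuming a hypothetical feed drawn from two pools of posts – no platform currently offers this control:

```python
import random

def blended_feed(followed_posts, outside_posts, slider, size=20):
    """Mix a feed according to a user-controlled slider.

    slider=0.0 shows only accounts the user follows;
    slider=1.0 fills the feed with posts from accounts they do not follow.
    """
    n_outside = round(slider * size)
    n_followed = size - n_outside
    feed = (random.sample(followed_posts, min(n_followed, len(followed_posts)))
            + random.sample(outside_posts, min(n_outside, len(outside_posts))))
    random.shuffle(feed)
    return feed

# Example: nudge a quarter of the feed towards unfamiliar accounts.
# feed = blended_feed(my_followed_posts, everything_else, slider=0.25)
```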

Tackling other kinds of bias is fraught with difficulty. In 2016, Google’s top result for “Did the Holocaust happen?” changed from a far-right site denying that it took place to a page from the US Holocaust Memorial Museum. Google told Search Engine Land that this was due to across-the-board improvements to its algorithm that preferred “high-quality, credible content”.

Webb said there is a tension between search engines reflecting the world as it is, even if that legitimises stereotypes and prejudice, and “correcting” for how they think the world should be, which means deciding what information is credible and whether they should intervene. One problem is that such companies tend to have left-of-centre staff: Facebook employees in the US made 87% of their political contributions to Democrats and just 13% to Republicans, while staff employed by Google’s parent Alphabet made more than 90% of such donations to Democrats.

Webb highlighted other problems, such as where to stop. “If they change search results relating to the Holocaust, are they then required to do so for all instances of antisemitism or do they need to somehow develop a scale of ‘seriousness’ and only act in the more serious cases?” she said. Another issue is that such action could mean users perceive problems to be less serious than they really are.

A better option could be the algorithmic equivalent of the Fairtrade certification given to producers that pay farmers and suppliers agreed minimum prices. This could involve a trusted, neutral authority examining algorithms and publishing assessments, while the actual algorithms, which technology companies see as valuable intellectual property, stay secret.
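One way to picture the idea: the certifying body publishes a structured assessment rather than the code itself. The fields below are hypothetical, a sketch of what such a public report might contain:

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmAssessment:
    """A public report a neutral certification body might publish.

    The audited algorithm stays secret; only the outcome of the audit
    is disclosed, much as Fairtrade certifies a supply chain without
    publishing the contracts behind it.
    """
    service: str
    criteria_checked: list[str]                        # e.g. bias tests run, data sources reviewed
    issues_found: list[str] = field(default_factory=list)
    certified: bool = False
```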

Read more about social media and culture

  • What the ICO’s Facebook fine teaches us.
  • Collaborative culture key to enterprise social media software adoption.
  • With intelligent policy architecture, social media security doesn’t have to be a CIO nightmare.
  • Photo story: Facebook celebrates its fifth birthday, in 2009.

In her IF Oxford talk, Marina Jirotka, professor of human-centred computing at Oxford University, outlined how such “responsible innovation” could be part of the answer for technology organisations. She said it involves being prospective – carrying out a risk assessment that covers unintended negative consequences – and inclusive, involving those who will be affected in advance of implementation.

“We need some form of agile anticipatory governance,” said Jirotka, speaking after the event. “We can do this kind of foresight work. Legislation is not good at doing foresight work and is not good at working with innovations it doesn’t understand.”

Research-focused organisations, including universities and healthcare providers, often have ethics committees – and tech companies could follow suit, she said. Other options include ethical codes of conduct, whistleblowing systems that allow employees to raise concerns with independent outsiders, and training.

Aviation has particularly strong lessons to teach, with its “black box” flight and voice recorders and processes for learning from disasters: “They come to some kind of public, accountable and explainable conclusion,” said Jirotka. “The public have come to trust that.”

A key point for algorithms is the ability to know why a system has done something, she said: “For anything that impacts you, you should be able to discover how it has come to that decision.”

This may not need publication of algorithms, which may not be comprehensible anyway, she said, adding: “You need a proper explanation of how a system came to its conclusion.”
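As a toy illustration of that principle – with entirely invented rules and thresholds, not any real system – a decision function can return its reasons alongside its verdict:

```python
def decide_application(income: float, existing_debt: float) -> tuple[bool, list[str]]:
    """Return a decision together with the reasons behind it, so the person
    affected can see how the conclusion was reached. Thresholds are invented."""
    reasons = []
    approved = True
    if income < 20_000:                      # invented threshold
        approved = False
        reasons.append("Income below the 20,000 minimum")
    if existing_debt > 0.5 * income:         # invented threshold
        approved = False
        reasons.append("Existing debt exceeds half of income")
    if approved:
        reasons.append("All checks passed")
    return approved, reasons
```

The explanation is attached to the outcome itself, rather than requiring the underlying code to be published.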

And while Facebook may not offer such explanations, the IF Oxford discussions suggest that technologists can learn from the company’s flaws to become more responsible in developing services.
