Will Advertisers and Insurers hold Social Media Platforms to Account after Governments fail?

The Shareholder Backlash

Most digerati appear to be in a state of denial over the scale and nature of the damage being done by the abuse of social media. It is fuelling family breakdown, mental health problems and violence, particularly in our inner cities. They may, however, be correct in saying that current political and regulatory efforts to address the problem will not achieve their objectives. The delay in implementing UK legislation on Age Checking, in the face of opposition from those whose business models are based on intrusive mass data collection, as well as from those promoting anonymous access services, shows how easy it is to mislead officials and politicians over the impact of proposals that would enhance, not invade, the privacy of the vast majority of users, particularly the most vulnerable.

Advertisers and, more importantly, shareholders are not so easily fooled.

The dips in share price last year, when luxury brands began pulling adverts that might appear alongside ISIS videos and risk their Middle Eastern sales, were short-lived. So too were the dips which accompanied the original revelations of the scale and nature of the fake news/clickbait “industry”. In both cases, assurances about the “technical” measures being introduced appear to have been sufficient.

Not so the 20% fall in the Facebook share price as complaints grew that family-friendly content and adverts were being used to promote child abuse and drug gang messages, not “just” fake news about politicians and celebrities. Assurances of action have so far proved insufficient in the face of evidence of their practical ineffectiveness, particularly with regard to addressing abuse of its platforms aimed at teenagers, such as Instagram and WhatsApp.

The arguments received a very public twist when the DCMS Select Committee used forgotten powers to seize evidence of “intra-US misbehaviour” for use in its report on Disinformation and Fake News. That report refers to the Ledger of Harms produced by the Center for Humane Technology (“loss of attention, mental health issues, confusions over personal relationships, risks to our democracies and issues affecting children”) but focussed on the harms supposedly caused by targeted misinformation in support of political campaigning. Hence the committee’s proposal for a new regulatory category for organisations that are neither “platforms” nor “publishers”. But can organisations which claim ownership of the content placed on them (including records of usage and location), and which analyse it, sometimes in real time, to target advertising, really claim “innocent carrier” status at all, let alone avoid responsibility under common law and tort for failing to take rapid action to enforce their terms and conditions when warned that abuse is leading to harm, suffering and death, whether on the streets or in the bedroom?

The Twitter share price has been hit harder, now down over 30%, as it becomes a medium of choice for some of the more prolific abusers, whether of truth or of the vulnerable. It has yet to find a way of reassuring advertisers that their brands are not at risk.

The Google (Alphabet) share price has been more resilient, down barely 10%. So far its efforts to monitor abuse have seemed more credible to advertisers. But it remains to be seen how it will respond to the loss of well-known clients for YouTube advertising. In particular, will it be able to reassure AT&T? We can expect it to step up efforts to use technology to identify and remove abusers. Will it feel a need to go further and take a lead in helping drain the swamp, including by organising systematic co-operation to help advertisers prevent their images from being contaminated by indiscriminate pay-per-click automated adtech placement algorithms? Will it feel the need to go further still, in co-operation with AT&T and Verizon, to restore faith in the safety and security of the on-line world? If so, where will that leave the current “division of labour” between the Internet Community and the ITU?

The Public Backlash

Meanwhile parents and teachers, picking up on what they know about the hidden fears of their children and pupils, are now well aware that the on-line world needs at least as much supervision as physical playgrounds and streets. We have long had anecdotal evidence of the scale of parental concern over some of the apps being used to stalk their children. We are now seeing evidence of the fears felt by children themselves and of what they would like to see in response.

We can also see the debate over on-line harms spilling over into the way social media are being used to create a climate of violence and fear on our streets. We are also beginning to see constructive thought on how the processes actually work and what we might do in response.

Last Saturday I attended a meeting for local residents on violence reduction. It was attended by the constituency MP, the GLA member and three cabinet members from the council (in this case Lambeth). It was standing room only. I had expected some of what I heard. I did not, however, expect the extent to which social media were blamed for contributing to the problems, nor the anger at their ongoing failure to use their profits to become part of the solution. Perhaps the saddest contribution came from a teacher describing the effect on family life. She mentioned a contribution to an early years discussion on “what do you want to be when you grow up”: “I want to be an iPhone, so my Mummy will speak to me.”

At the teenage level, the use of social media to video and brag about violence and/or issue challenges is said to be a major factor in the escalation of local disputes into open warfare. It is also said to be impossible to get rapid and effective action taken by those operating the relevant “platforms”. The resultant bitterness against those who make so much profit while driving local shops out of business and denying responsibility for their actions has to be seen to be believed.

I had not previously appreciated how much of this bitterness is linked directly to the systematic use of social media by drug gangs (not just ISIS or pederast rings) to groom their audiences with the mix of fear and exhilaration that will promote their product, at the same time as helping them control their neighbourhoods and supply chains.

In the physical world you can call Crimestoppers and report where and when the action is happening. What happens when you try to report abusive content to Facebook, Google or Twitter?
The social media giants claim they are innocent carriers, but their claims to own the data posted by their users, and their analysis of transmissions to facilitate precisely targeted advertising, indicate clearly that they are publishers (and more), with all the duties that entails.

It appears to be only a matter of time before crowd-funded class actions on behalf of the victims of bullying, abuse and violence cause them to change their business models.

So what could/should be done by those who wish to expedite that process without destroying the undoubted benefits of advertising-funded open internet access?

Pre-empting the damage the backlash could cause

1) Implement Age Verification to PAS 1296

The first and most obvious action is to expedite implementation of the UK Age Verification legislation passed two years ago. The members of the new Age Verification Providers Association spent £millions to get effective systems operational in time for the original deadline. Moreover, the technologies they had in mind to help with enforcement (based on those already used by Hollywood and the music industry for copyright enforcement) could also be used to address other forms of on-line harm.
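The copyright-enforcement analogy is, at heart, automated content matching against shared lists of known material. The sketch below is a deliberately simplified illustration of that flow, not any vendor's actual system: real deployments use perceptual fingerprints that survive re-encoding, whereas this uses an exact SHA-256 hash, and the list contents and function names are hypothetical.

```python
import hashlib
from pathlib import Path

# Illustrative only: exact hashing stands in for the perceptual
# fingerprinting used in real copyright/abuse matching systems.
KNOWN_BAD_HASHES = {
    # In practice these would be loaded from a shared industry hash list;
    # this entry is a placeholder (the hash of an empty file).
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495e991b7852b855",
}


def sha256_of_file(path: Path) -> str:
    """Hash an uploaded file in chunks so large videos do not exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def should_block(upload: Path) -> bool:
    """Return True if the upload matches the shared list of known material."""
    return sha256_of_file(upload) in KNOWN_BAD_HASHES
```

The point is the workflow rather than the hashing: once platforms maintain and share such lists for copyright, the same pipeline can, in principle, be pointed at other categories of known harmful content.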

The reasons for delay do not stand up to examination and are doing serious harm to UK-based business. They are in part based on misinformation from those who do not want to see the spread of effective, low-cost, anonymised access technologies. The access provider or retailer need only ever know the name and ID number and/or biometric of the individual. In some cases even the service provider knows only the age or other relevant attributes (e.g. a membership or security clearance). Everything else is stored behind further layers of one-way encryption on the files of third parties, e.g. health, education or financial services providers, who supply only extracts. Such a privacy-centric approach fits, however, neither mass surveillance nor big data agendas. Hence the publicity for misinformation based on services which do not use data-minimised implementations of PAS 1296 (on its way to becoming a global standard).
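To make the data-minimisation point concrete, here is a minimal sketch of an attribute-only age check. It is illustrative Python, not a PAS 1296 implementation: the function names are hypothetical and a shared-key HMAC stands in for whatever signature scheme a real provider would use. The verification provider holds the evidence; the relying service receives only a signed yes/no answer.

```python
import hashlib
import hmac
import json
from datetime import date

# Placeholder secret held by the (hypothetical) age verification provider.
PROVIDER_SIGNING_KEY = b"demo-key-held-by-av-provider"


def issue_age_attribute(date_of_birth: date, minimum_age: int = 18) -> dict:
    """Run by the verification provider: checks the evidence it already holds
    and returns only the attribute ('over_18': True/False), signed so a
    relying service can trust it without ever seeing the date of birth."""
    today = date.today()
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    claim = {"attribute": f"over_{minimum_age}", "value": age >= minimum_age}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(
        PROVIDER_SIGNING_KEY, payload, hashlib.sha256
    ).hexdigest()
    return claim


def relying_service_accepts(claim: dict) -> bool:
    """Run by the retailer or platform: verifies the signature and reads the
    yes/no answer. No name, ID number or biometric ever reaches this side."""
    signature = claim.pop("signature", "")
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(PROVIDER_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected) and claim["value"] is True


# Example: the provider sees the evidence, the service sees only the result.
assertion = issue_age_attribute(date(1990, 5, 1))
print(relying_service_accepts(assertion))
```

The design choice that matters is the shape of the message, not the cryptography: the relying service can be audited to show it never handled anything beyond the single attribute it needed.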

Early, well-publicised adoption of such implementations, audited against the PAS, without waiting for the legislation to come into force, is not only a good way of demonstrating family-friendly credentials; it can also greatly reduce your potential liabilities under GDPR if allied to an overall data minimisation strategy.

2) Work with Law Enforcement, Crimestoppers and Child/Mental Health/Victim Support and Welfare Charities on expedited and/or automated/collated reporting and response processes

The current confusopoly of victim-hostile, labour-intensive reporting processes is unfit for the internet age. Those involved have neither the expertise nor the resources to sort out the situation. That will require action by those who want the rest of society to have confidence in the safety of themselves, their children and their grandparents in the on-line world. Moreover, until we have created processes for effective co-operation between industry and law enforcement, the responses will have to come from industry under a mix of civil and contract law.

That raises the question of whether the organisation is more concerned with avoiding legal liability or with enhancing consumer confidence (and profitability). It will divide industry between those whose “conscience” and/or “appetite for risk” is driven by the legal department and those where it is driven by the finance director.

3) Find ways of using your expertise and resources to plug the gaps in those of Law Enforcement

It is now over a decade since the EURIM-IPPR study into Partnership Policing for the Information Society identified that law enforcement would never have more than a fraction of the resources and expertise necessary to police the on-line world. The need was for processes to enable the police to draw on those of industry, including (but not only) as warranted special constables.
Since then there has been little, if any, progress in implementing the recommendations. We may actually have gone backwards. The current Metropolitan Police criteria for Special Constables appear to preclude almost all those professionally involved with Information Security.

It is unclear whether change requires primary or secondary legislation, or whether this is a local requirement, but action should be a core part of the National Cybersecurity Skills Strategy as well as of any strategy to address the on-line safety of the rest of society.
