The story of how Cambridge Analytica used Facebook data, collected by an academic intermediary, to supposedly help Leave “steal” the UK referendum and Trump “steal” the US presidential election can be read at many levels, raising many different questions.
- What was so offensive/different about the Facebook processes for encouraging us to share our personal data (so that it can be made available to advertisers, researchers and others to target their messages), while making its privacy settings unintelligible to anyone other than a Californian geek?
- What was so offensive/different about the conduct of an academic conducting large-scale psychometric research which he proceeded to turn to commercial advantage, in ways which the “Big Data” institutes of many of the world’s leading universities (including Oxford, Cambridge, Southampton and UCL) are exploring, encouraged by bodies like the Open Data Institute?
- What was so offensive/different about the behaviour of Cambridge Analytica, compared to those who ran similar exercises to help President Obama get elected, the EU- and Government-supported Remain campaigners to fight a gaggle of Brexiteers, and/or Momentum to turn a snap election into a bloody nose for those who called it?
The BIG question, however, is whether this is a storm in a teacup, a minor hiccup in an inexorable march towards the dictatorship of the sysadmins, or a watershed – with those who control our access to the on-line world (including over 60% of all traffic over the Internet) about to be brought to heel?
But, if so, who is about to bring them to heel? Governments (whom their lobbyists can buy) or advertisers (whom they have to persuade to keep spending)?
One of the delightful ironies of this controversy is that it took place during the 101st meeting of the Internet Engineering Task Force in London. The IETF comprises the 15,000–20,000 engineers who maintain the building blocks that enable the global inter-operability on which the Internet depends. 1,200 of them were gathered in London this week – and no journalists asked for their views. In consequence, their one “political” meeting, a discussion in the House of Lords on “How to build trust between politicians and engineers in tackling the security (or otherwise) of the Internet of Things”, was able to have a quiet and constructive discussion. That meeting was organised by the Internet Society (the nearest the IETF has to an oversight body), ICANN (which looks after the addressing system) and the Digital Policy Alliance.
That meeting identified that the problems arise with the processes used when the building blocks are assembled into applications, increasingly in ways that no-one foresaw. The alleged abuses of social media data for political ends are a good example. So too are the weaknesses that allow the spoofing of addresses, facilitating impersonation and complicating enforcement. But neither the IETF nor ICANN has policing powers, and it would be unrealistic to expect the Internet Society to try to boycott those who do not follow good practice – especially since the latter can afford to fund the best lawyers and lobbyists that money can buy. The meeting was attended by the chairs of both the Internet Engineering Task Force and the Internet Society. The panel for the discussion included the Chairman of the IETF Internet Architecture Board, the Chief Internet Technology Officer of the Internet Society, the Chief Executive Officer of ICANN and Baroness Neville-Jones (former Security Minister and chairman of the DPA Cybersecurity group). The main achievement of the meeting was to leave the audience with a shared recognition of the problems, and of the need for ongoing dialogue between policy groups and those who keep the Internet running, in order to improve the government policies and the regulatory, certification, quality control and enforcement processes within which the product and service engineers, corporate lawyers and salesmen and their employers operate.
That dry and cautious conclusion provides an essential backdrop for public discussion about what it was that Facebook, Cambridge Analytica and their academic fall-guy did differently (if anything at all), and why “the roof fell in” after a sting operation akin to those offered to the undercover reporters by the CA CEO.
I will conclude with a few more questions:
1. Should the current furore be seen as propaganda for the supposed power of “Big Data” analysis to target online advertising/messaging and, inter alia, persuade a “gullible” electorate? Or is it a set of excuses to enable an introverted and complacent establishment (one that lost touch with the electorate) to call for a rerun of history?
I alluded to this argument in my blog on Brexit as an existential threat to the new UK “establishment”, but Dominic Cummings makes the argument much better, using inside knowledge of who did what. He concludes that the story is a set of convenient excuses for those who lost the trust of the electorate. Unfortunately, misplaced belief in that power is not only expensive for corporate advertisers and lucrative for high-tech snake oil salesmen, it is politically dangerous. It is akin to the belief of many Germans that the Jews were to blame for the collapse of their armies on the Western Front at the end of 1918. That led directly to World War 2. Will belief that Trump or the Russians stole the US election, or that the Brexiteers stole the UK referendum, lead us down a similar road? Did the Prime Minister lose her majority in 2017 because the snap election caught no-one save Conservative Office by surprise? Or was it stolen by student activists using social media to organise massively duplicated on-line voter registration and exploit a postal voting system that “would disgrace a banana republic”? Was the use of Facebook for political campaigning as mundane as, for example, paying .67p for a targeted advert seen by 87 people – albeit on a much larger scale? Or was it really more sinister?
2. How many other “academic researchers” have provided plausible deniability for similarly egregious data collection from, for example, UK NHS and US Veterans Administration patient records?
This question goes to the heart of Big Data governance models and the idea that academic researchers are somehow more virtuous than, for example, pharmaceutical or insurance companies. The latter take security, privacy and data protection very seriously, because leaks cost commercial advantage and may even destroy the business (if a rival patents compounds before a decision to incur the cost of safety testing). It is not “just” a matter of fines. Meanwhile, academics have a tendency to believe that their life-saving (or reputation-making) research is more important than, for example, the privacy of vulnerable patients who might be of interest to the state or the media (for whatever reason).
3. Who owns my data? Me? The state? A faceless corporation that persuaded me to give them uninformed consent in return for access to an app? Which of them, if any, is serious about protecting my data from those who really might be a threat to me?
Emily Sheffield in the Evening Standard summarises well our inability to understand the privacy settings of services which make it all too easy for us to give permission for sharing with those we have never heard of. She also raises some killer questions for both those who believe in the value of Big Data and those who believe in the importance of blanket data protection (as with the GDPR) as opposed to genuinely informed choice and targeted action against those guilty of serious abuse.
4. Which of us actually cares about protecting the lies, half-truths and exaggerations we post about ourselves on social media?
Back in 2008 I was a UK representative at the workshop in Bled which led to the EU ETICA Project. The conclusions of our workshop on the ethics of public service delivery were edited out of the final report, but I covered some of them in a blog written about the time the project came to an end. In the course of the workshop I came to appreciate that those from Eastern Europe found it hard to comprehend the Western naivety with regard to those claiming to want to help them, including those wanting to do so on-line. They had a simple approach: never, ever tell the truth on-line unless there is a very good reason to do so. Sooner or later all that you do on-line will be collated and used against you. You therefore create a “safe” persona (or more than one) and use them for all on-line transactions that you cannot do anonymously from an Internet café. Today I avoid investing in those who pay for on-line pop-up adverts for products I have just bought or hotels in towns I recently visited. I also avoid sites where response times are clogged with data collection spyware, and avoid search engines where what I want to find is hidden among promoted results.
5. Why was harvesting Facebook data (and other big data analytics exercises to target voters) praiseworthy when used to help elect Obama, but not when used to elect Trump?
This is a bit like asking why the dirty tricks and lies used to get John F Kennedy elected and to prepare for his re-election were unacceptable when used by Richard Nixon.
6. Will any Facebook fine go into the ICO’s budget – whether as an ad hoc top-up or as part of a switch to using “the proceeds of crime” to fund Internet law enforcement – including by the Trading Standards officers who are in the front line alongside the Information Commissioner and the City of London Police?
A billion-pound fine on Facebook for the processes which allowed its data to be harvested would transform the Information Commissioner’s ability to do her job and set a welcome precedent for raising the funds for effective law enforcement action, at all levels, to tackle on-line crime and misbehaviour. It would also have a very powerful motivational effect, causing service providers to take more robust action to enforce their contractual conditions and to reduce the risk of exposing third parties who might take legal action against them.
7. What, if any, are the roles of the IETF, ICANN and the Internet Society in using the opportunity to help address some of the taboo ways in which the Internet can be manipulated, whether by adding new security building blocks, promoting the use of those that already exist, or informing politicians and regulators so that the latter can enforce better practice in assembling them into products and services?
The debates will run and run. There is something for every conspiracy theorist, technology enthusiast and luddite.
Those who are serious about finding a constructive way forward should join the Digital Policy Alliance and/or the Internet Society and help organise well-informed follow-up debate. But that will require honest enquiry and rational argument about the balance of risk in a world of evolving technologies and differing values. And honesty and rationality are all too rare in political debate. And the hoi polloi understand balance of risk (as in betting odds, and having a flutter at impossible odds on the lottery as a way of keeping hope alive) rather better than the intelligentsia.