How do we steer the future of the internet for the good of humanity? At a time of growing public concern over the potential of new technologies to control, coerce and displace people, it’s a question that the web’s inventor, Tim Berners-Lee, believes is more urgent than ever. And part of the answer is about ensuring everyone in the IT profession, at every level, understands the ethical implications of their decisions and actions.
In November 2019, the World Wide Web Foundation (the organisation Berners-Lee founded in 2009 to “ensure the web serves humanity”) stepped up efforts to do just that with the publication of its Contract for the Web – a set of nine principles outlining what governments, companies and citizens need to do to ensure the web develops in “the way most people want” – bringing people together, improving lives and allowing us to share knowledge and experiences, whoever and wherever we may be.
The Web Foundation convened legislators, academics, IT professionals and other stakeholders to thrash out the principles over the course of a year. The contract’s 76 clauses cover everything from road-digging disruption to radio spectrum allocation, but the broad thrust is about maximising open access and interoperability, championing data privacy, empowering users and ensuring that the web is a level playing field for all comers.
The principles draw on existing frameworks like the UN Declaration of Human Rights, the EU’s General Data Protection Regulation (GDPR) and the OECD’s Guidelines for Multinational Enterprises. Of most relevance to tech companies and IT professionals are principles 5 and 6: “respect and protect people’s privacy and personal data to build online trust” and “develop technologies that support the best in humanity and challenge the worst”.
Among those to have endorsed the initiative so far are the governments of France, Germany and Ghana, as well as several hundred companies, including tech giants Google, Microsoft, Facebook and Twitter. “The fact that so many different parties have put their names to it is a milestone,” says Emily Sharpe, director of policy at the Web Foundation.
Bill Mitchell, director of policy at the BCS, the UK’s professional body for IT, cautiously welcomes the ethical aims of the contract, but notes that a number of its big-name signatories currently pursue policies that directly contradict those aims. “Tech companies have done a lot of great things, but they seem to be saying everyone should just trust them and democratically elected governments don’t really have a role,” he says. “Regulation may or may not be the answer, but at the moment it seems there’s a big question about who’s making the rules.”
Mitchell thinks the contract won’t have any significant impact until signatory companies can clearly show they are taking the notion of digital and data ethics seriously. “And how will we know they’re taking it seriously?” he says. “When they’re willing to be held accountable by an independent body. Only when we see that will we know they’re being ethical in the true sense.”
The Web Foundation’s Sharpe concedes that accountability needs to be a major focus in the coming year. “For example, the US needs to pass a comprehensive data protection and privacy law immediately, which is one area where it lags behind Europe,” she says. “There also needs to be more co-operation from tech companies around issues like disinformation and hate speech.”
While it is clear that authoritarian regimes such as China and Russia have very different ideas of internet governance to the West, Sharpe believes the US – which has traditionally been wary of data privacy regulation – is coming round to the idea that the European model is the right one going forward. “At a recent internet governance forum in Berlin, there was strong feeling in the room that the US, Europe and other democratic countries need to get together and fight for the web that we want, lest it go in that authoritarian direction that some other countries may be leading,” she says.
But if the Contract for the Web is to succeed in driving the necessary ethical practices among signatories, there needs to be more clarity about how to translate its lofty principles into concrete actions that genuinely change behaviour. “We don’t want this to be a document that sits on the shelf – it needs to be a living, breathing thing that companies and governments actually use,” says Sharpe.
As such, developing the principles and securing prominent signatories was just the first step. “We’re now undertaking a mapping exercise where we’re examining the 76 clauses and looking to point to best practices, standards, frameworks and other efforts that give more detailed guidance,” says Sharpe.
The Web Foundation hopes to complete that mapping exercise within the first few months of 2020. “Simultaneously, we’ll continue to advocate for adoption of the contract and build out more specific frameworks for IT professionals in some of these areas – for example, developing a template policy that companies can plug-and-play internally to deal with data and other ethical issues,” says Sharpe.
“While we might not develop that ourselves, we’ll certainly look to other organisations through that mapping exercise and create a public database of resources for companies and governments.”
The BCS, meanwhile, has already been working to firm up some of the steps organisations can take to embed ethical data and technology practices. Mitchell says: “The big issue around ethics isn’t the principles – everyone can sign up to those. It’s how you enable different teams collectively both to have the right kind of governance and to make things work in practice.
“We’ve been in lots of roundtables and consultations with employers and there’s clearly a need for interdisciplinary teams that can behave ethically across three different phases of IT – the science phase, where new technologies are developed; the engineering phase, where you build, integrate, deploy and maintain them; and the management phase, where you have to make sure adoption fits in with business strategy. In every one of those phases, ethical issues can occur.”
But, critically, companies need to focus on the overlap between the boundaries of these different phases, says Mitchell. “For example, if you plug a machine learning model into a datastream that wasn’t designed for that model, it may start making inferences and decisions that negatively affect people’s lives,” he says. “Is that the fault of the science team that developed the model, the engineers, the people who did the data mining, or the business people who didn’t understand the problem they were trying to solve?”
There is still a lot of work to do to answer these questions, but Mitchell says clear ethical frameworks need to cover the entirety of the product lifecycle. “You can’t do this as a tick-box compliance exercise where each person signs off their little bit to say ethical concerns are sorted because when you put them all together, the system overall may not work as it’s meant to,” he says.
“Different teams need to understand the big picture. What are the business values you need to adhere to? How are those going to be collectively managed across different teams? Managers need to understand how ethical concerns will be reflected in the different components of the product lifecycle and make sure all the individual members of their teams feel ethically responsible for the whole product, not just their own little piece.”
“To do that effectively, so you can embed ‘ethics by design’ into everything IT does, your people need to be well-versed in data ethics and be able to communicate effectively both within and across different teams, including upwards,” says Mitchell. “You need individuals who are genuinely ethical, trustworthy and professional, which probably means giving them some formal training in data ethics principles. Then you need to put in place a governance structure that holds them to account, but also gives them the autonomy to put those ethical principles into practice.”
For example, in a survey of members, the BCS found that while many companies stated their ethical values, these were often neglected by team members under pressure from management to meet project deadlines. “You need to ensure your governance processes empower staff to proactively engage with their own senior managers and prioritise doing the right thing even if that might have some negative financial effects,” says Mitchell.
There are already a number of ethical frameworks and guidelines emerging that can help you design effective strategies and processes, including the ODI’s Data Ethics Canvas, the UK government’s Data Ethics Framework and the acclaimed work that the Singaporean government has been doing on AI ethics. However, Mitchell says that although these are undoubtedly useful, the single most significant thing an organisation can do to start embedding an ethical culture is to make their teams more diverse.
“Teams need a spread of people from different ethnic minorities, nationalities, genders and backgrounds,” he says. “That helps you break through the silos and force people out of their little bubbles to consider different perspectives. Everybody we speak to that’s trying to be more ethical says this works better than anything else.
“When people start to realise that others in the team see things differently to them, they begin to appreciate that ethical issues around technology need to be thought through more carefully. That has a massively positive effect on the ethical outcomes we all want to deliver.”