One of the common concerns about identity-related technologies is the potential for abuse of privacy, and for function creep of the identity system itself: mechanisms designed to support authentication end up being used to hoover up personal data about the user’s interactions with relying parties, and so pose a greater threat to privacy than the alleged security problems they were originally intended to resolve.
Of course it doesn’t have to be that way: systems designed around technical, legal and procedural mechanisms which protect, rather than undermine, privacy can be privacy-preserving rather than invasive. This is one of the key philosophies of Privacy by Design, which recognises that good security, good identity and good governance can enhance, rather than degrade, users’ privacy.
With this in mind, a team of volunteers has been working with the Government Digital Service (GDS) as the snappily-titled “Identity Assurance Programme Privacy and Consumer Advisory Group” (IAPPCAG), which provides expert advice and a sounding board for GDS and participating government departments. Its role is to develop and test a set of design and operation principles intended to ensure that the Identity Assurance Programme adheres to strict criteria to respect users’ privacy: in short, to ensure that it doesn’t ‘go off the rails.’ The Group includes the likes of No2ID, Privacy International, Which?, the London School of Economics, the Oxford Internet Institute and Big Brother Watch, and I’ve been fortunate to sit on it since its inception.
Yesterday IAPPCAG released the latest version of the Identity and Privacy Principles. These nine criteria will guide the development and delivery of the Identity Assurance programme, and whilst we acknowledge that they will need to evolve in response to changing needs, we believe they provide a firm foundation on which to build user trust and respect. The principles, which are explained in detail on the GDS blog (where you can also comment on them), are:
1. The User Control Principle: Identity assurance activities can only take place if I consent or approve them.
2. The Transparency Principle: Identity assurance can only take place in ways I understand and when I am fully informed.
3. The Multiplicity Principle: I can use and choose as many different identifiers or identity providers as I want to.
4. The Data Minimisation Principle: My request or transaction only uses the minimum data that is necessary to meet my needs.
5. The Data Quality Principle: I choose when to update my records.
6. The Service-User Access and Portability Principle: I have to be provided with copies of all of my data on request; I can move/remove my data whenever I want.
7. The Governance/Certification Principle: I can trust the Scheme because all the participants have to be accredited.
8. The Problem Resolution Principle: If there is a problem I know there is an independent arbiter who can find a solution.
9. The Exceptional Circumstances Principle: Any exception has to be approved by Parliament and is subject to independent scrutiny.
Of all of these, perhaps the most challenging principle for government will be the last one, particularly in light of the PRISM revelations (with doubtless more to follow) and the hubris around censoring adult content. Will there be an appetite for true transparency and accountability in those situations where some degree of privacy is compromised in the interests of national security or user safety? That will be an acid test of whether the UK is on course to become a true digital economy, or is just paying lip service to online rights.
I hope to be discussing the principles further at the next Open Identity Exchange meeting in London on 2nd July. If you want to add to the debate, then do join us.