I spoke today at a Social Market Foundation event on biometrics. The keynote was Prof James Wayman, who was exceptionally fluent and interesting on the topic, and I was pleasantly surprised to see him talking openly about the abilities and limitations of biometric technologies.
Biometric technologies are one of those ‘lightning rod’ topics that quickly polarise people into the ‘for’ and ‘against’ camps. It’s difficult to say exactly why this is, but much of the problem probably rests in dystopian science-fiction visions like those of Brazil or Minority Report, which blur the reality of the technology with imagined possibilities.
I’m personally not too concerned about the application of biometric technologies in appropriate situations. What worries me are the processes and broader IT systems that depend on those technologies. Biometrics occasionally throw up false acceptances or false rejections. The problem is that the systems and officials that depend on those biometrics, and the databases of personal information to which they are linked, place too much trust in them and then make ridiculous decisions as a result. The attitude of “there’s a biometric involved so it must be correct” is very dangerous indeed – ask the people who have suffered wrongful arrest, rendition and torture as a result of stupid decisions made on the back of biometric system errors (more on this in a forthcoming blog article).
The paradox is that used correctly, biometrics can offer great privacy benefits. An oft-quoted example is that of using fingerprints to determine school meal entitlement – all children provide the print to obtain their meal, but the system knows that those on meal subsidies should not be charged, and the children are not stigmatised by having to admit to that subsidy. That’s a great example of identity technologies delivering privacy through anonymisation.
The problem is that all too often the organisations implementing biometric systems have failed to be transparent about the purpose or operation of the system, and this has reinforced mistrust of the technologies. School implementations are once again an example, since local authorities have often refused to discuss details of their fingerprinting approaches, or even to seek valid consent to that use of personal information, believing it to be covered by statutory processing permissions.
Biometrics can reveal information about their subject. A photo can reveal age, gender, race, religion (if the subject is wearing religious clothing or jewellery), health and other information. A voice can reveal age, gender, class, region of origin, education. Even fingerprints can give away gender and in some cases ethnicity. These attributes must be protected appropriately in any biometric system.
Privacy issues arising from the use of biometric technologies appear to have coalesced around three key questions:
Are biometric technologies an appropriate and proportionate solution to the problem? Just because we can use a biometric solution, that doesn’t mean it’s right to do so. The Hong Kong Information Commissioner, for example, made it clear that he felt the fingerprinting of schoolchildren was not an acceptable application. Too often we see biometric systems rolled out as a solution looking for a problem.
Are we trying to identify or authenticate? In the vast majority of cases, biometrics can be used to help authenticate an assertion by the individual: eg “I am the legitimate holder of this token”. However, sloppy system design, a desire to use every feature of the technology, or a misconceived wish to future-proof the investment means that organisations instead set out to identify the user: rather than test an assertion, they try to single out an individual from a database. An example is the IRIS immigration system, which picks the individual out of its database of enrolled users by the biometric alone, rather than asking them to present a machine-readable document and then confirming that the holder has the associated biometrics.
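The distinction matters because the two operations have very different failure modes. A toy sketch (not any real biometric algorithm – the feature vectors, threshold and similarity measure here are all invented for illustration) makes the difference concrete: authentication tests one presented sample against one claimed enrolment, while identification searches the whole database.

```python
# Toy illustration of 1:1 authentication vs 1:N identification.
# All values are invented for illustration; real biometric matchers
# use far richer features and carefully calibrated thresholds.
import math

def similarity(a, b):
    """Cosine similarity between two toy biometric templates."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

THRESHOLD = 0.95  # hypothetical match threshold

def authenticate(presented, claimed_template):
    """1:1 - test the individual's assertion against one enrolled template."""
    return similarity(presented, claimed_template) >= THRESHOLD

def identify(presented, database):
    """1:N - search the whole database for the best-matching identity.
    Note the larger attack surface: every enrolled record is consulted."""
    best_id, best_score = None, 0.0
    for user_id, template in database.items():
        score = similarity(presented, template)
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id if best_score >= THRESHOLD else None
```

The sketch also hints at why 1:N systems are riskier: false-match probability grows with the size of the database being searched, whereas a 1:1 check is independent of how many people are enrolled.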
Should we gather biometric templates or biometric images? The most complex and expensive part of a biometric scheme is enrolment of the data subjects. Algorithms and technologies are developing quickly, and to protect the investment it is tempting to capture images (a high-quality scan of the biometric, eg a digital photo or high-quality voice recording) rather than just templates (mathematical products derived from that image, which can be used to confirm a biometric but cannot be used to recover the original image), so that fresh templates can be regenerated as the technology changes. Most organisations therefore go for the image option, believing it will future-proof their investment. But templates have far fewer privacy implications: a stolen image can (in theory) be used to assist in attacks on the user’s identity, whilst a stolen template is of far less use. Moreover, once a biometric image has been stolen and used for fraud, it can’t be revoked – you can’t change your fingerprints!
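The key property of a template is that it is a lossy, one-way summary of the image. A deliberately simplistic sketch (a coarse histogram standing in for a real template algorithm – no actual biometric scheme works this crudely) shows the idea: many different ‘images’ reduce to the same template, so the original can never be reconstructed from what is stored.

```python
# Toy illustration of why a stored template is less sensitive than a
# stored image: the template is a lossy, many-to-one summary, so the
# original image cannot be recovered from it.
def extract_template(image, bins=4):
    """Reduce a toy 'image' (a list of pixel intensities 0-255) to a
    coarse intensity histogram. Deliberately lossy: distinct images
    can map to the same template."""
    hist = [0] * bins
    for pixel in image:
        hist[min(pixel * bins // 256, bins - 1)] += 1
    return hist

# Two different 'captures' of the same subject...
capture_a = [10, 200, 130, 60]
capture_b = [20, 210, 140, 50]
# ...yield the same template, which is all the system needs to store.
```

Because the mapping is many-to-one, a database breach that leaks templates gives an attacker far less to work with than a breach that leaks the underlying images.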
Not surprisingly, the answers to our key questions can be derived quickly, easily and with a minimum of cost. Every biometric application should have a Privacy Impact Assessment (PIA) as part of its business case, completed before any procurement or development commences. The PIA should consider whether biometric technologies are a proportionate and acceptable solution to the problem in hand; whether the application should seek to identify or to authenticate its users; and whether it is really necessary to capture an image at enrolment, or whether a template alone will deliver the necessary functions.
None of this is particularly difficult, and there’s a lot to play for here. Public trust in biometric technologies must be nurtured and protected, and it will only take a single major privacy disaster relating to a biometric system to destroy confidence in all biometrics. Remember, we’ve only got one chance to get this right – because trust won’t come back, and stolen biometrics can’t be replaced.