Digital identity platform Yoti has announced that its biometric age estimation technology can now deliver accurate, real-time age assurance for under-13s, which it claims will help social networks and other businesses protect children from harm online in a privacy-preserving way.
The technology uses a form of artificial intelligence (AI) known as machine learning, which in this case means exposing the age estimation algorithm to millions of images of people’s faces, tagged with their month and year of birth, so that it can eventually figure out how old someone is.
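In broad terms, training on age-labelled faces is a supervised regression problem: the model learns a mapping from image pixels to a numeric age. The sketch below is purely illustrative (it is not Yoti's model) and uses synthetic arrays in place of real face images, with a fabricated age-correlated signal so there is something to learn; a least-squares fit stands in for the neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a labelled face dataset: each "image" is a
# flattened 8x8 pixel array, and a few pixels are made to vary with
# age so there is a learnable signal (a fabricated assumption).
n, d = 500, 64
ages = rng.uniform(6, 25, size=n)
images = rng.normal(size=(n, d))
images[:, :4] += ages[:, None] * 0.5  # age-correlated "texture"

# Supervised regression: fit weights mapping pixels -> age.
# (Least squares with a bias term stands in for a trained network.)
X = np.hstack([images, np.ones((n, 1))])
w, *_ = np.linalg.lstsq(X, ages, rcond=None)

pred = X @ w
mae = np.abs(pred - ages).mean()  # average error of the estimates, in years
print(round(mae, 2))
```

The headline accuracy figures quoted by vendors in this space are exactly this kind of average error, measured on held-out faces rather than training data.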
Using this method, Yoti claims it is now able to estimate people’s age to within 1.5 years for those in the 13 to 25 age range, and within 1.3 years for those aged 6 to 12.
Unlike facial-recognition systems, which establish a person’s identity by comparing a real-time scan of their face with a pre-existing photo, Yoti’s facial analysis system does not store any biometric information, either locally or in the cloud, and immediately deletes the scan once a person’s age has been verified.
Yoti’s announcement of its new capability comes as countries around the world are figuring out what standards internet companies and online services should follow to protect their younger users.
In the UK, for example, the Information Commissioner’s Office has developed a statutory Age Appropriate Design Code laying out the privacy standards companies are expected to follow when processing the data of children under 18.
At the end of June 2021, US senators wrote an open letter to a number of tech executives, including the likes of ex-Amazon chief Jeff Bezos and Meta CEO Mark Zuckerberg, urging them “to extend to American children and teens any privacy enhancements that you implement to comply” with the UK’s design code.
Similar standards are also being developed in the Netherlands and Ireland, while Australia’s Online Privacy Bill will require internet companies to take all reasonable steps to verify the age of their users, as well as to obtain parental consent for the processing of data about any child under 16.
Speaking to Computer Weekly, Yoti’s director of regulatory and policy, Julie Dawson, said the technology could be used in a range of commercial applications, from age-appropriate content moderation and targeted advertising, to self-checkouts at supermarkets and the implementation of age gates on websites.
According to an October 2020 whitepaper from Yoti, its technology at that time had the highest “mean absolute error” when estimating the ages of dark-skinned women aged between 50 and 60, with estimates off by an average of 5.6 years from their actual age.
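Mean absolute error is simply the average size of the gap between estimated and true ages, regardless of whether the estimate is too high or too low. A minimal illustration, using made-up ages rather than any figures from the whitepaper:

```python
# Mean absolute error: average size of the gap between true and
# estimated values, ignoring the direction of the error.
def mean_absolute_error(true_ages, estimated_ages):
    return sum(abs(t - e) for t, e in zip(true_ages, estimated_ages)) / len(true_ages)

true_ages = [52, 55, 58, 60]   # hypothetical actual ages
estimates = [50, 61, 55, 63]   # hypothetical model outputs
print(mean_absolute_error(true_ages, estimates))  # gaps 2, 6, 3, 3 -> 3.5
```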
While similar results are shown in Yoti’s most recent whitepaper, Dawson said age estimation use cases are almost non-existent for those above the age of 30, with the primary use case for over-18s being access to age-restricted products such as alcohol or pornography.
However, while she said there is not much material damage caused by mistakenly identifying a 50-year-old as 55, as there is no impact on their ability to access a product or service, the stakes are higher for younger adults and teens on the borderline of certain age restrictions.
“For us it was key with this younger age group to make it really balanced from the get go,” she said, adding that the algorithm was shown an equal number of faces from each skin tone and gender in the youngest demographic in an effort to reduce its bias. “We’ve got a more even balance in this 6 to 12 range, because we’ve built the dataset proactively by speaking and working directly with parents and families, whereas in 13 and above the accuracy is impacted by the other data sets being opted into by users through the Yoti app.
“If going forward platforms try to do that from the get go, this discrepancy that’s happened in AI in lots of places maybe wouldn’t have happened,” said Dawson. “We’ve only got one example of this but it’s a really interesting result.”
In its whitepaper, Yoti argued that its technology does not process any biometric data because it does not allow for the unique identification of a person and instead merely returns an age estimation based on the algorithm’s analysis of the face. “We’ve built it so there’s no retention of the image,” said Dawson. “Take Yubo as a social media platform – they send us an image, they ping it on a software-as-a-service basis to our servers, and we give back our estimated age and a confidence value.
“Obviously, Yubo in that instance already has the image, but Yoti doesn’t learn anything each time it does one of these age estimates. We’ve done 550 million of these estimations now, but each time it’s basically giving a fresh estimation. We’ve fed the algorithm the ground truth in the past so that when it sees the new face it can do an estimate… there is no personal recognition or authentication.”
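The flow Dawson describes is a stateless request-response pattern: the platform sends an image, the service returns an estimate and a confidence value, and nothing persists between calls. The sketch below uses hypothetical names (this is not Yoti's actual API) and a toy stand-in for the model, just to show the shape of such a stateless call.

```python
# Illustrative sketch of the stateless flow described in the article
# (hypothetical names; not Yoti's actual API): the caller sends an
# image, gets back an age estimate and confidence, and the service
# retains nothing between calls.

def estimate_age(image_bytes: bytes) -> dict:
    """Stateless stand-in for the age-estimation service: the answer is
    derived from the input alone, and no data is stored between calls."""
    # A real service would run its trained model here; this toy version
    # just hashes the bytes into a plausible-looking result.
    digest = sum(image_bytes) % 20
    return {"estimated_age": 6 + digest, "confidence": 0.9}

# The caller (e.g. a social platform) keeps its own copy of the image;
# the service sees it only for the duration of the call.
result = estimate_age(b"raw camera frame")
print(result["estimated_age"], result["confidence"])
```

Because the function holds no state, repeating a call with the same input yields the same output, which is the property Dawson points to when saying the system “doesn’t learn anything each time”.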
In October 2021, nine schools in North Ayrshire started using facial recognition to take payments for food in their canteens, but paused their use of the technology days later following backlash from privacy campaigners.
According to privacy expert Stephanie Hare, for example, using facial recognition to make payments is a “disproportionate” use of the technology, and its use in schools is “normalising children understanding their bodies as something they use to transact”. “That’s how you condition an entire society to use facial recognition,” she said.
In response to whether Yoti’s technology could contribute to the normalisation of surveillance for children, Dawson said education was key, adding that resources need to be developed so children are able to better understand the difference between recognition and detection: “We have to be really clear that this technology isn’t able to surveil because it doesn’t recognise or remember.”
On the age estimation technology’s potential for abuse, and whether there are any use cases that Yoti would refuse to participate in, she said the company has an internal and an external ethics group, both of which would be given oversight of any new use cases before deployment takes place.
Yoti’s age estimation tech was approved by the German Commission for the Protection of Minors in the Media on 4 November.