When applying for a new job, candidates may well find that the use of artificial intelligence (AI) tools is involved at some point in the recruitment process. New recruitment businesses and technology are entering the market, setting up entirely automated initial conversations with candidates to help them find the right vacancy for their skill set, saving time for applicant and recruiter alike.
CV screening is also becoming more prevalent, with AI screening and tracking tools being used to quickly analyse CVs to ascertain whether the individual has the qualifications and experience necessary for the role – for example, burger chain Five Guys is said to be utilising such technology.
Unilever recently hit the headlines when it announced that, instead of human recruiters, it uses an AI system to analyse video interviews. Candidates record interviews on their phone or laptop, and the system scans candidates’ language, tone and facial expressions from the videos, assessing their performance against traits that are considered to indicate job success at Unilever.
But it is not just at the recruitment stage that businesses are using AI and people analytics – performance management is another target area. Amazon is leading the charge: the company has been issued with two US patents for a wristband that tracks the performance of workers in its warehouses, giving staff a small “buzz” if they place a product in or near the wrong inventory location.
It is also alleged that Amazon uses a computer system to automatically generate warnings or terminations for employees when their productivity (or lack thereof) warrants it.
The benefits of such technology for employers are clear and numerous, including cost savings, efficiency, and the purported removal of unconscious human bias and prejudice. However, the use of AI in the workplace has come under scrutiny and poses serious ethical and legal questions, including whether AI itself could in fact be biased.
Another important aspect when implementing AI in the workplace is its relationship with data protection laws such as the EU’s General Data Protection Regulation (GDPR). So, what data protection considerations should an employer make when considering the introduction of AI technology?
Has a Data Protection Impact Assessment (DPIA) been carried out?
The use of AI to process personal data will usually trigger the legal requirement to complete a DPIA.
A DPIA enables the business to analyse how the AI plans will affect individuals’ privacy, and ensures the company can assess the necessity and proportionality of its technology.
As the UK Information Commissioner’s guidance confirms, the deployment of an AI system to process personal data needs to be driven by the proven ability of that system to fulfil a specific and legitimate purpose, not just by the availability of the technology.
The DPIA should demonstrate that the purposes for which the AI is being used could not reasonably be accomplished in another way. In doing so, organisations need to consider and document any detriment to data subjects that could follow from bias or inaccuracy in the algorithms and data sets being used.
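To make the documentation of bias concrete, a DPIA might record the results of a simple selection-rate comparison across candidate groups. The sketch below is purely illustrative: the group labels are hypothetical, and the 0.8 ("four-fifths") threshold is a rule of thumb borrowed from US hiring practice, not a GDPR standard.

```python
# Illustrative bias check an organisation might document in a DPIA:
# compare shortlisting rates across candidate groups and flag any group
# whose rate falls well below the best-performing group's rate.
# Group names and the 0.8 threshold are assumptions, not legal standards.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> list of bools (True = shortlisted)."""
    return {g: sum(res) / len(res) for g, res in outcomes.items()}

def disparity_flags(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's selection rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

outcomes = {
    "group_a": [True, True, False, True],    # 75% shortlisted
    "group_b": [True, False, False, False],  # 25% shortlisted
}
print(disparity_flags(outcomes))
# group_b's rate (0.25) is below 0.8 x 0.75, so it is flagged: True
```

A real assessment would of course go much further, but even a check of this shape gives the DPIA something auditable to point to.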
What is the lawful basis to process the data in this way?
A business cannot simply process personal data because it wishes to do so – data can only be processed where one of the legitimate grounds or conditions of processing has been met. There are various bases, including performance of a contract, compliance with a legal obligation, consent and legitimate business interests. For the processing of sensitive personal data (such as health data), the bases are even more limited.
Before using AI or people analytics in the workplace, employers will first need to consider what data is being processed by such activity, and second, what legal basis can be relied upon to process the data in that way. Without a valid legal basis, the data cannot be processed.
Does the privacy notice adequately inform workers?
One of the key principles of GDPR is transparency, requiring businesses to provide individuals with mandatory information about the processing of their personal data, including the reason why it is being processed, the legal basis, who it will be shared with and how long it will be retained. Employers will need to update their privacy notices to ensure anyone subject to the AI technology is made aware of its use.
The privacy notice needs to be concise and intelligible, using clear and plain language – this will be particularly difficult when including a complex AI system, as businesses will need to provide a meaningful explanation of the technology to meet the transparency principle of GDPR. Opaque or complex descriptions of the tech may result in contention or pushback from the employees and candidates affected.
Remember to consider the specific rules for automated decision-making
GDPR prohibits instances of “computer says no”: it gives data subjects the right not to be subjected to a decision based solely on automated processing which has a legal or similarly significant effect on them. Its aim is to protect individuals against the risk that a potentially damaging decision is taken without human intervention, and it will therefore likely capture a recruitment decision made without any human input.
There are specific exceptions when automated decision-making is permitted, including where explicit consent was given, contractual necessity, or where authorised by law. Where such an exception is being relied upon, such as with the consent of a candidate, the business must still implement further safeguarding measures, including permitting the individual to request human intervention or to contest the decision.
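In system-design terms, the safeguards described above amount to making sure the pipeline can always hand a decision back to a person. The sketch below shows one way that might look, assuming a hypothetical scoring model: clear-cut scores are handled automatically, while borderline or contested cases are routed to a human reviewer. The score band and field names are illustrative assumptions, not a prescribed mechanism.

```python
# Minimal sketch of an automated-decision safeguard: only clear-cut
# screening scores produce a fully automated outcome; borderline or
# contested cases go to a human reviewer. The 0.3-0.7 band and the
# Decision fields are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Decision:
    candidate_id: str
    outcome: str     # "progress", "reject", or "human_review"
    automated: bool  # True only when no human was involved

def screen(candidate_id, score, contested=False, auto_band=(0.3, 0.7)):
    """Return a Decision; scores inside `auto_band`, and any contested
    decision, are always routed to a human reviewer."""
    low, high = auto_band
    if contested or low < score < high:
        return Decision(candidate_id, "human_review", automated=False)
    outcome = "progress" if score >= high else "reject"
    return Decision(candidate_id, outcome, automated=True)

print(screen("c-101", 0.9))   # clear pass: automated "progress"
print(screen("c-102", 0.5))   # borderline: routed to "human_review"
print(screen("c-101", 0.9, contested=True))  # contest forces human review
```

The design point is that the "request human intervention" and "contest the decision" rights are cheapest to honour when they are built into the decision path from the start, rather than bolted on afterwards.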
Employers will need to ensure that their automated technology is being lawfully used, before relying on its output.