Will artificial intelligence be a recruiter’s new best friend?

AI and machine learning are being introduced into corporate recruitment to save time and money, but employers need to understand the legal risks

Many organisations are focused on the “war for talent”: skills shortages across numerous occupations mean they face hyper-competition to secure the people they need to sustain and grow their activities.

Simultaneously, these organisations face near-constant change as they seek to stay competitive and relevant. There is growing emphasis on having employees who embrace change, withstand the pressures of the modern work environment and create a positive climate for others.

Intrinsically linked is the recognition of the benefits a diverse workforce can bring – from increased productivity to improved reputation and the ability to compete better in global markets. In this context, workforce planning – knowing which skills are needed and how they will be engaged – and recruitment become critical activities.

Artificial intelligence (AI) could offer the perfect solution. AI recruitment tools not only claim to deliver “the best candidate every time you hire”, but also promise efficiency, speed, consistency and greater cost-effectiveness. On the basis of that sales pitch, it is hard to see why employers would turn them down.

However, AI recruitment tools are not the cure-all that their creators make them out to be. There are risks involved that employers should be cautious about, the first being bias.

Unconscious bias

As a society we acknowledge that humans inherently hold unconscious biases. These are often inhibiting, especially for organisations keen to recruit a diverse range of individuals.

However, it would be wrong to assume that AI recruiters will be free of these human prejudices. If not programmed carefully, AI that relies on machine learning and algorithms can simply entrench the existing human bias found in the data used to develop it.

If erroneous assumptions are made during the machine learning process, the algorithms in AI recruitment models will produce systematically prejudiced results rather than eliminating human bias.
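
To make that mechanism concrete, here is a deliberately simplified sketch – in Python, with entirely invented data, not any vendor’s real system – of how a naive CV scorer trained on skewed historical hiring decisions ends up reproducing that skew:

```python
from collections import Counter

# Invented historical data: (keywords on the CV, was the candidate hired?).
# Hires skew towards male-correlated hobbies, echoing the Amazon example below.
history = [
    ({"python", "rugby"}, True),
    ({"java", "golf"}, True),
    ({"python", "golf"}, True),
    ({"python", "netball"}, False),  # equally skilled, historically not hired
    ({"java", "netball"}, False),
]

hired_words, rejected_words = Counter(), Counter()
for words, hired in history:
    (hired_words if hired else rejected_words).update(words)

def score(cv_words):
    """Score a CV by how often its words appeared on hired vs rejected CVs."""
    return sum(hired_words[w] - rejected_words[w] for w in cv_words)

# Two candidates identical except for a gender-correlated hobby:
print(score({"python", "golf"}))     # 4 - the "male-pattern" CV scores higher
print(score({"python", "netball"}))  # 0 - same skills, penalised by history
```

Nothing in the scorer mentions gender, yet the historical pattern flows straight through the proxy words into the results.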

By way of example, Amazon recently had to scrap a machine learning AI recruitment model which delivered sexist results by favouring male candidates. The model had been trained on a male-dominated collection of CVs submitted to Amazon over a 10-year period.

The bias dilemma is further complicated by the fact that most employers, unlike Amazon, would not develop their own AI recruitment devices. Instead, biased policies and data would be embedded into their algorithmic devices by third-party developers or contractors.

Aside from the reputational damage that employers may suffer, AI bias can lead to legal issues such as unlawful discrimination claims. It is unlawful for an employer directly or indirectly to discriminate against, harass or victimise a job applicant on the basis of any protected characteristic they may hold.

The protected characteristics are set out in the Equality Act 2010: age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex and sexual orientation.

If bias within the algorithms controlling the machine leads to the systemic rejection of, for example, black applicants, employers may find themselves facing racial discrimination claims.
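
One practical first step is simply to measure outcomes. The sketch below – with invented group labels and figures, and emphatically not a legal test – shows the kind of basic selection-rate audit an employer could run on an AI screener’s output to spot systemic disparities before they become claims:

```python
from collections import defaultdict

# Invented outcomes: (applicant's self-reported group, did the AI shortlist them?).
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

totals = defaultdict(int)
shortlisted = defaultdict(int)
for group, passed in outcomes:
    totals[group] += 1
    shortlisted[group] += int(passed)

# Selection rate per group; a large gap is a red flag worth human review.
# UK law sets no single numeric threshold, so treat this as a screen,
# not a determination of indirect discrimination.
for group in totals:
    print(group, round(shortlisted[group] / totals[group], 2))
```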

The question then turns to liability. Would liability for discrimination lie with the AI developer who input the original data, or with the employer that subsequently programmed the AI to meet their specific recruitment needs? This is an area in which the law desperately needs to catch up.

AI disputes

We are already seeing an increase in AI disputes reaching the courts, but these tend to be personal injury and negligence claims, where the principles of the law of tort are relied on to establish liability. AI disputes in areas such as employment law are expected to increase over the next decade.

AI recruitment processes can lead to discrimination claims in other ways too. A recent case that reached the Employment Appeal Tribunal concerned a respondent organisation that had used a simple form of recruitment AI to ask job applicants multiple-choice questions.

The claimant, who had Asperger syndrome and was unsuccessful in her job application, argued that she had been indirectly discriminated against. She claimed that her disability made it difficult to provide answers in the format the AI required, and that the organisation should have made reasonable adjustments to the AI process so as not to place her at a disadvantage compared with other applicants.

The tribunal ruled in the claimant’s favour, sending a strong message that flexibility in the use of AI, and the ability to tailor AI recruitment devices to the specific needs of individuals, are necessary to avoid legal issues.

Finally, many AI recruiters rely on storing huge amounts of personal data to operate optimally. This is at odds with ever-more restrictive data protection laws and could lead to claims against developers or employers for data protection breaches.

Simpler forms of recruitment AI that filter candidates’ CVs, for example, may not attract legal claims, provided there is a legally valid purpose for storing the data and the data is kept for no longer than necessary.
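
In practice, that retention principle can be enforced mechanically. The sketch below assumes a simple in-memory record store and an illustrative 180-day window; the lawful retention period in any real case depends on the purpose and legal basis for holding the data:

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=180)  # illustrative window, not legal advice

# Invented candidate records with the date each application was received.
candidates = [
    {"id": 1, "cv": "...", "received": datetime(2024, 1, 10, tzinfo=timezone.utc)},
    {"id": 2, "cv": "...", "received": datetime(2025, 6, 1, tzinfo=timezone.utc)},
]

def purge_stale(records, now=None):
    """Keep only records still inside the retention window; the rest
    should be deleted (and, in a real system, securely erased)."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["received"] <= RETENTION]

candidates = purge_stale(candidates)
print([r["id"] for r in candidates])  # stale applications have been dropped
```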

However, more complex AI recruiters, such as those that scan the internet for all updated information about candidates, may well fall foul of the EU’s General Data Protection Regulation (GDPR).

Risk management

So how can those who want to benefit from the advantages of AI recruitment minimise the accompanying risks? Risk management must start with the initial data being fed into the device.

Swedish AI and social robotics company Furhat is developing an interviewing robot, which Swedish recruitment company TNG is testing. The lifelike robot, called Tengai Unbiased, interviews candidates and decides whether a candidate should move forward to the next stage of recruitment.

Furhat and TNG agree that the best way to minimise risks such as machine learning bias and potential discrimination claims is to take care over the data fed in. They are ensuring that Tengai Unbiased learns from a number of different recruiters, so that it does not pick up the habits of any single recruiter.

Furhat Robotics and TNG are also carrying out a significant number of test interviews with a diverse range of volunteers. Together, these practices should reduce the risk of latent bias appearing further down the line.
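
A simple way to check for the over-representation such practices guard against is to count how much of the training data each recruiter contributed. This sketch uses invented figures and is not Furhat’s or TNG’s actual process:

```python
from collections import Counter

# Invented training log: which recruiter supplied each labelled interview.
training_examples = ["recruiter_a"] * 80 + ["recruiter_b"] * 12 + ["recruiter_c"] * 8

counts = Counter(training_examples)
total = sum(counts.values())

# Flag any recruiter whose interviews dominate the training set, since the
# model would then learn that one person's habits rather than good practice.
for recruiter, n in counts.items():
    share = n / total
    flag = "  <-- dominates the training set" if share > 0.5 else ""
    print(f"{recruiter}: {share:.0%}{flag}")
```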

Effective risk management will be difficult for developers and employers to achieve on their own unless the law delivers certainty around the development and use of AI recruitment devices.

Although the advent of the GDPR may have brought some clarity around how to handle big data, there is little regulatory oversight of algorithms to ensure they help curb bias and discrimination.

For example, future regulation could require AI recruitment devices to be trained on data that safeguards the Equality Act’s nine protected characteristics, helping to bolster the remit of employment law within the AI sphere.

Human input

The importance of having a human in the loop must not be forgotten. Although AI is proving more effective and efficient than humans in carrying out certain tasks, human input in an AI recruitment process is vital.

Humans must constantly monitor the devices and their output, and use their own judgment to discern whether the AI recruiter is assisting as intended. Without human intervention, uncritical reliance on AI recruitment could lead to wrong decisions that threaten ethics and undermine public trust in the developers of AI solutions and the technology itself.
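
What that intervention might look like in code: a hedged sketch of a human-in-the-loop gate, where the hypothetical ai_score() function stands in for whatever model an employer actually uses, and borderline decisions are referred to a person rather than automated end to end:

```python
# ai_score() is a hypothetical placeholder, and the thresholds are invented
# for illustration; a real deployment would tune both with care.
def ai_score(candidate: dict) -> float:
    """Placeholder: a real model would return a shortlisting score in [0, 1]."""
    return 0.55  # invented value

def decide(candidate: dict, lower: float = 0.4, upper: float = 0.7) -> str:
    score = ai_score(candidate)
    if lower <= score <= upper:
        # The model is not confident either way: refer to a human reviewer
        # rather than letting the machine decide alone.
        return "refer_to_human_reviewer"
    return "shortlist" if score > upper else "reject_pending_human_signoff"

print(decide({"name": "example applicant"}))  # -> refer_to_human_reviewer
```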

Despite the risks, the emergence of AI recruiters should be a welcome development. After all, they can easily outperform humans on a number of levels, and with the right framework and awareness in place, AI recruiters have the potential to eliminate the bias that is prevalent in human recruiting.

The necessary framework and awareness will include a combination of diverse data, clear regulation and human safeguards. This framework would enable the recruitment industry, which is frequently considered to be behind the times when it comes to innovation, to safely launch itself into the 21st century.

Amanda Glover is a trainee solicitor in the technology sector at law firm Coffin Mew.
