Newswire

For Further Information Contact:

ukscotland@transatlanticlaw.com

UK Update: Increasingly, employers are using or looking at using artificial intelligence (AI) in an employment context.

AI can help to make recruitment processes more efficient, to better identify the strongest candidates applying, and to eliminate, or at least minimise, the potential for human bias in the recruitment process.

AI tools can be used to perform sifts of CVs and application forms, search prospective employees’ social media for key phrases or terms, schedule appointments and interviews, analyse tone of voice or facial movement during interviews and perform automatic filtering of candidates through online assessments and tests. But its use is not without risks…

A recent report from the TUC, published jointly with the AI Consultancy in May 2021, reignited the debate over how AI can be used in a way which minimises the legal risks, and what legal reforms may be required to support this.

We’ve summarised ten main points for employers to be aware of when using AI:

1. AI can be time and cost effective

The main benefit of using AI is that it can make the recruitment process and other HR functions much more time-efficient, which can in turn prove cost-effective. Screening CVs and applications for a role with thousands of applicants might take a person weeks, if not months. AI tools can assist with these time-consuming, and more mundane, tasks.

2. AI has the potential to remove human bias

AI can potentially remove elements of human bias from the process, by helping to standardise aspects of the recruitment process and other HR functions. It removes individual discretion from the decision-making process. For example, having an AI tool sift applications rather than a manager reduces the risk of that manager harbouring, say, racist or sexist views and of those views affecting their recruitment decisions.

AI can also conduct sentiment analysis on, for example, job ads or descriptions, to ensure the language has no hidden bias. However, AI cannot eliminate bias completely (see below).

3. But AI also has the potential to perpetuate discrimination…

Whilst on the face of it, the use of AI per se is unlikely to directly discriminate on the grounds of a protected characteristic, the main concern is the potential for AI to be classed as a “provision, criterion or practice” (PCP) within the meaning of the indirect discrimination provisions of the Equality Act 2010.

Put simply, indirect discrimination occurs where an employer applies a PCP to everyone, but the PCP more adversely affects people with a particular protected characteristic. The concern with AI is that any algorithm(s) upon which it is based could be deemed a PCP for these purposes. A straightforward, if hopefully unlikely, example would be a computer program which sifts CVs to identify only candidates who are more than 5 foot 8 inches tall – that could be said to be a PCP which indirectly discriminates against women.
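To make the mechanism concrete, here is a minimal, purely hypothetical sketch of the kind of sifting rule described above (the candidates and threshold are invented for illustration). The rule is applied neutrally to every applicant, yet in practice it screens out far more women than men – the hallmark of an indirectly discriminatory PCP:

```python
# Hypothetical illustration only: a facially neutral CV-sifting rule
# (a minimum height) applied to every candidate alike.
candidates = [
    {"name": "A", "sex": "F", "height_cm": 165},
    {"name": "B", "sex": "M", "height_cm": 180},
    {"name": "C", "sex": "F", "height_cm": 160},
    {"name": "D", "sex": "M", "height_cm": 175},
]

MIN_HEIGHT_CM = 173  # roughly 5 foot 8 inches - the PCP applied to everyone

# The filter never mentions sex, yet only the male candidates survive it.
shortlist = [c for c in candidates if c["height_cm"] >= MIN_HEIGHT_CM]
```

The point is that nothing in the code refers to a protected characteristic; the disadvantage arises from how the neutral criterion correlates with one.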

4. Biased data makes for a biased algorithm

AI tools are only as good as the data they are fed: if the data set is biased, the algorithm will likely be biased too. A high-profile example was Amazon’s attempt to build a CV-screening algorithm. Trained on Amazon’s recruitment data from the previous decade, the algorithm taught itself that male candidates were preferable to female candidates, because Amazon’s previous recruitment decisions had themselves been subject to bias.
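The way a biased history becomes a biased rule can be sketched very simply. The following toy example (with invented figures, and a deliberately crude “model” that just learns historical hire rates per group) shows how skewed past decisions are reproduced as skewed scores:

```python
# A minimal sketch, using hypothetical data, of how a screening tool can
# inherit bias from its training set. The "model" simply learns the
# historical hire rate for each group.
past_decisions = [
    ("M", 1), ("M", 1), ("M", 1), ("M", 0),   # men: hired 3 of 4
    ("F", 1), ("F", 0), ("F", 0), ("F", 0),   # women: hired 1 of 4
]

def learn_hire_rates(history):
    totals, hires = {}, {}
    for group, hired in history:
        totals[group] = totals.get(group, 0) + 1
        hires[group] = hires.get(group, 0) + hired
    return {g: hires[g] / totals[g] for g in totals}

scores = learn_hire_rates(past_decisions)
# scores == {"M": 0.75, "F": 0.25}: the biased history becomes a biased rule.
```

Real systems are far more complex, but the underlying dynamic is the same: the model optimises for agreement with past decisions, flaws included.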

5. The lack of human touch

AI lacks any kind of common sense, compassion or empathy. This can lead to irrational results, such as refusing a holiday request made for pressing personal reasons, or to indirectly discriminatory outcomes.

By way of example, in Filcams Cgil Bologna and others v Deliveroo Italia S.R.L., the Bologna Court decided that an app used by the Italian arm of Deliveroo was indirectly discriminatory. The system treated all riders’ data inputs equally in relation to their willingness to work generally, and at the busiest times. This might seem sensible, but it failed to take into account any good reasons for late cancellations or inability to work, such as childcare or illness, and this disproportionately affected women, who tend to bear caring responsibilities.
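The flaw identified there can be sketched in a few lines. This is not the actual Deliveroo algorithm – it is a hypothetical scoring function, with invented numbers, illustrating the kind of logic the court criticised: every late cancellation costs the same, because the reason is never captured:

```python
# Hypothetical sketch of a reliability ranking that treats all late
# cancellations identically, with no field for the reason behind them.
def reliability_score(sessions_booked, late_cancellations):
    # The reason for a cancellation is never recorded, so a cancellation
    # for childcare or illness costs exactly as much as a no-show.
    return (sessions_booked - late_cancellations) / sessions_booked

carer = reliability_score(sessions_booked=20, late_cancellations=4)    # childcare
no_show = reliability_score(sessions_booked=20, late_cancellations=4)  # no reason
# Both score 0.8: the system cannot distinguish a good reason from none.
```

The design choice that matters is the missing input: because the data model has no notion of a justified absence, no downstream logic can treat one fairly.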

6. ‘Black Box’ Problem

What actually is AI, and how does it work? The problem is that most employees probably won’t know, so there is a lack of transparency around how decisions are made – sometimes called the ‘black box’ problem.

7. How does your workforce feel about use of AI?

Another TUC report on AI highlighted that many employees feel uneasy about the use of AI by their employers to make employment-related decisions, with only 28 per cent feeling comfortable with technology being used to make decisions about them at work. As an employer, how will you manage this apprehension?

8. GDPR

Under the UK GDPR, there is currently no obligation to provide meaningful information about data processing by AI where the processing is necessary for the performance of the employment contract or involves a human in the decision-making. This exemption has been widely criticised.

9. The EU’s proposed regulations

The EU has published proposed regulations setting out harmonised rules on the safe use and development of AI – the first jurisdiction to do so globally. The proposals class AI systems used in employment as ‘high risk’ and therefore subject to particular safeguards.

As the UK has left the EU, the regulations will not be binding here, but any UK companies using AI in the EU will be subject to the regulations in force there.

10. UK reform?

Whilst the EU is legislating in Europe, it is likely the UK will see reform in this area in the near future too. The recent TUC report is just one paper, in a line of others, calling for legislative reform and increased regulation – and it is unlikely to be the last. This is an area to keep an eye on.

What does this mean for employers?

Employers need to know what AI tools they are using, and be aware of any unintended consequences of their use. The extent to which employers bear legal responsibility for discriminatory acts arising from AI systems remains to be fully tested, but employers should be alive to such risks.

Consideration should therefore be given as to whether such tools are actually necessary, and any impact they might be having on the fairness of decision-making. Employees and workers should also be kept informed of how decisions are made about them.

If you’d like to know more about using AI in your recruitment process and your responsibilities as an employer, please do get in touch and we’d be delighted to talk you through things in more detail.

 

By Morag Moffett, Burness Paull LLP, Scotland, a Transatlantic Law International Affiliated Firm.  

For further information or for any assistance please contact ukscotland@transatlanticlaw.com

 

Disclaimer: Transatlantic Law International Limited is a UK registered limited liability company providing international business and legal solutions through its own resources and the expertise of over 105 affiliated independent law firms in over 95 countries worldwide. This article is for background information only and provided in the context of the applicable law when published and does not constitute legal advice and cannot be relied on as such for any matter. Legal advice may be provided subject to the retention of Transatlantic Law International Limited’s services and its governing terms and conditions of service. Transatlantic Law International Limited, based at 42 Brook Street, London W1K 5DB, United Kingdom, is registered with Companies House, Reg Nr. 361484, with its registered address at 83 Cambridge Street, London SW1V 4PS, United Kingdom.