Responsible AI in Recruitment: government issues new guidance for employers
The government has published guidance on Responsible AI in Recruitment, designed to help employers reduce the risk of implementing AI recruitment systems that embed bias and produce discriminatory outcomes.
How is AI being used in recruitment processes?
Increasingly, employers are using AI to automate and simplify existing recruitment processes, for example:
- to sift CVs/applications and select candidates;
- in the interview process to analyse speech, body language and facial expressions;
- to automate candidate-facing parts of the process, e.g. via chatbots.
AI systems have the potential to improve the recruitment experience for candidates and save significant time and cost, freeing up staff for other tasks. But there are also risks.
What are some of the employment law risks when using AI in the recruitment process?
Under the Equality Act 2010, job applicants (as well as workers and employees) are protected from discrimination on the grounds of any protected characteristic e.g. disability, race, sex etc. Using AI systems in recruitment can embed bias and discrimination. Bias can creep into AI decision-making in several different ways – in the data used to train an AI tool, in the algorithm itself (the coded instructions which tell the AI tool how to function), or in how the tool's outputs are used in practice. For example, an experimental recruitment tool developed by Amazon was trained on CVs submitted by applicants over a 10-year period, most of which came from men. As a result, the AI system taught itself that male candidates were preferable, and the tool allegedly began to discriminate against women.
AI systems also risk excluding and/or discriminating against applicants who may not be proficient in, or have access to, technology due to age, disability etc.
Employers have a duty to make reasonable adjustments to the recruitment process for applicants with disabilities, to ensure they do not suffer a substantial disadvantage compared with non-disabled applicants. This can be complex where AI systems are involved (particularly where employers and/or candidates do not fully understand how those systems work).
There are also significant data protection risks and considerations, for example around the lawful processing of candidates' personal data and the UK GDPR restrictions on solely automated decision-making that has a significant effect on individuals.
What is the new guidance and what does it tell employers?
The guidance is non-statutory and has been developed with feedback from the CIPD and other organisations. It reflects the government’s five AI regulatory principles (safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress). Whilst these principles are not currently enshrined in law, regulators including the Equality and Human Rights Commission (EHRC) and the Information Commissioner’s Office (ICO) are expected to apply them using their existing powers. Employers and recruiters should therefore take note.
The guidance provides considerations for employers, together with “assurance mechanisms” (e.g. ways of testing the system), for the procurement and deployment of AI recruitment systems.
We’ve picked out some of the key points for employers to note:
- Before procurement, employers should have a clear purpose for the AI system and the outcomes they want it to deliver, and should clarify the desired outputs with suppliers. Employers should understand how the AI system will fit into existing systems and interact with employees. Employees should be consulted to understand what training they might need to use the AI system effectively. Employers will also need to consider any potential legal risks (including discrimination and data protection, as above).
- To reduce risks, employers should carry out an impact assessment and a data protection impact assessment, and create an AI governance framework. More detail on these tools is set out within the guidance itself.
- During procurement, employers should ask suppliers for documentation and evidence about risks (e.g. impact assessments). Employers should work with suppliers to understand the actual functionality of the AI tool and establish further risks. Employers should also carry out several “tests” on the AI system, including a bias audit – checking whether the system’s outcomes differ across groups sharing protected characteristics (a minimal illustration of one such check is sketched after this list). The guidance provides more detail about these tests.
- Before deployment, employers should support employees to use the system correctly, via a pilot and training. Further, employers should assess the performance of the AI tool against equalities outcomes (checking for bias). Importantly, employers should plan for potential reasonable adjustments that might be required to the technology before it is deployed (e.g. ensuring a chatbot works with text-to-speech software so that a candidate with a visual impairment can use it).
- Employers should ensure there is transparency for applicants by clearly signposting that AI is being used. Where possible, the signposting should identify the specific limitations of the system and how they might apply to individual applicants. Critically, without this signposting, applicants might not know that they need to request reasonable adjustments.
- Once the system is live, employers should continually monitor it to identify any potential issues (the guidance provides more detail on how to do this). However, it isn’t realistic for employers to expect performance testing to identify every harm or unintended consequence of the AI tool (including potential discriminatory outcomes).
- Unintended harms may first be identified by candidates using the system. Employers should therefore provide routes for candidates to give feedback (and avoid, for example, a “faceless” AI system with no means of providing feedback). Feedback channels might include chatbots, surveys or a contact email address. Where any harms are reported or identified, employers should remedy them.
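For illustration only: the guidance does not prescribe a particular method for a bias audit, and any real audit should be designed with legal and statistical input. But to make the idea concrete, the short Python sketch below compares shortlisting rates across two invented groups and flags a disparity using the “four-fifths” rule of thumb (a heuristic drawn from US practice, not a UK legal test). All data here is hypothetical.

```python
# Illustrative bias check on invented sift outcomes: compare selection
# rates across groups and flag any group whose rate falls below 80% of
# the highest group's rate (the "four-fifths" rule of thumb).
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs -> rate per group."""
    totals, shortlisted = defaultdict(int), defaultdict(int)
    for group, selected in outcomes:
        totals[group] += 1
        if selected:
            shortlisted[group] += 1
    return {g: shortlisted[g] / totals[g] for g in totals}

def adverse_impact_flags(rates, threshold=0.8):
    """Flag groups whose rate is below threshold x the best group's rate."""
    best = max(rates.values())
    return {g: (rate / best) < threshold for g, rate in rates.items()}

# Hypothetical data: group A shortlisted 40/100, group B 25/100.
outcomes = ([("A", True)] * 40 + [("A", False)] * 60
            + [("B", True)] * 25 + [("B", False)] * 75)

rates = selection_rates(outcomes)
for group, flagged in adverse_impact_flags(rates).items():
    print(f"group {group}: selection rate {rates[group]:.2f}, flagged: {flagged}")
# Group B's rate (0.25) is only 62.5% of group A's (0.40), so B is flagged.
```

A real audit would of course need to look at more than raw selection rates – sample sizes, intersectional effects and the stage of the process all matter – which is one reason the guidance encourages employers to work with suppliers on testing.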
Comment
AI is new and constantly evolving. This guidance is therefore welcome for all employers considering implementing AI systems in recruitment and provides key strategies to help employers identify and reduce risk. We’d always recommend you take legal advice when rolling out new AI recruitment systems, to support your risk assessments and processes.
Critically, the guidance accepts that not all risks will be identifiable via testing, and that some unintended consequences, including discriminatory bias, may first be identified and raised by a user. Employers implementing new systems will need to be mindful of these risks and ensure they have a proper process in place to manage them – for example, a proper complaints process and a commitment to continually improving systems to reduce the risk of further discrimination.
It’s likely we will see more reported cases of unintended discriminatory bias emerge as more AI systems are rolled out – providing lessons to be learnt as the technology evolves.
Finally, as the guidance states, it’s going to be critical for employers to be transparent and to explain to candidates where AI is being used and how. In particular, this will ensure candidates have all the information they need to request reasonable adjustments, and should help limit claims in this area.
Our newsletters
We publish monthly employment newsletters. If you'd like to be added to the mailing list, please let us know.
Our fixed price employment law service
We also offer a fixed price employment law service. Please contact Gordon Rodham if you'd like to find out how we can help you, whether through our fixed-fee annual retainer or our flexible, discounted bank of hours service.
More information on all our services can be found here.