The Information Commissioner’s Office (ICO) has published the outcomes of a number of consensual audits it conducted on the use of AI tools in recruitment in its audit outcomes report (Report). The ICO concluded that there were “considerable areas for improvement” in how personal data is being used by AI sourcing, screening and selection tools. While the ICO recognised that AI can bring innovation to the recruitment process and noted many encouraging practices, it also identified a number of concerns. As a result, the ICO put forward almost 300 recommendations to the organisations involved in its audits, all of which were accepted.
The Report summarises the issues the ICO uncovered and examples of good practice, and sets out its seven key recommendations. The ICO has also repackaged some of these issues and recommendations into a set of key questions organisations should ask when procuring AI recruitment tools. Together, these are a helpful resource for recruiters considering using AI in their recruitment processes, as well as for developers of such products. However, given that a number of the concerns identified will arise with other AI use cases, the recommendations and questions will also have wider application.
ICO’s concerns
Accuracy and bias: The ICO found some instances where providers of AI tools did not take active steps to monitor their tools’ accuracy and bias. In other instances, developers inferred a candidate’s race, age and gender from other information in their application in order to monitor for bias. However, the ICO concluded that such inferred information is not sufficiently accurate to monitor for bias effectively.
Fairness: The search functionality of some tools enabled recruiters to filter out candidates with certain protected characteristics.
Data minimisation: The ICO found some tools were collecting more personal information than was necessary. For instance, personal data in some instances was being scraped from social media and job networking sites to build databases of potential candidates.
Controller or processor: Several AI providers had misinterpreted their role, viewing themselves as processors rather than controllers, and so were not fully complying with their obligations under the UK General Data Protection Regulation (GDPR). Other developers had attempted to pass all responsibility for GDPR compliance to the recruiter.
Recommendations
The ICO put forward almost 300 recommendations to the organisations involved in its audits, aimed at improving compliance with the GDPR and promoting the good practices already set out in existing ICO guidance. The Report collates these into seven key recommendations, which are relevant to all organisations when designing and using AI. These are:
Fairness: Potential and actual fairness in the AI and its outputs must be monitored, with active steps taken to counteract issues. As reflected in the ICO’s concerns above, any special category data used to monitor bias must be adequate and accurate enough for this purpose; in the ICO’s view, inferred or estimated data is not sufficient.
Transparency and explainability: Candidates must be informed of how their personal data will be processed by AI. This can be by the AI provider or the recruiter, which should be clear from the contract. In addition to telling candidates what personal data will be processed by AI, the privacy notice should explain the logic involved in making predictions or producing outputs as well as how the data is used for training, testing or otherwise developing AI.
Data minimisation and purpose limitation: AI providers should assess the minimum amount of personal data required to develop, train and test the AI, and ensure that this usage is compatible with the purpose for which it was originally collected. Likewise, recruiters must ensure they collect the minimum personal data necessary to use the AI and not reuse it for an incompatible purpose.
Data Protection Impact Assessments (DPIAs): AI providers and recruiters must complete a DPIA early in the AI’s development, i.e. before high-risk processing occurs, and keep it updated. The DPIA must include a comprehensive assessment of privacy risks, appropriate ways to mitigate those risks and an analysis of the trade-off between people’s privacy risks and competing interests. The ICO notes that even if the AI developer is a processor, it should consider completing a DPIA.
Role as a processor: The recruiter and the AI provider must record who is the controller and who is the processor of personal information and ensure this is reflected in privacy notices. The ICO gives an example that an AI provider will be a controller if it uses the personal data it processes for the recruiter to develop a central AI model to deploy to other customers.
Explicit processing instructions: Where the AI provider is the processor, recruiters must provide it with explicit and comprehensive instructions to follow when processing personal data. The ICO notes that this includes the recruiter deciding on the data fields required, the output required and the minimum safeguards needed to protect the personal data. Recruiters must also check periodically that their instructions are being followed.
Clear basis for processing: A lawful basis for processing the personal data must be established by the controller at the outset together with an additional ground where special category data is being processed. This basis should be documented (such as in a legitimate interests assessment) and described in the privacy notice.
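On the fairness recommendation above, the Report does not prescribe any particular metric for monitoring AI outputs. As a purely illustrative sketch (not drawn from the ICO’s guidance), output monitoring of the kind described might compare selection rates across candidate groups, for example using the “four-fifths” disparate-impact heuristic, where a ratio below 0.8 is commonly treated as a flag for further review rather than a legal test:

```python
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs, e.g. ("A", True).
    Returns the selection rate for each group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.
    Values below 0.8 are often flagged for review (the
    'four-fifths rule' -- a monitoring heuristic only)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes for two candidate groups
outcomes = ([("A", True)] * 40 + [("A", False)] * 60
            + [("B", True)] * 20 + [("B", False)] * 80)
rates = selection_rates(outcomes)
print(rates)                          # {'A': 0.4, 'B': 0.2}
print(disparate_impact_ratio(rates)) # 0.5 -> below 0.8, worth reviewing
```

The group labels and threshold here are assumptions for illustration; in practice the choice of metric, groups and threshold would itself need to be justified and documented, and, per the ICO’s concern above, the group data used must be accurate rather than inferred.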
Outlook
Many organisations looking to use AI in their recruitment wish to deploy it across the whole of their group, and therefore across jurisdictions. As we have seen, many different approaches are being taken to regulating AI, and so both AI and privacy laws will need to be considered before deployment. For instance, in the case of the EU, the AI Act categorises use cases in recruitment as high risk, and therefore imposes more stringent obligations than for some other use cases (see our briefing for more information on the AI Act).
Overall, it is clear that AI has the potential to streamline recruitment processes, and could help reduce unconscious bias in application screening. However, it is equally clear that there are challenges to overcome to ensure that the AI complies both with the law and with candidates’ reasonable expectations.
Many thanks to Callum Morley for his assistance in preparing this post.