Navigating AI in Recruitment

31st January 2025

Kevin McKenna, Partner

The Information Commissioner’s Office (ICO) recently conducted a series of audits on AI-powered recruitment tools, uncovering both promising practices and significant areas for improvement. These tools—employed for sourcing, screening, and selection—offer efficiency but also introduce risks, particularly regarding privacy and data protection. The findings highlight the need for HR professionals to carefully assess the compliance and ethical use of AI in recruitment.

Benefits and Risks of AI in Recruitment

AI tools streamline recruitment by efficiently processing large volumes of applications and providing consistent evaluations. However, they can also perpetuate bias, collect excessive personal data, and operate in ways that may breach data protection law. For example, some tools inferred candidates’ characteristics, such as gender or ethnicity, from their names—often without their knowledge or consent—raising concerns about fairness and accuracy.

Key Findings and Challenges

The ICO identified instances of non-compliance where AI providers misclassified their roles under data protection law, leading to vague contracts that obscured responsibilities. Tools often collected more personal data than necessary, repurposing it for unintended uses. Additionally, recruiters and candidates were frequently unaware of how their data was processed.

Encouraging Practices

Despite these challenges, several providers demonstrated responsible practices. Some offered tailored AI models that minimized data collection, while others prioritized transparency by sharing detailed information about their AI systems.

ICO Recommendations for Compliance

To ensure compliance and build public trust, the ICO outlined seven critical recommendations:

Fairness: Regularly monitor AI for bias and accuracy issues, ensuring any special category data used is accurate and lawfully processed.

Transparency: Clearly communicate how candidates’ data is processed, including the logic behind AI predictions. Both recruiters and AI providers must ensure transparency.

Data Minimization: Limit data collection to the minimum required for AI functionality and ensure it is used solely for its intended purpose.

Data Protection Impact Assessments (DPIAs): Conduct DPIAs early in AI development and update them as needed to mitigate privacy risks.

Defined Roles: Clearly delineate whether AI providers act as controllers or processors in contracts and operations.

Explicit Instructions: Recruiters must provide detailed instructions to AI providers to ensure data is processed as intended.

Lawful Basis: Establish a clear lawful basis for data processing, particularly for special category data, and document this appropriately.

Conclusion

By adhering to these recommendations, employers can leverage AI responsibly, enhancing recruitment processes while safeguarding privacy. High data protection standards not only ensure compliance but also foster trust and innovation in AI-driven recruitment.

Get in touch with our Employment Team on 0161 832 3434.
