Just Hiring, a New Toolkit to Counter Discrimination in AI-Assisted Recruitment

New summary available: FINDHR Toolkit for Policy Makers
Full toolkit: “Just Hiring! A Toolkit for Policymakers”

Finding the “best” candidate to hire is an arduous job, and the use of algorithmic tools to streamline the decision is steadily on the rise. However, while useful, these tools are not neutral. They are vulnerable to creating and perpetuating systemic discrimination, often operating as a “black box” that offers little insight into its own biases.

This is a critical issue for gender, race, and economic justice.

For this reason, WIDE+ has been a proud partner in the Horizon Europe project “Fairness and Intersectional Non-Discrimination in Human Recommendation (FINDHR)”.

After years of extensive research, the FINDHR project has recently launched its Toolkit for Policy Makers, providing concrete recommendations to tackle algorithmic discrimination at a governance level.

Discrimination in AI-Based Hiring Is a Feminist Issue

Discrimination in hiring is a lose-lose scenario. Companies miss out on the best talent, and society suffers from grave injustice. While designed to reduce costs, AI-assisted hiring tools are trained on existing data: if this data reflects historical biases (which it often does), the AI learns and automates those same biases.

In addition, as the research shows, most legal and technical fairness measures only address single-axis discrimination (e.g., bias based on gender or ethnicity). They completely fail to capture the compounded and unique disadvantages of intersectional discrimination, such as those faced by migrant women, older women, or women with disabilities.

Without clear intersectional metrics to abide by, fairness audits of hiring algorithms can produce misleading results, declaring a system “fair” while it is still actively discriminating against the most marginalized groups.
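To make this gap concrete, the sketch below uses invented numbers (not FINDHR data) to show how a common single-axis selection-rate screen, the “four-fifths” rule, can pass on gender and on migrant status separately while the intersectional subgroup of migrant women remains severely disadvantaged:

```python
# Hypothetical hiring outcomes: 100 applicants per subgroup.
# The counts are invented to illustrate how single-axis fairness checks
# can pass while an intersectional subgroup is clearly disadvantaged.
counts = {
    ("man", "native"):    {"applied": 100, "selected": 50},
    ("man", "migrant"):   {"applied": 100, "selected": 70},
    ("woman", "native"):  {"applied": 100, "selected": 70},
    ("woman", "migrant"): {"applied": 100, "selected": 30},
}

def rate(groups):
    """Selection rate pooled over a set of subgroups."""
    applied = sum(counts[g]["applied"] for g in groups)
    selected = sum(counts[g]["selected"] for g in groups)
    return selected / applied

def four_fifths_ratio(rates):
    """Classic disparate-impact screen: lowest rate divided by highest."""
    return min(rates) / max(rates)

# Single-axis checks: both pass the 0.8 threshold.
gender_ratio = four_fifths_ratio([
    rate([g for g in counts if g[0] == "man"]),
    rate([g for g in counts if g[0] == "woman"]),
])
origin_ratio = four_fifths_ratio([
    rate([g for g in counts if g[1] == "native"]),
    rate([g for g in counts if g[1] == "migrant"]),
])

# Intersectional check: compare every subgroup directly -- it fails badly,
# driven by the 0.30 selection rate for migrant women.
subgroup_ratio = four_fifths_ratio([rate([g]) for g in counts])

print(f"gender axis:    {gender_ratio:.2f}  (passes the 0.8 threshold)")
print(f"migrant axis:   {origin_ratio:.2f}  (passes the 0.8 threshold)")
print(f"intersectional: {subgroup_ratio:.2f}  (fails)")
```

Both marginal ratios come out at 0.83, above the conventional 0.8 threshold, while the subgroup ratio is 0.43: the discrimination against migrant women is invisible unless subgroups are examined jointly.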

Identifying the Gaps in Policy

The current EU legal frameworks, including the AI Act, GDPR, and anti-discrimination directives, are a necessary and valuable starting point. However, the FINDHR research has identified significant gaps and uncertainties that leave individuals and society at risk.

The AI Act, for example, focuses primarily on requirements for providers before a tool is on the market, but this is not sufficient to address harm and biases that emerge after deployment in a real-world context.

Key Recommendations for Fairer AI Hiring Processes

The FINDHR Toolkit for Policy Makers synthesizes our interdisciplinary research into clear, actionable recommendations to build a legal and technical infrastructure that protects people.

Our key recommendations include:

  • A Specific Legal Definition of Intersectional Discrimination. European anti-discrimination laws must be updated with a clear and explicit legal definition of intersectional discrimination; this is the only way to ensure that technical fairness measures and audits can accurately detect and mitigate it.
  • Mandatory Post-Deployment Monitoring. An AI system is not “bias-free” just because it passes a pre-launch test. We call for mandatory, systematic testing, independent auditing, and ongoing monitoring across the full life cycle of a hiring tool to detect unforeseen discriminatory patterns that appear in unpredictable real-world use.
  • Strengthening Transparency and Empowering Jobseekers. Transparency is fundamental to accountability. We recommend strengthening legal obligations to force employers to disclose when and how they are using algorithmic tools. Furthermore, job seekers must have enforceable rights to a meaningful explanation, the right to request a human review, and access to collective redress (class-action) mechanisms to challenge systemic bias.
  • Resolve the Data Dilemma. There is a critical legal tension that needs to be addressed on an EU level. To detect bias, we need to test using sensitive data (e.g., on ethnicity or disability). But data protection laws (like GDPR) rightly restrict collecting this data. The FINDHR toolkit identifies this challenge and proposes technical and legal solutions, such as Secure Multi-Party Computation (MPC), to allow for fair audits while protecting individual privacy.
  • Ensure Multi-Stakeholder Involvement. A socio-technical problem cannot be solved by data scientists and software developers alone. We recommend that policymakers mandate the involvement of diverse stakeholders, including civil society, equality bodies, and groups representing those who experience discrimination. Their involvement should be active in the design, deployment, and evaluation of AI hiring systems.
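As an illustration of how secure computation can reconcile bias testing with data protection, the toy sketch below uses additive secret sharing, the basic building block of MPC protocols. Each employer splits its private count (for example, of selected candidates from a protected subgroup) into random shares, so that no single auditing server ever sees a raw value, yet the exact aggregate can be reconstructed for the audit. All names and numbers here are hypothetical; real deployments use hardened MPC protocols and libraries, not this sketch.

```python
import secrets

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(value, n_servers):
    """Split a private value into n random shares that sum to it mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_servers - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(partial_sums):
    """Combine each server's local sum to recover only the total."""
    return sum(partial_sums) % PRIME

# Hypothetical private inputs: each employer's count of selected candidates
# from a protected subgroup. No employer reveals its count to anyone.
employer_counts = {"employer_a": 12, "employer_b": 4, "employer_c": 9}
n_servers = 3

# Each employer sends one share to each auditing server.
server_inboxes = [[] for _ in range(n_servers)]
for count in employer_counts.values():
    for inbox, s in zip(server_inboxes, share(count, n_servers)):
        inbox.append(s)

# Each server sees only uniformly random-looking shares and sums them locally.
partial_sums = [sum(inbox) % PRIME for inbox in server_inboxes]

# The auditor combines the partial sums and learns only the aggregate.
total = reconstruct(partial_sums)
print(f"aggregate count across employers: {total}")  # 25
```

The design point is that each individual share is statistically independent of the private value it came from, so a single server learns nothing; only the final combination reveals the aggregate needed for the fairness audit.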

Download the Toolkit

Tackling algorithmic bias requires a contextual, socio-technical approach that embeds human rights and intersectional feminist perspectives from the start.

This new toolkit is a vital resource for policymakers, civil society advocates, and anyone involved in the governance of AI.

We invite you to read, use and share these findings.

Download the “Just Hiring! A Toolkit for Policymakers” here.

Download the summary of the FINDHR toolkit for Policy Makers.

 

(The FINDHR project has been funded under the EU framework programme Horizon Europe under Grant Agreement No. 101070212. WIDE+ is a proud partner in the consortium.)
