WIDE+ participation in Horizon Europe project “FINDHR” to prevent, detect, and mitigate intersectional gendered discrimination in algorithmic hiring
WIDE+ is part of the consortium implementing FINDHR, a 3-year research and innovation project. Together, the partners will develop methods, algorithms, and training for an intersectional anti-discrimination approach that is contextualized within the technical, legal, and ethical problems of algorithmic hiring and applicable to a broad class of applications involving human recommendation.
This project is supported by the European Union’s Horizon Europe Programme (grant agreement No 101070212), under the call HORIZON-CL4-2021-HUMAN-01-24 (“Tackling gender, race and other biases in Artificial Intelligence”). The consortium includes leaders in algorithmic fairness and explainability research (UPF, UVA, UNIPI, MPI-SP), pioneers in the auditing of digital services (AW, ETICAS), and two industry partners that are leaders in their respective markets (ADE, RAND), complemented by experts in technology regulation (RU) and cross-cultural digital ethics (EUR), as well as worker representatives (ETUC) and two NGOs dedicated to fighting discrimination (WIDE+ and PRAK). WIDE+ brings its gender expertise and network to the project.
The project held its kick-off meeting in December 2022 and is now getting under way. The project coordinator, Carlos Castillo, said at the start of the project: “Algorithms are increasingly intersecting with important aspects of our lives and shaping our social interactions and careers. Without the necessary understanding and oversight, there are critical risks that need to be better understood. I am excited to work with academic and industry researchers and representatives from advocacy groups in this very challenging, high-risk/high-reward research project.”
Through a context-sensitive, interdisciplinary approach, FINDHR will develop new technologies to measure discrimination risks, to create fairness-aware rankings and interventions, and to provide multi-stakeholder actionable interpretability. It will also produce new technical guidance to perform impact assessment and algorithmic auditing, a protocol for equality monitoring, and a guide for fairness-aware AI software development. The project will also design and deliver specialized skills training for developers and auditors of AI systems.
Algorithmic hiring is on the rise and rapidly becoming necessary in some sectors. Artificial Intelligence technologies promise to deal with hundreds or thousands of applicants at high speed (Heilweil R., ‘Job recruiters are using AI in hiring’, Vox, 2019). Moreover, their uptake in European HR teams and Public Employment Services (PES) is growing faster than the global average (High-Level Expert Group on the Impact of the Digital Transformation on EU Labour Markets, final report, 2019). European offerings are highly innovative and include tools that instantly select and rank candidates based on their resumes and application materials, or that assess candidates through online tests or games.
Discriminatory biases have been documented across almost all applied domains of Artificial Intelligence (AI) (Feuerriegel S, Dolata M, Schwabe G, ‘Fair AI’, in ‘Business and Information Systems Engineering 62’, 2020). It is increasingly acknowledged that algorithmic hiring systems exhibit such biases as well, reproducing and amplifying pre-existing discriminatory barriers to entry into the labor market. The FINDHR project is designed to create practical, integrated solutions to tackle this issue.
FINDHR (Fairness and Intersectional Non-Discrimination in Human Recommendation) aims to push beyond current general guidance on addressing discrimination in algorithms and to develop a more concrete approach for everyone involved in ensuring fair online recruitment, leading to systems that explicitly address bias while excelling at finding the best candidates for a vacancy. The project will pioneer a range of research methods; WIDE+ will in particular conduct participatory action research with marginalized and discriminated groups.
The project is grounded in EU regulation and policy. As tackling discrimination risks in AI requires processing sensitive data, it will perform a targeted legal analysis of tensions between data protection regulation (including the GDPR) and anti-discrimination regulation in Europe. It will also engage with underrepresented groups through multiple mechanisms including consultation with experts and participatory action research. All outputs will be released as open access publications, open source software, open datasets, and open courseware.
Consortium Partners:
- UNIVERSITAT POMPEU FABRA (UPF), Spain (Project Coordinator)
- UNIVERSITEIT VAN AMSTERDAM (UvA), Netherlands
- UNIVERSITA DI PISA (UNIPI), Italy
- MAX-PLANCK-GESELLSCHAFT ZUR FORDERUNG DER WISSENSCHAFTEN EV (MPI-SP), Germany
- STICHTING RADBOUD UNIVERSITEIT (RU), Netherlands
- ERASMUS UNIVERSITEIT ROTTERDAM (EUR), Netherlands
- WOMEN IN DEVELOPMENT EUROPE+ (WIDE+), Belgium
- PRAKSIS ASSOCIATION (PRAK), Greece
- CONFEDERATION EUROPEENNE DES SYNDICATS ADF (ETUC), Belgium
- ETICAS RESEARCH AND CONSULTING SL (ETICAS), Spain
- RANDSTAD NEDERLAND BV (RAND), Netherlands
- ADEVINTA SPAIN, SLU (ADE), Spain
- ALGORITHMWATCH SWITZERLAND (AW), Switzerland
Please follow the FINDHR project on Twitter (@HorizonFINDHR) and Mastodon (@findhr@eupolicy.social). The project website will be launched soon.