Recording available of online roundtable: Challenges and Solutions towards Anti-Discrimination Assessment of Algorithmic Tools in the Hiring Sector, 14 July

>>> WATCH THE RECORDING HERE <<<

Report from the FINDHR Online Roundtable on 14 July 2025: “Challenges and Solutions towards Anti-Discrimination Assessment of Algorithmic Tools in Hiring”

Algorithmic (AI) tools are increasingly used in the European market to make the hiring process more efficient, most often for pre-selecting or ranking candidates. However, there is no conclusive evidence that these tools propose the most suitable candidates or the best ranking of them. In fact, they can reproduce existing discrimination in the labour market. To address this risk, regular monitoring of such tools is necessary: they not only need to be robustly tested during development, but must also be monitored and assessed when they are used in actual job selection situations.

The AI Act of the European Union (EU) remains vague on how to ensure AI systems are trustworthy after they are placed on the market, and the Act is moreover still in the process of coming into application. Anti-discrimination directives, on the other hand, were not designed to deal with algorithmic processes. The General Data Protection Regulation (GDPR) does provide direction on how to collect and handle the data in a hiring pipeline, but there is a grey area for those who want to actively monitor and assess AI tools in the hiring pipeline with the aim of eliminating discriminatory impact.

This roundtable aimed to share challenges and possible solutions for effectively addressing intersectional discrimination in AI-assisted hiring processes. It reflected on outcomes from the Fairness and Intersectional Non-Discrimination in Human Recommendation (FINDHR) research project, funded by the EU Horizon programme to develop innovative solutions for the use of AI to prevent, detect, and mitigate intersectional discrimination in algorithmic hiring. One promising solution researchers connected to FINDHR have found is secure multi-party computation as a privacy-friendly data collection and processing technique. However, important questions remain with regard to legal compliance and implementation, in addition to finding other technical solutions.
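To make the idea of multi-party computation concrete, the sketch below shows additive secret sharing, its most basic building block. This is an illustration only, not FINDHR's actual protocol: each applicant splits a sensitive yes/no attribute into random shares held by independent parties, so no single party learns any individual's attribute, yet the parties can jointly compute the aggregate count a discrimination monitor needs.

```python
# Illustrative sketch of additive secret sharing (NOT FINDHR's protocol).
# All names and parameters here are hypothetical choices for the example.
import random

MODULUS = 2**61 - 1  # arithmetic is done modulo a large prime

def share(secret: int, n_parties: int) -> list[int]:
    """Split a secret into n random shares that sum to it mod MODULUS."""
    shares = [random.randrange(MODULUS) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MODULUS)
    return shares

def aggregate(all_shares: list[list[int]]) -> int:
    """Each party sums the shares it holds; combining only these partial
    sums reveals the total count, never any individual's attribute."""
    partial_sums = [sum(column) % MODULUS for column in zip(*all_shares)]
    return sum(partial_sums) % MODULUS

# Ten applicants; 1 = belongs to a monitored group, 0 = does not.
attributes = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
shares_per_applicant = [share(a, n_parties=3) for a in attributes]
print(aggregate(shares_per_applicant))  # -> 3
```

Real deployments add protections this sketch omits, such as secure channels between parties and safeguards against collusion, which is part of why the legal-compliance questions raised at the roundtable remain open.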

This event was aimed at everyone interested in discrimination in AI-assisted recruitment, including researchers, developers, product managers, quality assurance engineers, HR professionals, activists, workers’ representatives, and the general public.

Practical information:

Panellists:

Dr. Nina Baranowska, LLM, is a legal researcher at iHub: Interdisciplinary Research Hub on Digitalization and Society, Radboud University, the Netherlands. Her research expertise focuses on the challenges of AI and digital technologies, with a particular interest in data protection, non-discrimination law, and product liability. She has pursued her research interests in new technologies through national and international research projects, and has been awarded scholarships at renowned research centers. Nina currently works as a researcher in the interdisciplinary and EU-wide project FINDHR (Fairness and Intersectional Non-Discrimination in Human Recommendation), funded by the Horizon Europe programme, where she focuses on the intersection between non-discrimination and data protection aspects of algorithmic hiring.

Dr. Changyang He is a postdoc researcher at the Max Planck Institute for Security and Privacy (MPI-SP) working with Dr. Asia Biega. His primary research areas are Human-Computer Interaction (HCI), Computer-supported Cooperative Work (CSCW) and responsible computing. He employs a human-centered approach to study how to develop algorithmic and social systems to enhance digital inclusion, connectivity and wellbeing for underrepresented populations. He is currently working on a data-protection-compliant protocol for fairness monitoring as part of the Horizon Europe project FINDHR. He has published more than 20 articles in top-tier HCI venues such as CHI and CSCW, including five award-winning papers. He received his PhD in Computer Science and Engineering from the Hong Kong University of Science and Technology (HKUST).

Dr. Aída Ponce Del Castillo is a Senior Researcher at ETUI’s Foresight Unit. ETUI, the European Trade Union Institute, is the independent research and training centre of the European Trade Union Confederation. Her research focuses on strategic foresight and on the legal, ethical, social and regulatory issues of emerging technologies. She is a member of the Competent Authorities Sub-Group on the regulation of nanomaterials at the European Commission. At the OECD, she is a member of the Working Party on ‘Bio-, Nano- and Convergent Technologies’ and of its work on AI governance. Previously, she was Head of the ETUI Health and Safety Unit, working on occupational health and safety policies in the EU, and Coordinator of the Workers’ Interest Group at the Advisory Committee on Safety and Health to the European Commission. Aída Ponce Del Castillo is a lawyer by training and obtained her European Doctorate in Law, focusing on the regulatory issues of human genetics, from the Universities of Valencia and Bonn. She also holds a Master’s degree in Bioethics.

Dr. Ansgar Koene engages with policy developments around the governance and regulation of Artificial Intelligence (AI). He works with policymakers, regulators, industry leaders and other stakeholders to support the trustworthy use of AI for the benefit of people, society and organizations. He is EY’s Global AI Ethics and Regulatory Leader. Ansgar chaired the IEEE 7003-2024 Standard for Algorithmic Bias Considerations working group and is a co-convener for the work on AI conformity assessment within the European standards body CEN-CENELEC JTC21 “AI” committee. He is a trustee of the 5Rights Foundation for the rights of young people online, and advises on AI and data ethics for various NGOs and research consortia. Ansgar has a multidisciplinary research background, ranging from policy and governance of algorithmic systems, data privacy, AI ethics, AI standards, and robotics to computational neuroscience. He holds an MSc in Electrical Engineering and a PhD in Computational Neuroscience.

Moderator:  Dr. Angela Müller is Executive Director of AlgorithmWatch CH in Zurich and Executive Board Member of AlgorithmWatch in Berlin. Angela has testified as an expert before the Council of Europe, the German Bundestag and the Swiss Parliament, and was appointed as one of “100 Women in AI Ethics” worldwide in 2024. She is a member of expert working groups of the Swiss Federal Administration and – connected to her role at AlgorithmWatch – of the Federal Media Commission. Prior to her current role, she held positions at various universities, on an innovation platform, a civil society think tank on foreign policy and the Swiss Foreign Ministry.

FINDHR

This roundtable is organized as part of the FINDHR project. This activity is funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the granting authority can be held responsible for them.

The FINDHR project facilitates the prevention, detection, and management of discrimination in algorithmic hiring and closely related areas involving human recommendation. On completion, the project’s publications, software, courseware and datasets will be made freely available to the public under free and open licenses.

Follow FINDHR at www.findhr.eu, on 📢 LinkedIn, and 📢 join its mailing list.

Note on the cover illustration: Creative Commons, designed by Justice Adda for the FINDHR project.
