Read the full reports:
- the first panel debate “Designing a responsible AI assisted hiring pipeline through alignment between recruitment, software development and interdisciplinary approaches”,
- and the second panel debate “Refining Regulation, Policy and Governance for More Effective Mitigation of Intersectional Discrimination in AI-Assisted Hiring” at the FINDHR conference on 16 October 2025
For further information from previous events, also view the report from the Online Roundtable – Increasing Transparency to Reduce Discrimination in AI-Assisted Hiring, 8 July 2025, and the report from the FINDHR Online Roundtable on 14 July 2025: “Challenges in Anti-Discrimination Assessment of Algorithmic Tools in Hiring”
“Designing a responsible AI assisted hiring pipeline through alignment between recruitment, software development and interdisciplinary approaches”, Reflections from FINDHR panel debate, 16 October 2025 by Francesca Maddii
The FINDHR project (Fairness and Intersectional Non-Discrimination in Human Recommendation) was created to address a simple yet critical question: how do we ensure that Artificial Intelligence hiring technologies serve all applicants equally? At the closing conference, computer science and recruitment experts reflected on the lessons learned from this three-year interdisciplinary project in the panel debate titled “Combining Innovation in Software Design and Human Resource Management to Reduce Discrimination in the AI-Assisted Hiring Pipeline”.
The panel was moderated by Professor Frederik Zuiderveen Borgesius of Radboud University and featured Dr. Anna Gatzioura (Universitat Pompeu Fabra), Dr. Asia Biega (Max Planck Institute for Security and Privacy), and Anna Via (Head of AI at InfoJobs). They explored how fairness can and should be built into every stage of AI-assisted hiring to reduce discrimination, examining technical, social and legal strategies for a fairer and more responsible recruitment process. The panellists agreed that AI hiring tools can reduce discrimination only when humans remain accountable for their design and implementation, with a commitment to tackle biases and discriminatory processes where they arise.
Key Takeaways and Recommendations
Throughout the panel, many relevant suggestions and insights were shared. These were the main takeaways:
- Reducing discrimination in AI-assisted hiring processes is a shared responsibility across developers, HR, and policymakers. Collaboration across disciplines leads to better outcomes.
- Fairness must be built into the algorithm from the start (preferably within an interdisciplinary team), and developers must consider societal values and context.
- Synthetic data can be a safe and effective tool for stress-testing a system for discrimination during development, provided it is generated from good-quality, representative data.
- Companies should view reducing discrimination in AI-assisted decision-making not solely as a compliance issue, but as a trust-building mechanism to attract and retain customers; transparency and explainability are essential to build that trust.
- Continuous monitoring is needed to keep systems fair once they are in use in the market.
Anna Gatzioura stressed that a key takeaway from the research is that ensuring fairness in AI-based hiring requires both stress-testing a system before launch and proper monitoring once it is in production. “We recommend the use of synthetic data for benchmarking in the pre-deployment phase, as once something is deployed in production, it’s very difficult to predict the evolution of the system and to correct what will happen”, she noted. Gatzioura explained that synthetic data needs to be of good quality to be useful: if the original dataset is not representative, the synthetic data will not be either.
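To make the idea of pre-deployment stress-testing with synthetic data concrete, the sketch below shows one simple way it could be done. This is an illustrative example only, not the FINDHR toolkit: the scoring function, group labels and selection threshold are assumptions, and a real benchmark would generate synthetic profiles from good-quality, representative source data, as Gatzioura notes.

```python
# Minimal sketch (not the FINDHR toolkit): stress-testing a candidate-scoring
# model with synthetic profiles before deployment, then comparing selection
# rates between two groups.
import random

random.seed(42)

def score_candidate(candidate):
    # Placeholder scoring model; in practice this is the system under test.
    return 0.6 * candidate["years_experience"] / 10 + 0.4 * candidate["skill_match"]

def make_synthetic_candidates(n, group):
    # Draws simple synthetic profiles; a real generator should preserve the
    # statistical properties of a representative source dataset.
    return [
        {
            "group": group,
            "years_experience": random.randint(0, 10),
            "skill_match": random.random(),
        }
        for _ in range(n)
    ]

def selection_rate(candidates, threshold=0.5):
    selected = [c for c in candidates if score_candidate(c) >= threshold]
    return len(selected) / len(candidates)

group_a = make_synthetic_candidates(1000, "group_a")
group_b = make_synthetic_candidates(1000, "group_b")

rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
# A ratio well below 1.0 flags a potential disparity to investigate
# before the system goes into production.
print(f"selection rates: {rate_a:.2f} vs {rate_b:.2f}, "
      f"ratio {min(rate_a, rate_b) / max(rate_a, rate_b):.2f}")
```

The same kind of comparison can be repeated on live data after deployment, which is the continuous monitoring the panellists called for.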
Asia Biega echoed Gatzioura’s point that bias can emerge at any time during a system’s lifecycle, calling it a fundamental lesson. In this context she found through the research that the “devil is in the details”: a solution can seem clear in theory, but the way it gets technically implemented can be (too) difficult. “We can’t assume we solved a problem after proposing a solution; there are so many steps in between”.
Anna Via, Head of Artificial Intelligence at InfoJobs, a leading recruitment company in Spain, brought an industry viewpoint to the discussion. Her company worked with FINDHR to implement the toolkit in a real recruitment environment.
Via said that the research FINDHR has promoted is deeply relevant for today’s hiring processes, as companies need and want these AI tools to operate better and avoid further discrimination. She also warned against “implementation paralysis”: waiting for perfect systems before taking any action is not an achievable goal. Observing how others use AI can help shape good practices and avoid mistakes, with the objective of making sure that the algorithms implemented are used responsibly.
“Refining Regulation, Policy and Governance for More Effective Mitigation of Intersectional Discrimination in AI-Assisted Hiring”,
Reflections from FINDHR panel debate, 16 October 2025 by Masseni Keita
The further development of the Artificial Intelligence (AI) Act will define the options, standards, and oversight for the introduction and use of AI systems within the EU. The Act provides a necessary starting point for the regulation of AI, but it is not a final destination for realising the EU’s vision of human-centric AI, in which AI works for people and protects fundamental rights. The current regulation leaves pressing gaps in mitigating discrimination resulting from AI-assisted decision-making. Further measures in policy, regulation and governance are needed, focusing on the intersection of the anti-discrimination directives, the GDPR (General Data Protection Regulation) and the AI Act.
This panel reflected on the policy gaps and current best practices. It focused on the difficulties and opportunities for different oversight bodies, in particular equality and human rights bodies.

Baranowska, legal scholar for FINDHR’s equality monitoring protocol, explained the dilemma faced when combining technical solutions with legal requirements. While continuous monitoring is crucial, especially once an AI tool is in use in the market, processing candidates’ personal data to monitor algorithmic systems post-deployment raises complex legal issues under data protection law. Certain personal data relevant for discrimination monitoring, such as ethnicity, are considered “special categories” and are therefore subject to stricter safeguards under the GDPR. FINDHR researchers have therefore experimented with privacy-preserving techniques, such as multi-party computation (MPC). The proposal of MPC aims to ensure more valid consent and better protection of candidates’ rights, but it also requires enhanced policy support for technical and legal solutions.
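The core idea behind MPC can be illustrated with a small sketch. The example below uses additive secret sharing, one common MPC building block; it is an assumption-laden illustration of the general technique, not a description of FINDHR’s actual protocol. Each candidate’s sensitive flag is split into random shares held by independent parties, so no single party ever sees an individual’s data, yet the aggregate statistic needed for discrimination monitoring can still be reconstructed.

```python
# Illustrative sketch of additive secret sharing (one MPC building block),
# not FINDHR's actual protocol. The goal: count how many candidates from a
# monitored group were rejected, without any single party seeing per-candidate
# special-category data.
import random

MODULUS = 2**31 - 1  # all arithmetic is done modulo a large prime

def share(value, n_parties=3):
    # Split `value` into n additive shares that sum to it modulo MODULUS.
    shares = [random.randrange(MODULUS) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares

# Sensitive per-candidate flags (1 = monitored group AND rejected); in a real
# deployment these would only ever exist in shared form at the parties.
flags = [1, 0, 0, 1, 1, 0]

# Each party locally accumulates the shares it holds; each local total looks
# like random noise on its own.
party_totals = [0, 0, 0]
for flag in flags:
    for party, s in enumerate(share(flag)):
        party_totals[party] = (party_totals[party] + s) % MODULUS

# Only combining all parties' totals reveals the aggregate count.
aggregate = sum(party_totals) % MODULUS
print(f"aggregate count reconstructed from shares: {aggregate}")  # prints 3
```

Only the final aggregate is revealed, which is what makes this kind of technique attractive for post-deployment equality monitoring while keeping individual special-category data protected.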
Milla Vidina, Senior Policy Officer at Equinet, the European network of equality bodies, explained the role of equality bodies in monitoring and redressing discrimination that results from an AI system being used by humans. Equality bodies are independent institutions that promote equal treatment by assisting victims of discrimination, conducting independent research, publishing reports, and making recommendations on equality policy; they are also engaged in monitoring and addressing discrimination in AI-assisted hiring.
Equinet is one of a limited number of stakeholders with a human rights perspective taking part in the European technical committee CEN-CENELEC JTC 21, a body focused on AI standardization. Participation requires a significant investment of time and expertise, so it is not easily accessible for civil society. However, there are many other avenues, such as EU consultations, for raising the importance of non-discrimination and human rights when it comes to AI in the European market.
Nele Roekens, Artificial Intelligence Project Lead at Unia (the Belgian equality body) and Chair of the Working Group on AI for the European Network of National Human Rights Institutions, reflected on the role of national human rights institutions. These institutions often share overlapping mandates with equality bodies and often collaborate closely with them. They are also addressing the gap between rising awareness of algorithmic harms and the limited number of related formal complaints. This underrepresentation is partly due to information asymmetry and lack of awareness among affected individuals, who may perceive the effort required to challenge discrimination as disproportionate to the potential outcome.
She underlined current efforts and the importance of addressing these gaps through training on legal frameworks (particularly the EU AI Act, which combines product-safety principles with fundamental-rights protection) and other capacity-building initiatives.
David Reichel, Head of the Data & Digital Sector at the European Union Agency for Fundamental Rights (FRA), who was responsible, among other work, for the report Bias in Algorithms, emphasized that robust data demonstrating discriminatory patterns in recruitment is needed for effective enforcement and policymaking. The FRA is currently working on a new report examining AI-related risks to fundamental rights and providing practical guidance on the responsible use of AI in hiring.
