AI Bias in Hiring: What Talent Leaders Need to Know in 2026

AI tools in recruitment are no longer experimental. Across screening, video interviewing, assessment, and candidate ranking, automation has become part of how many organisations hire. That shift has brought real efficiency gains, but it has also introduced a category of risk called AI bias in hiring that most talent leaders are not yet fully equipped to manage.
The core problem is not that these tools are inherently discriminatory. It is that they reflect the data and assumptions they were built on, and when those foundations are flawed, the outcomes can exclude candidates in ways that are systematic, invisible, and legally consequential.
With lawsuits already in play and regulators paying close attention, the compliance landscape is shifting fast. Before any organisation deploys an AI hiring tool in 2026, its talent leaders need to understand what they are taking on.
Causes of AI Bias in Hiring

AI bias in hiring rarely happens because someone designed a tool to discriminate. More often, it emerges from decisions made earlier in the process: how the tool was trained, what data it used, and what objectives it was optimised for. Understanding the root causes is the starting point for managing the risk.
Biased Training Data
The most common cause is a model trained on historical hiring decisions that already contained bias. When past data reflects a pattern of hiring mostly men for technical roles, or mostly younger candidates for entry-level positions, the model learns to replicate that pattern, as the toy sketch after this list illustrates.
It learns to favour candidate profiles that look like previous hires
Historically underrepresented groups get systematically deprioritised, not because they are less qualified but because they look different from the pattern
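To make the mechanism concrete, here is a toy sketch in Python. It uses synthetic data and scikit-learn, and every name and number in it is illustrative rather than drawn from any real hiring tool: the historical "hired" labels are generated with a built-in preference for one group, and the trained model then reproduces that preference for two equally qualified candidates.

```python
# A toy illustration, not a real hiring model: train a classifier on
# historical decisions that favoured one group, then show the learned
# model reproducing that preference. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
skill = rng.normal(size=n)            # true qualification signal
group = rng.integers(0, 2, size=n)    # 0/1 demographic marker
# Historical "hired" labels: skill matters, but group 1 was favoured.
hired = (skill + 1.5 * group + rng.normal(scale=0.5, size=n)) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two candidates with identical skill but different group membership:
same_skill = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(same_skill)[:, 1])   # group 1 scores higher
```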
Proxy Discrimination
Even when a tool does not use protected characteristics directly, it can still discriminate by relying on variables that correlate with them, a process known as proxy discrimination; the sketch after this list shows one way to screen for it.
Residential postcodes can correlate with race or socioeconomic background
School names can correlate with socioeconomic status and indirectly with ethnicity
Job title formatting or language patterns can correlate with gender or age
The tool never mentions protected characteristics but still produces discriminatory outcomes
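One way an analytics team can screen for proxies before a feature reaches a model is to test whether the nominally neutral variable is statistically associated with a protected attribute. The sketch below is a minimal illustration under assumed conditions: the applicants table and its postcode_area and ethnicity columns are hypothetical, and a real audit would use far more data and a more rigorous methodology.

```python
# A minimal proxy-screening sketch. The DataFrame and its columns
# ("postcode_area", "ethnicity") are hypothetical; demographic data
# would be collected separately, for audit purposes only.
import pandas as pd
from scipy.stats import chi2_contingency

applicants = pd.DataFrame({
    "postcode_area": ["N1", "N1", "N1", "E8", "E8", "E8", "SW3", "SW3"],
    "ethnicity":     ["A",  "A",  "A",  "B",  "B",  "B",  "A",   "A"],
})

# Cross-tabulate the "neutral" feature against the protected attribute.
table = pd.crosstab(applicants["postcode_area"], applicants["ethnicity"])

# Chi-squared test of independence: a small p-value suggests the feature
# carries information about the protected attribute and could act as a
# proxy if the model is allowed to use it.
chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p_value:.3f}")
```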
Facial and Voice Analysis in Video Interviews
Several AI video assessment tools claim to evaluate personality, cultural fit, or performance potential from how a candidate looks and speaks. The evidence for the validity of these assessments is thin, and the risk of discrimination is significant.
Facial recognition models have been shown to perform less accurately on darker skin tones
Accent and speech pattern analysis can disadvantage non-native speakers
Candidates with disabilities or neurodivergent conditions may be scored poorly for reasons entirely unrelated to their capability
Lack of Human Oversight
When AI tools make decisions without a human reviewing the output, errors and patterns of bias go unchecked at scale.
Automated rejection means hundreds of candidates are screened out before any human sees their application
No human review means no opportunity to catch when the model is producing unusual or discriminatory results
Poorly Defined Success Criteria
What a model is optimised to predict matters enormously. If it is trained to replicate the characteristics of current high performers, it can entrench the existing demographic makeup of the workforce rather than expanding it.
The model learns what a successful employee looks like today, not what one could look like
It may favour candidates who fit a narrow profile that reflects past culture rather than future capability
Diverse candidates who would succeed in the role but look different from the existing team get filtered out before they are ever seen
How to Use AI in Hiring Responsibly

None of the above means organisations should avoid AI hiring tools altogether. It means deploying them with intent, governance, and accountability built in. The following five practices reflect what responsible deployment looks like.
First, conduct a bias audit before deployment. Before any AI tool goes live in your hiring process, commission an independent bias audit that tests the tool's outputs across gender, age, race, and disability status. Several specialist firms now offer this service, and in some jurisdictions it is a legal requirement.
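As a complement to a formal external audit, an internal team can run simple pre-deployment checks on exported tool scores. The sketch below assumes a hypothetical export with score and gender columns; an independent audit would cover more attributes and follow a defensible methodology.

```python
# A minimal pre-deployment check, assuming the tool's scores can be
# exported alongside self-reported demographics. Column names and data
# are hypothetical.
import pandas as pd

results = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "F", "M", "F", "M"],
    "score":  [0.62, 0.55, 0.81, 0.77, 0.58, 0.74, 0.61, 0.79],
})

by_group = results.groupby("gender")["score"].agg(["mean", "std", "count"])
print(by_group)

# A large gap in mean scores between groups is an early warning that the
# tool may rank one group systematically lower and warrants investigation.
gap = by_group["mean"].max() - by_group["mean"].min()
print(f"Mean score gap across groups: {gap:.2f}")
```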
Second, keep humans in the loop at every consequential stage. AI tools should inform decisions, not make them autonomously. Any candidate rejection, shortlisting decision, or ranking should pass through a human reviewer who has the authority and the accountability to override the model.
Third, be transparent with candidates. Tell candidates when AI is being used in the process, what it is assessing, and how decisions are made. This is already required by law in several US cities and states, and it is good practice regardless of whether the law requires it in your jurisdiction.
Fourth, test for adverse impact regularly. Once a tool is live, track whether it is producing selection rates that differ significantly across protected groups. A meaningful disparity between groups is a signal that the model is producing discriminatory outcomes even if that was not the intent.
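A widely used benchmark for "meaningful disparity" is the four-fifths rule from EEOC guidance: if one group's selection rate falls below 80% of the highest group's rate, the disparity warrants review. The sketch below shows this check on hypothetical monitoring data; it is a screening heuristic, not a legal bright line.

```python
# A minimal adverse-impact monitoring sketch using the four-fifths rule.
# Data and column names are hypothetical.
import pandas as pd

outcomes = pd.DataFrame({
    "group":    ["A"] * 100 + ["B"] * 100,
    "selected": [1] * 30 + [0] * 70 + [1] * 18 + [0] * 82,
})

# Selection rate per group: share of applicants advanced by the tool.
rates = outcomes.groupby("group")["selected"].mean()

# Impact ratio: each group's rate relative to the most-selected group.
impact_ratio = rates / rates.max()
print(impact_ratio)

# A ratio below 0.8 is the conventional trigger for further review.
flagged = impact_ratio[impact_ratio < 0.8]
print("Groups flagged for review:", list(flagged.index))
```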
Fifth, hold vendors accountable. Employers remain legally liable for discriminatory outcomes even when the tool was built by a third party. Before signing a contract, require vendors to share their bias testing methodology, audit results, and ongoing monitoring commitments in writing.
Recent Lawsuits and Enforcement Cases
Three cases have materially shaped how the legal and regulatory community views AI bias in hiring, and every TA leader should be familiar with all three.
EEOC v. iTutorGroup (2023)
iTutorGroup's recruitment software was programmed to automatically reject female applicants aged 55 and over, and male applicants aged 60 and over, for tutoring roles, a practice that violated the Age Discrimination in Employment Act. The company settled for $365,000 and was required to reform its hiring systems and submit to EEOC monitoring.
The case established clearly that the EEOC views automated hiring tools as selection procedures subject to existing anti-discrimination law.
Mobley v. Workday, Inc. (2024)
A class action lawsuit against Workday alleges that its algorithmic decision-making tools discriminate against African American, older, and disabled applicants. The plaintiff applied for more than 80 roles using systems believed to be powered by Workday's tools and was rejected every time. In May 2025, the court certified the case as a collective action, meaning other job seekers who believe they were similarly affected can now join.
The case signals that AI vendors, not just employers, can be held liable for discriminatory outcomes.
CVS AI Video Interview Settlement (2024)
CVS settled a case after its AI-powered video interview tool was alleged to have rated candidates on facial expressions and physical characteristics in a manner that violated Massachusetts law.
The case reinforced concerns about the validity and fairness of video-based AI assessment tools, and it added to the growing body of enforcement action that has put video analysis tools specifically under scrutiny.
AI Regulations You Need to Know

In 2026, the regulatory landscape for AI in hiring is moving quickly and unevenly, with several frameworks already shaping how companies must operate.
New York City Local Law 144
New York City Local Law 144 is one of the first AI hiring regulations in the US to be both specific and actively enforced. It requires employers using automated employment decision tools for hiring or promotion to conduct annual independent bias audits, disclose the use of such tools to candidates, and publish audit results publicly.
Illinois AI Video Interview Act
The Illinois AI Video Interview Act focuses specifically on AI-driven interviews. It requires employers to notify candidates before using AI to analyse video interviews, explain what the AI evaluates, and obtain consent. It also requires demographic reporting on candidates who were not selected.
EU AI Act
The EU AI Act classifies AI systems used in employment as high-risk, meaning organisations operating in the EU must meet requirements for transparency, human oversight, technical robustness, and bias testing before deploying any AI tool in their hiring process.
To Wrap Up
AI bias in hiring is a present risk, with active litigation, regulatory enforcement, and an expanding patchwork of laws that TA leaders and CHROs need to navigate right now. That means auditing tools before they go live, keeping humans accountable for every consequential decision, being honest with candidates about how the process works, and staying current on a regulatory landscape that is moving faster than most internal compliance functions can track.
If you are working through how to build an AI governance framework for your talent acquisition function that holds up legally and operationally, WezOps works with talent leaders to design the policies, audit processes, and oversight structures that make responsible AI adoption seamless.
AI Bias in Hiring FAQs
What is AI bias in hiring and how does it happen?
AI bias in hiring occurs when an automated tool produces outcomes that systematically disadvantage certain groups of candidates based on race, gender, age, disability, or other protected characteristics. It most commonly happens when a tool is trained on historical hiring data that already reflected bias, or when it uses variables that appear neutral but correlate with protected characteristics.
Are employers legally responsible for AI bias in their hiring tools?
Yes. Under existing US federal law, employers are responsible for discriminatory outcomes in their hiring process regardless of whether those outcomes were produced by a human or an algorithm. The EEOC has confirmed that it treats automated selection tools as selection procedures subject to anti-discrimination law. Employers can also be held responsible for tools built by third-party vendors if those vendors are acting on the employer's behalf.
How can TA leaders reduce the risk of AI bias in their hiring process?
The most effective steps are to commission an independent bias audit before deploying any AI hiring tool, maintain human review at every stage where a consequential decision is made, disclose to candidates when AI is being used, and track selection rates across protected groups on an ongoing basis after deployment. Choosing vendors who are transparent about their bias testing methodology and holding them contractually accountable for discriminatory outcomes is equally important.