News

Artificial Intelligence in the Hiring Process — Can I Prove I was Discriminated Against?

March 19, 2025


Written by Wills Ladd

Brought to you by Filippatos Employment Law, Litigation & ADR

The Rise of AI in Hiring and the Specter of Bias

Modern hiring increasingly relies on artificial intelligence. Tools like automated resume scanners and AI-driven interview analyses are now common, used by most employers, including nearly all Fortune 500 companies. While these technologies promise efficiency, they risk embedding and amplifying societal biases, potentially leading to unlawful discrimination.
This raises crucial questions: Is hiring using AI inherently discriminatory? Can individuals file discrimination lawsuits if they suspect AI bias? Regulators like the U.S. Equal Employment Opportunity Commission (EEOC) and lawmakers are addressing these concerns with guidance and specific laws, such as New York City’s Local Law 144. The rapid adoption of AI in hiring may be outpacing the understanding of its discrimination risks. Ironically, technology intended to reduce bias can systematize it if not carefully managed.

How AI Learns Bias: Understanding “Trained Biases”

AI systems learn from data, and if that data reflects historical inequalities, the AI learns those inequalities as well, a phenomenon known as "trained bias." This is not usually intentional programming but rather a reflection of systemic issues in the underlying data.

Bias can manifest in several ways:

  • Biased Training Data: Historical hiring data showing skewed representation can teach AI to favor dominant groups. For example, facial recognition software struggled with darker skin tones due to datasets dominated by lighter-skinned individuals. Amazon famously scrapped an AI recruiting tool in 2018 that penalized resumes mentioning “women’s” colleges, learning this bias from past hiring patterns.
  • Algorithmic Design & Proxies: Algorithms might use neutral “proxies” (like zip codes or specific word choices) that correlate strongly with protected characteristics (race, gender), leading to discriminatory outcomes.
  • Societal Reflection: AI can mirror societal power structures and prejudices present in the data.

These trained biases can lead to various forms of discrimination:

  • Racial Discrimination: AI might learn biases from data reflecting historical underrepresentation or rely on proxies correlated with race. The Mobley v. Workday lawsuit alleges such discrimination against African American applicants.
  • Age Discrimination: AI tools might disfavor older workers. The EEOC settled a case where AI rejected applicants based on age. Algorithms might also undervalue experience profiles common among older workers.
  • Sex Discrimination / Gender Discrimination: As seen with Amazon, AI can learn to prefer male candidates based on historical data, leading to disparate impacts on women.
  • Disability Discrimination: AI may screen out individuals with disabilities, including neuro-divergent individuals whose communication styles might be misinterpreted.

The Legal Landscape: Is Hiring Using AI Automatically Discriminatory?

Using AI in hiring is not automatically illegal. However, employers that use AI tools remain fully subject to anti-discrimination laws, and they cannot point to an algorithm to excuse discriminatory outcomes.

Key legal frameworks apply:

  • Federal Anti-Discrimination Laws: Title VII (race, color, religion, sex, national origin), the ADEA (age 40+), and the ADA (disability) cover AI-assisted employment decisions.
  • EEOC Guidance: The EEOC clarifies that employers are responsible for AI tool discrimination, even from third-party vendors. Employers must analyze tools for “disparate impact”—neutral practices harming protected groups.
  • Department of Labor (DOL) Framework: The DOL offers voluntary guidance for employers to minimize algorithmic discrimination risk.
  • State and Local Laws: Laws like NYC’s Local Law 144 specifically target AI in employment.

Can I Sue? The Hurdles in Proving AI Discrimination

While discrimination lawsuits over biased AI hiring are possible, proving them is difficult:

  • The "Black Box" Problem: AI decision-making can be opaque, making it hard to understand why a decision was made. Vendor trade secrecy often adds another barrier.

  • Proving Causation: Linking the AI’s opaque process to a discriminatory outcome based on a protected characteristic (or its proxy) is challenging.
  • Evidence Access: Obtaining the algorithm’s code, training data, and audit details is often difficult.
  • Disparate Impact Challenges: Plaintiffs must identify the specific practice causing disparity and overcome the “business necessity” defense, which is complex with AI.
  • Proving Intent: Establishing intentional discrimination is complicated by the complex chain of actors involved in AI development and deployment.

Potential Paths to Proving Your Case & Emerging Trends

Despite hurdles, options exist for those suspecting AI discrimination:

  • Leveraging Disparate Impact: Focusing on discriminatory outcomes under Title VII remains viable, often requiring statistical evidence.
  • Utilizing NYC Local Law 144: For NYC residents, mandated audit summaries can provide direct evidence of disparate impact. Non-compliance with the law can also strengthen a case.
  • The Discovery Process: Litigation allows formal requests for information about the AI tool, though these may be contested.
  • Emerging Vendor Liability: Holding AI vendors liable is a key development. The Mobley v. Workday lawsuit explores this, arguing vendors can be “agents” of employers. If successful, this could force vendors to prioritize fairness.
  • Documentation: Keep meticulous records of applications, communications, rejections (note timing for potential automation clues), and any automated employment decision tool (AEDT) disclosures.

What Should You Do If You Suspect AI Hiring Discrimination?

If you believe artificial intelligence practices led to racial discrimination, age discrimination, sex discrimination, gender discrimination, or disability discrimination in New York:

  • Document Everything: Keep copies of job postings, applications, communications, rejection notices, and any AEDT disclosures.
  • Check for Disclosures: Look for required AEDT disclosures and audit summaries on the employer’s website, especially for NYC roles.
  • File a Charge: Contact the EEOC or the NYC Commission on Human Rights. Strict deadlines apply (often 180 or 300 days). Filing a charge is usually required before suing. State your suspicion about AI involvement.
  • Seek Legal Counsel: Consult an experienced New York discrimination lawyer. An attorney can assess your case, explain relevant laws (like NYC LL 144), guide evidence gathering, and represent you.

Call Us

We at Filippatos PLLC believe that everyone deserves the right to work freely and to a fair hiring process. If you believe you are experiencing discrimination at work due to reasons outside your control, please give us a call at 888-9-JOBLAW for a free consultation. We will do our utmost to help secure you the justice you deserve.