A technology company is facing a proposed class action that could redefine how federal consumer protection law applies to AI-driven hiring tools.
The lawsuit alleges that Eightfold AI Inc.'s résumé screening and candidate scoring technology functions like a background check agency without complying with the disclosure, access, and dispute requirements of the federal Fair Credit Reporting Act (FCRA) or California's parallel statute, the Investigative Consumer Reporting Agencies Act.
Rather than alleging discrimination, the plaintiffs are advancing a different theory: that AI-generated applicant evaluations may qualify as “consumer reports” under the FCRA, triggering longstanding compliance obligations.
A novel legal theory
The complaint, filed by the plaintiff-side employment firm Outten & Golden and the nonprofit workers' rights organization Towards Justice, argues that the company's platform assembles and evaluates personal information on job applicants and furnishes that information to employers for hiring decisions, placing it squarely within the FCRA's definition of a "consumer reporting agency."
The press release announcing the suit describes the case as targeting “secretive, AI-driven ‘consumer reports’ on job applicants – often without their knowledge, consent, or any opportunity to correct errors – before the reports are used to screen them for jobs.”
According to the filing, the platform gathers data from multiple sources, applies a proprietary large language model, and generates scores ranking applicants on their "likelihood of success." Lower-ranked candidates may be filtered out before a human ever reviews their application.
The lawsuit contends that “there is no ‘AI exemption’ from longstanding worker and consumer protections,” and argues that companies deploying such tools must follow the same procedures required of traditional background screening firms.
If the court agrees, AI hiring vendors could be required to provide applicants with disclosures, copies of reports, and opportunities to dispute inaccuracies before adverse decisions are made.
What FCRA requires
The FCRA governs how employers use third-party reports for employment purposes. Among other requirements, it mandates:
- Clear, stand-alone disclosure and written authorization before obtaining a report.
- Pre-adverse action notice if an employer intends to reject a candidate based on the report.
- Access and dispute rights allowing individuals to correct inaccurate information.
Historically, these rules have applied to credit bureaus and background check companies. This case tests whether algorithmic hiring scores fall within the same statutory framework.
Why employers are watching closely
Until now, most legal scrutiny of AI hiring tools has focused on discrimination claims. The new case shifts the focus to consumer protection law, potentially lowering the bar for plaintiffs. Unlike bias claims, which require proof of discriminatory impact, FCRA claims center on procedural compliance.
Because the FCRA provides statutory damages of $100 to $1,000 per willful violation, class actions can escalate quickly in both scope and exposure.
The case remains in its early stages, and it is not yet clear whether the court will accept the plaintiffs’ theory that AI-generated hiring scores qualify as “consumer reports.”