In mid-May a federal judge cleared the way for a nationwide class action against Workday, finding that its AI-driven hiring tools may have unfairly screened out applicants older than 40. The case sent a clear message: employers and AI vendors can face liability for algorithmic bias — even when the discrimination isn’t deliberate. Following is a brief review of the facts of the case, why the ruling matters, and next steps for employers.
Facts
The plaintiff, a Black man over age 40 who lives with anxiety and depression, applied to more than 100 jobs at companies using Workday’s AI hiring tools (e.g., personality and cognitive tests) — and was rejected every time. The plaintiff sued Workday, and the judge allowed the case to go forward on the question of whether Workday’s AI recommendation system had a disparate impact on applicants over 40.
Why the ruling matters
This lawsuit is one of the first major challenges to AI-based hiring practices under federal discrimination laws. It shows that employers can still be held liable if vendor tools unfairly reject protected groups, even if the bias is not intentional. Moreover, courts may treat screening systems as a “unified policy” even when different employers use the tools differently.
Next steps
Employers need to be nimble and proactive to limit exposure related to similar AI hiring tools. Here are three steps your company should be taking:
Create an AI governance framework that sets guardrails on how your company is going to use these algorithmic tools. The crux of this is oversight: who watches the watchmen? Make sure you have a process for monitoring the outcomes your AI produces. That means training employees on when to override algorithmic rankings, and auditing results for desired (and nondiscriminatory) outcomes (a minimal sketch of such an audit follows this list).
Keep clear records of hiring choices and the reasons behind them. Be cautious with tools that rely on vague or unexplainable scoring systems (e.g., the candidate is a “good fit,” a “good guy,” or a “personality hire”).
Ask vendors for documentation showing how their systems are tested for bias; if they balk, look harder. Insist on contractual assurances covering data transparency and nondiscrimination, and ask about each vendor’s approach to bias mitigation, including whether it conducts regular bias audits.
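For employers that want to make the auditing step above concrete, one widely referenced starting point is the EEOC’s four-fifths (80 percent) rule of thumb: compare each group’s selection rate against the most-selected group and flag anything that falls below 80 percent for closer review. The Python sketch below is a minimal, hypothetical illustration; the applicant data, group labels, and threshold are assumptions for demonstration only, not a compliance standard and not a description of Workday’s system.

```python
# Minimal sketch of a disparate-impact check using the EEOC "four-fifths" rule of thumb.
# The records, group labels, and 0.80 threshold are illustrative assumptions only;
# a real audit should involve counsel and a qualified statistician.

from collections import defaultdict

# Hypothetical screening outcomes: (age_group, advanced_past_ai_screen)
records = [
    ("under_40", True), ("under_40", True), ("under_40", False), ("under_40", True),
    ("40_and_over", False), ("40_and_over", True), ("40_and_over", False), ("40_and_over", False),
]

def selection_rates(rows):
    """Return the share of applicants in each group who advanced past the screen."""
    passed, total = defaultdict(int), defaultdict(int)
    for group, advanced in rows:
        total[group] += 1
        passed[group] += int(advanced)
    return {group: passed[group] / total[group] for group in total}

def four_fifths_flags(rates, threshold=0.80):
    """Flag any group whose selection rate is below 80% of the best-performing group's rate."""
    best = max(rates.values())
    return {group: (rate / best) < threshold for group, rate in rates.items()}

rates = selection_rates(records)
flags = four_fifths_flags(rates)
for group, rate in rates.items():
    status = "REVIEW" if flags[group] else "ok"
    print(f"{group}: selection rate {rate:.0%} -> {status}")
```

A flagged ratio is not proof of discrimination (and a passing one is not a safe harbor), but it is exactly the kind of signal a governance framework should surface for human review and documentation.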
We cannot sit back and hope the system plays fair. Employers need to make sure the rules are clear, AI tools are tested, and no one gets wrongly left on the bench.
Stephen Scott is a partner in the Portland office of Fisher Phillips, a national firm dedicated to representing employers’ interests in all aspects of workplace law. Contact him at [email protected].