
Beyond automation: the quest for ethical AI in human resources

In the legal sphere, being first among companies often means bearing the brunt of significant lawsuits in uncharted territory.

Fortunately, the vanguard of the AI revolution won’t be uncharted territory for your company. On August 9, a pivotal development emerged: the U.S. Equal Employment Opportunity Commission announced that iTutorGroup had agreed to pay $365,000 to settle a case arising from allegations about its AI-driven hiring tool.

The tool allegedly rejected female candidates older than 55 and male candidates older than 60 automatically. Consider this article a comprehensive guide: it covers not only the details of the EEOC case but also a curated list of 10 strategic measures organizations can take to avoid a fate like iTutorGroup’s.

What happened

An applicant to iTutorGroup allegedly spotted a peculiar pattern after resubmitting a rejected resume with a younger birthdate and receiving an interview offer. The applicant brought the matter to the attention of the EEOC, which promptly filed a lawsuit on behalf of more than 200 applicants, citing age and gender bias. The lawsuit contended that the company’s automated screening disproportionately excluded women older than 55 and men older than 60.

This settlement holds significance for two reasons. First, it is the first of its kind and clear proof that the EEOC is embarking on a broader mission to ensure AI workplace tools align with antidiscrimination laws. Second, an estimated 79 to 85 percent of employers now incorporate AI into their recruiting and hiring processes. With those numbers poised to rise, compliance is necessary to minimize controllable risks.

10 ways to avoid being the next case

Test the tools: Before deploying AI tools in HR, test them on diverse data to surface unintended discrimination. (A minimal sketch of one such check appears after this list.)

Review regularly: Continuously assess AI tools for compliance and fairness. The EEOC has made clear that employers cannot pass the buck to their software vendor if an AI product ends up discriminating against applicants or employees.

Perform bias audits: Consider AI bias audits to uncover unintentional discrimination, even if not legally required.

Train personnel: Train HR teams to use AI fairly and to support the responsible application of AI across HR functions.

Ensure policies are clear: Develop a comprehensive AI policy to guide responsible AI use in HR practices.

Allow open communication: Create an environment where applicants and employees can voice AI-related concerns.

Maintain human involvement: Keep human judgment in workplace decisions alongside AI support.

Establish a feedback loop: Encourage feedback from all stakeholders to identify and address biases or issues.

Seek expert guidance: Partner with legal experts knowledgeable in AI, data privacy, and workplace law.

Stay up to date: Keep pace with AI and HR developments through regularly updated legal and technical guidance.
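
As a concrete illustration of what testing the tools or a bias audit might involve, here is a minimal Python sketch of an adverse-impact check modeled loosely on the EEOC’s long-standing “four-fifths rule,” under which a selection rate for any protected group below 80 percent of the highest group’s rate is generally treated as evidence of adverse impact. The applicant records and group labels below are hypothetical placeholders; a real audit would run on the tool’s actual screening logs, with far larger samples and with groupings defined together with counsel.

from collections import defaultdict

# Hypothetical screening log: (group label, passed the AI screen?).
# In practice these rows would come from the hiring tool's decision logs.
applicants = [
    ("women_over_55", True),  ("women_over_55", False),
    ("women_over_55", False), ("women_over_55", False),
    ("men_over_60", True),    ("men_over_60", False),
    ("men_over_60", False),   ("men_over_60", False),
    ("all_others", True),     ("all_others", True),
    ("all_others", True),     ("all_others", False),
]

selected = defaultdict(int)
total = defaultdict(int)
for group, passed in applicants:
    total[group] += 1
    selected[group] += passed  # True counts as 1

# Selection rate per group, compared against the highest group's rate.
rates = {g: selected[g] / total[g] for g in total}
best = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths threshold
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} [{flag}]")

A flagged ratio is an early-warning signal to investigate, not proof of discrimination, and a passing result is not a legal safe harbor; treat the output as a prompt to involve counsel and the vendor.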

Conclusion

In the dynamic world of AI integration in HR, safeguarding against unintended bias and discrimination is a paramount concern. The cautionary tale of iTutorGroup underscores the pitfalls that can emerge with new technology. Adopting proactive measures such as thorough testing, ongoing evaluation, and human oversight can help ensure a company’s AI-driven HR practices remain beyond reproach.

Stephen Scott is a partner in the Portland office of Fisher Phillips, a national firm dedicated to representing employers’ interests in all aspects of workplace law.