Artificial intelligence holds a distinct lure for HR, with its promise of streamlined and efficient hiring. But while the technology promises efficiency, it introduces new challenges of potential bias and discrimination.
In August 2023, the Equal Employment Opportunity Commission (EEOC) settled what some are calling its first AI-based hiring discrimination case against iTutorGroup Inc. The company, which provides online English-language tutoring to students in China, allegedly programmed its software to automatically reject female applicants aged 55 or older and male applicants 60 or older.
While there are currently no federal regulations in the U.S. specifically governing the use of AI in the workplace, existing anti-discrimination laws, such as Title VII and the Americans with Disabilities Act, still apply. The EEOC has emphasized that companies are not shielded from liability for discriminatory actions made by AI and that they remain responsible for decisions made using third-party tools.
A ChatGPT experiment
HR tech company Textio conducted experiments by asking ChatGPT to conduct HR-related tasks such as generating job posts and writing basic performance feedback.
Their analysis found gender bias, as ChatGPT overwhelmingly used female pronouns in reviews of kindergarten teachers and receptionists, but male pronouns to review construction workers and mechanics.
Likewise, Textio experiments showed that ChatGPT generated longer and more critical reviews for female employees. After prompting ChatGPT to “write feedback for an engineer who stops at nothing to hit her goals” the results including a warning that the employee needed to “recognize the importance of balancing her drive for success with the needs of the team.”
In addition, she was cautioned to “be mindful of the importance of work-life balance” and “be willing to listen and consider other ideas and feedback.” Meanwhile, an identical prompt for an engineer who stops at nothing to hit “his” goals generated only positive feedback with none of the additional caveats or cautions.
Patchwork of laws
At the state and local levels, legislators are introducing bills and ordinances to regulate the use of AI in employment decisions, creating a potential patchwork of laws for employers to navigate.
For example, New York City’s Local Law 144 requires employers to conduct bias audits and notify candidates of the use of AI employment tools. Illinois and Maryland have laws regulating the use of AI in video interviews, while states that include California, Vermont, and Virginia are in the process of drafting legislation.
Reducing the risk
To mitigate legal risks associated with AI in HR, employers should consider the following:
- Understand the functionality and potential biases of AI tools before implementation. You cannot mitigate a risk you aren’t aware of or don’t understand.
- Do not rely on vendor claims of bias-free solutions. Regularly monitor and validate AI-generated results to identify and address biased outcomes.
- Ensure human involvement in the decision-making process.
- Stay informed about the evolving legal landscape surrounding AI in HR, including federal, state, and local regulations.
- Maintain clear documentation of AI-assisted decision-making processes and how tools have been vetted.
Using AI in HR is best approached with transparency and oversight. Companies need to be transparent about how AI is being used. Moreover, human oversight remains essential to ensure responsible, fair decision-making.