
Take care when using AI in the workplace

When used effectively, artificial intelligence, or “AI,” can eliminate much of the bias in decision-making. When misused, however, AI can reinforce the very bias it is meant to thwart.

Generally, employers use AI in the workplace through supervised machine learning (SML) during the hiring process. SML uses algorithms to analyze applicant data and recommend candidates who are likely to succeed. The algorithms must first be trained on data before they can make accurate recommendations.

A typical example of training data is the set of words common to résumés. An employer gathers the résumés of its successful employees (“benchmark résumés”), and the SML tool catalogs all of their words and key phrases. The software then compares each applicant résumé to these benchmark résumés: the more words an applicant résumé shares with the benchmark résumés, the more likely the tool is to rate it as “good.”
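A minimal sketch, in Python, of how this kind of keyword matching might work (the function names and the simple overlap score here are illustrative assumptions; commercial tools use far more elaborate models):

    from collections import Counter

    def build_benchmark_vocabulary(benchmark_resumes):
        # Catalog every word that appears in the benchmark résumés.
        vocab = Counter()
        for text in benchmark_resumes:
            vocab.update(text.lower().split())
        return vocab

    def score_applicant(applicant_resume, vocab):
        # Score = fraction of the applicant's words that appear in the
        # benchmark vocabulary; higher scores look "good" to the tool.
        words = applicant_resume.lower().split()
        if not words:
            return 0.0
        return sum(1 for word in words if word in vocab) / len(words)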

The pitfalls of SML

Employers must administer SML carefully; it is all too easy for an SML tool to exclude members of a protected class. This exclusion is a consequence of skewed or limited training data.

For example, an employer’s benchmark résumés could come only from men and therefore lack words commonly found in women’s résumés, such as involvement in a women’s soccer league or attendance at an all-girls high school. A woman’s résumé containing those terms will not register as valuable or beneficial to the job, because words like “women” or “girls” never appear in the benchmark résumés. Here, the SML tool hinders women from getting the job; it introduces bias into the hiring process rather than eliminating it.
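Continuing the hypothetical sketch above, a benchmark set drawn only from men’s résumés silently penalizes terms that never appear in it:

    # Hypothetical benchmark résumés gathered only from male employees.
    benchmarks = ["captain men's soccer league", "boys high school debate team"]
    vocab = build_benchmark_vocabulary(benchmarks)

    print(score_applicant("captain men's soccer league", vocab))    # 1.0
    print(score_applicant("captain women's soccer league", vocab))  # 0.75

The tool never sees the applicant’s gender directly; the disparity comes entirely from the skewed training data.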

What the law says about SML

No federal law, and only a few state laws, govern employer use of SML. Both the Equal Employment Opportunity Commission (EEOC) and the Department of Justice (DOJ) have issued guidance reminding employers of their duties under the Americans with Disabilities Act (ADA). The guidance advises employers to do several things:

  • have a process to provide reasonable accommodations when using “algorithmic decision-making tools,”
  • avoid screening out individuals with disabilities who can do the job with reasonable accommodation,
  • take care not to make prohibited disability-related inquiries through algorithmic tools,
  • consider the impact on individuals with disabilities when designing or choosing technological tools, and
  • remember the obligation to provide reasonable accommodations under the ADA.

Are you violating the ADA?

The EEOC and DOJ describe several instances in which an employer may be in violation. One example is failing to provide a “reasonable accommodation” necessary for the algorithm to rate the applicant fairly. An employer that relies on an algorithmic tool that screens out an individual with a disability is also likely in violation; it does not matter whether the screening out was intentional or unintentional, only that it occurred. Lastly, an employer may be in violation if it adopts an algorithmic decision-making tool that runs afoul of the ADA’s restrictions on disability-related inquiries and medical examinations.

Complying with the EEOC & DOJ

There are two straightforward steps that help demonstrate compliance with the EEOC and DOJ guidance and avoid an ADA violation. The first is to conduct a bias audit: an impartial evaluation that tests whether the artificial intelligence tool has a disparate impact on protected classes. Conducting a bias audit provides tangible evidence of an active attempt to comply with the guidance.
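One common quantitative test in such an audit is the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures: a group whose selection rate is less than 80% of the highest group’s rate is generally treated as showing evidence of disparate impact. A minimal sketch of that calculation (the groups and counts below are hypothetical):

    def four_fifths_check(counts):
        # counts maps each group to (recommended, total applicants).
        rates = {g: sel / total for g, (sel, total) in counts.items()}
        top = max(rates.values())
        # Flag any group whose selection rate is below 80% of the top rate.
        return {g: rate / top < 0.8 for g, rate in rates.items()}

    audit = {"men": (48, 100), "women": (30, 100)}
    print(four_fifths_check(audit))  # {'men': False, 'women': True}

A flagged ratio is evidence to investigate, not proof of a violation, and a real audit would examine many more groups and outcomes.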

Another step is to consider whether the algorithmic tool was designed with individuals who have disabilities in mind. Employers can set up a systematic process to answer this question, asking, for example: Was special attention paid to the user interface so that it is accessible to as many individuals with disabilities as possible? Are materials provided to applicants in several alternative formats? Has anyone determined whether the algorithm disadvantages individuals with disabilities?

Conducting a bias audit and considering the algorithm’s impact on people with disabilities do not guarantee compliance with EEOC and DOJ guidance. However, they are tangible, objective steps an employer can take to protect both applicants and the company.

Preparing for future legislation

Colorado, California, and Mississippi have laws that regulate AI in the workplace and schools. Alabama established the Council on Advanced Technology and Artificial Intelligence. New York City has created a robust law that regulates automated employment decision tools for candidates and employees who reside in the city.

The national trend suggests that other states will see similar legislation in the future. So, what can employers do to prepare? First, seek advice from employment experts when designing or purchasing an SML tool. Next, ensure that any tool complies with the ADA by asking the following:

  • Is the interface as inclusive as possible?
  • Is my company providing necessary materials to applicants in a variety of alternative formats?
  • Is my company providing reasonable accommodation to applicants?
  • Has there been a recent bias audit conducted on the tool?
  • Does the proposed SML screen out members of a protected class?

Finally, submit any proposed SML to trusted legal counsel before implementing the tool.

Blayne Soleymani-Pearson is an attorney with Barran Liebman.