
Expect a patchwork of state and local AI hiring laws

A New York City law going into effect this month will regulate how companies use artificial intelligence (AI) software in the hiring process.

The city passed the law in 2021 but didn’t adopt specific rules until April of this year. Per the regulations, companies must notify candidates when an AI system is being used and must hire independent auditors to check the technology for bias.

Labor analysts predict more cities and states will follow suit, passing a patchwork of laws before the federal government steps in. Illinois and Maryland already have AI-based hiring laws in place, and several other jurisdictions, including California, New Jersey, New York, Vermont, and the District of Columbia, are working on their own legislation.

In 2022, both the Equal Employment Opportunity Commission (EEOC) and the Department of Justice (DOJ) issued guidance outlining how AI and other automated hiring tools could violate the Americans with Disabilities Act (ADA).

And in April of this year, four federal agencies released a joint statement pledging “enforcement efforts to protect the public from bias in automated systems and artificial intelligence.” The EEOC, the DOJ, the Consumer Financial Protection Bureau (CFPB), and the Federal Trade Commission (FTC) have all expressed concern that the use of AI in employment decision-making could lead to discriminatory outcomes.

Employer best practices

AI can be a powerful tool for making better hiring decisions. It can help employers identify top talent more quickly and efficiently, and it can support more consistent, objective decisions.

However, AI is not neutral by default. Models can absorb bias from the data they are trained on, and the technology can be used, intentionally or not, to discriminate against certain groups of people.

Employers who use AI in employment decisions need to be aware of the potential for bias and discrimination and take the following steps to mitigate the risks:

Audit for AI in the employment process. Identify any AI technology being used, such as tools used to screen resumes, conduct interviews, assess skills, and make hiring decisions. Consider, too, whether social media algorithms might be unfairly limiting who sees your job postings.

Use a diverse dataset to train the AI model. If the training data underrepresents certain groups, the model is more likely to reproduce that skew; a simple representation check is sketched in the first example after this list.

Have human oversight of AI-based decisions. Human oversight can help to identify and correct any biases that may be present in the AI model’s output.

Monitor AI-based decisions for signs of bias. Employers should track the tool’s outcomes to ensure they are not disproportionately screening out certain groups, for example by comparing selection rates across demographic groups; see the second example after this list.

Be transparent about the use of AI in employment decisions. Candidates should be informed about how AI is being used, and they should have the opportunity to challenge any decisions that they believe are unfair.

Be prepared for accommodation requests. Individuals with disabilities may request accommodation under the ADA. For example, an AI process that relies on text analysis may be biased against individuals with dyslexia or other learning disabilities, while an AI process that relies on video or audio analysis could disadvantage someone with a speech impediment.
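To make two of these steps concrete, the sketches below show the kinds of checks involved. First, a minimal Python sketch of the training-data representation check mentioned above. The `demographic_group` field and the records are hypothetical stand-ins for however an applicant-tracking system or vendor exposes historical hiring data (where lawfully collected); a skewed distribution is a warning sign, not proof the model will discriminate.

```python
"""Sketch: checking training data for group representation.

The field name and records below are hypothetical; real data would
come from an applicant-tracking system or vendor export.
"""

from collections import Counter

def representation_report(records, group_field="demographic_group"):
    """Print each group's share of the training data."""
    counts = Counter(r[group_field] for r in records)
    total = sum(counts.values())
    for group, n in counts.most_common():
        print(f"{group}: {n} records ({n / total:.1%})")

# Hypothetical, heavily skewed dataset: group B is badly underrepresented.
records = [{"demographic_group": "A"}] * 800 + [{"demographic_group": "B"}] * 200
representation_report(records)
```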
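Second, a minimal sketch of the selection-rate comparison mentioned in the monitoring step, assuming you can export each candidate’s demographic group and the tool’s pass/fail decision. The data and group labels are hypothetical; the 0.8 threshold reflects the EEOC’s long-standing four-fifths rule of thumb, a conventional red flag rather than a legal bright line.

```python
"""Sketch: monitoring a screening tool's outcomes for adverse impact.

Computes each group's selection rate relative to the highest group's
rate; ratios below 0.8 (the EEOC's four-fifths rule of thumb) are a
conventional signal to investigate further.
"""

from collections import defaultdict

def impact_ratios(decisions):
    """decisions: iterable of (group, selected) pairs, selected is bool."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)

    rates = {g: selected[g] / totals[g] for g in totals}
    top_rate = max(rates.values())
    return {g: rate / top_rate for g, rate in rates.items()}

# Hypothetical screening outcomes: (demographic group, passed screen?)
sample = (
    [("A", True)] * 48 + [("A", False)] * 52
    + [("B", True)] * 30 + [("B", False)] * 70
)

for group, ratio in impact_ratios(sample).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: impact ratio {ratio:.2f} [{flag}]")
```

A ratio well below 0.8 for any group is a signal to investigate the tool and its inputs, not proof of a violation.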

AI is a rapidly developing area of law, and employment decisions appear to be an early legal frontier for regulating the technology. Employers should be aware of all the ways AI may be used in the workplace (e.g., hiring, employee monitoring, performance assessments) and should stay abreast of emerging local, state, and federal laws to ensure compliance.