It is no secret that artificial intelligence (AI) is transforming the workplace. From streamlining hiring processes to increasing productivity, AI has the potential to make our work lives easier and more efficient. But as with any new technology, AI comes with its fair share of risks.
Federal agencies are taking notice. The Equal Employment Opportunity Commission’s chair, Charlotte Burrows, recently called AI advancements a “new civil rights frontier.” And the Federal Trade Commission’s chair, Lina Khan, made it clear that there is no “AI exemption” to existing civil rights laws.
So, what does this mean for employers? Here is a closer look at how AI can lead to discrimination in the workplace and what employers can do to prevent it.
Recognize the uptick in workplace AI use
Employers are increasingly using AI for HR recruiting, sales and marketing, operations and logistics, customer service, and finance and accounting. In fact, a recent Fisher & Phillips Flash Survey showed that employers most commonly use AI for HR recruiting (48 percent), followed by sales and marketing (46 percent). Other popular uses include operations and logistics (32 percent), customer service (24 percent), and finance and accounting (24 percent).
Usage will only continue to climb. But before your company goes “all in” on AI, be mindful that risk exists. For example, lawsuits around the country have alleged that AI tools discriminated by weeding out women or individuals from certain ZIP codes. Regarding the ZIP code issue, a closer look at the data suggested that the individuals being excluded from the hiring process fell into a protected status. So, while the “inputs” the employer gave the AI were nondiscriminatory, the results were not. Since federal enforcement agencies are keeping a close eye on the trend – and have stated there is no AI exception to discrimination laws – it is important to be mindful of the risks and to consider conducting an audit.
Consider conducting an audit
As noted above, while AI tools can streamline processes, they can also unintentionally lead to discriminatory practices. An AI algorithm, like a hiring manager, may not intentionally screen out candidates based on a protected category, but it may still screen out a disproportionate number of qualified candidates who share one. This can happen, for example, if the screening is modeled on the qualities of the employer’s top-performing employees and those workers come primarily from a specific demographic group – such as a tool that favors graduates of a certain college.
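To make the disparate-impact concern above concrete, here is a minimal sketch of the kind of check an audit might run: the EEOC’s “four-fifths” rule of thumb, under which a group’s selection rate below 80 percent of the highest group’s rate can signal potential adverse impact. This is an illustration only, not legal advice or a compliant audit methodology; the group names and numbers are hypothetical.

```python
# Sketch of the EEOC "four-fifths" (80%) rule of thumb for spotting
# potential disparate impact in selection rates. Hypothetical data only.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who passed the screen."""
    return selected / applicants

def impact_ratio(group_rate: float, highest_rate: float) -> float:
    """A group's selection rate divided by the highest group's rate."""
    return group_rate / highest_rate

# Hypothetical results from an AI resume-screening tool
results = {
    "group_a": {"applicants": 200, "selected": 100},  # 50% selected
    "group_b": {"applicants": 150, "selected": 45},   # 30% selected
}

rates = {g: selection_rate(r["selected"], r["applicants"])
         for g, r in results.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = impact_ratio(rate, highest)
    flag = "REVIEW for potential disparate impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```

In this hypothetical, group_b’s impact ratio is 0.60, well below the 0.80 threshold, even though no protected characteristic was ever an “input” to the tool; that is the pattern the lawsuits described above allege.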
Although many technology vendors may claim that their tool is “bias-free,” take a close look at which biases the technology actually claims to eliminate. For example, it may be designed to eliminate race, sex, national origin, color, or religious bias, but not disability bias. You should also review the vendor’s contract carefully (specifically the indemnification provisions) to determine whether your company will be liable for any disparate impact claims.
Review the ‘Blueprint for an AI Bill of Rights’
The White House released a “Blueprint for an AI Bill of Rights” that includes five key principles to protect Americans in the age of artificial intelligence. The blueprint is designed to support policies and practices that protect individuals’ rights in the development and use of automated systems. While the blueprint itself is not binding law, it is a strong sign that the White House is taking AI seriously – and an indication that significant legislation surrounding artificial intelligence will likely be proposed at both the federal and state levels.
Monitor for new state and local laws
On July 5, a New York City law (Local Law 144) will take effect that requires employers to obtain a “bias audit” for all automated employment decision tools. Other states and localities may follow suit. As with most employment-related issues, it is better to be proactive than reactive. To that end, now is the time to review policies and practices and to consider performing an AI audit to flag and address potential biases in company systems.
Stephen Scott is a partner in the Portland office of Fisher Phillips, a national firm dedicated to representing employers’ interests in all aspects of workplace law.