Artificial intelligence is developing at a rapid pace, and the faster it moves, the harder the courts will have to work to catch up, according to area lawyers who offered their insights on the issue.
Matthew D. Kohel, a partner at Saul Ewing, and Michele Gilman, a professor of law at the University of Baltimore School of Law, tackle issues regarding liability in AI applications across health care and employment.
Legal framework
According to Gilman, the U.S. lags behind its counterparts in AI regulation.
“We don’t have comprehensive AI regulation in the United States, so we must look to traditional doctrines such as torts, contracts and the like to protect consumers from harm,” she said. By contrast, the European Union has already enacted extensive AI-related legislation.
Cases pending
Kohel, who advises companies on AI-related matters, warns that liability is moving up the chain, so deployers of AI must proceed with caution. Kohel cites a federal class action lawsuit against Workday, a cloud-based HR platform. “The suit alleges discrimination by the systems’ applicant screening algorithms,” said Kohel.
The plaintiff, a Black man over 40 with anxiety and depression, applied to more than 100 jobs through Workday-powered applicant screening systems. He was rejected every time, even though he met the qualifications for the roles. Some rejections arrived overnight, leading him to conclude that his resumes weren’t being screened manually and that an algorithm was systematically screening him out.
Kohel also points to a case in which a candidate was rejected until he resubmitted the application listing a younger age, which led to a claim of algorithmic discrimination.
Housing and AI
Gilman cites another case, this time related to AI and housing.
“In a fair housing discrimination case against the developer of a tenant-screening algorithm, the trial court held that the developer was not liable because it was clear in the contracts and marketing materials that the downstream users, such as landlords, were responsible for the tool’s outcomes. The case is being appealed, but it goes to show how analog-era laws are sometimes ill-fit for digital-era issues,” she said.
Gilman stressed the importance of crafting AI laws that prioritize impacted populations.
“Private companies should not be the arbiters of what these laws look like. We also need to guard against lobbyist-driven legislation,” she said.
Employer protections
So how are employers supposed to protect themselves? Gilman said that the Equal Employment Opportunity Commission issued guidance for employers on the use of AI, but that the Trump administration has since rescinded it.
“Existing federal and state anti-discrimination laws still apply, however,” she said.
Health care and AI
Gilman said that the courts will be sorting out liability in health care and AI in years to come.
“The issues are complex, particularly because of the ‘black box’ nature of AI tools — meaning that they are so complex that sometimes their own developers can’t always explain how certain outcomes are generated,” she said.
While Gilman cautions against overregulation that could hinder innovation, she insists that human oversight must remain central.
“Health care professionals should be responsible for diagnosis and treatment. AI should be a supplemental tool and not replace human judgement. If/when AI is used, the patient should be told,” she said.
A path forward
Both experts agree on the need for specific, robust laws governing AI.
“Developers, deployers and users of AI all have liability for AI-generated harms and laws will likely need to recognize new forms of harm generated by AI. At the end of the day, AI does not have human intelligence. It is a tool used by people and companies to carry out human objectives and the people and entities that adopt, develop and deploy it must be responsible for its outcomes,” Gilman said.