
AI driven decisions in insurance and risks of liability

The use of artificial intelligence (“AI”) algorithms is increasing across insurance markets but poses real and specific risks of liability for companies attempting to streamline decision making.

For example, health insurance companies are under significant scrutiny for their use of algorithms in determining coverage. An algorithm is essentially a set of instructions that tells a computer how to operate on its own, and algorithms are especially useful in industries that must process large volumes of data and render decisions. Though AI is touted for increasing productivity, it is also susceptible to bias and may create more risk than benefit.

For insurers, AI can be helpful in evaluating risk, e.g., predicting the likelihood of future claims, as well as in setting policy pricing and detecting fraud in insurance applications. However, major insurance companies are already facing lawsuits arising from the use of AI algorithmic tools in underwriting and claims processing.

A class action lawsuit against Cigna Health and Life Insurance (“Cigna”) is ongoing in a federal court in California. In Kisting-Leung v. Cigna Corp., the plaintiffs allege Cigna utilizes an algorithm-based tool called PxDx that reviews health insurance claims and compares procedure codes with Cigna’s list of approved diagnosis codes for that procedure. The plaintiffs contend that the use of this algorithm resulted in the denial of claims for medically necessary procedures without actual review by a medical director or physician, as required by California law.
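
To illustrate the kind of rule-based code matching the complaint describes, the sketch below is a hypothetical, simplified example in Python. It is not based on Cigna's actual PxDx system, whose internal logic is not public; the procedure and diagnosis codes and the APPROVED_DIAGNOSES table are illustrative assumptions.

# Hypothetical illustration only: simplified procedure-to-diagnosis matching
# of the general kind alleged in Kisting-Leung. Not Cigna's actual system.

# Illustrative mapping of procedure codes to diagnosis codes an insurer
# has pre-approved for that procedure.
APPROVED_DIAGNOSES = {
    "71045": {"R07.9", "J18.9"},    # chest X-ray: chest pain, pneumonia (illustrative)
    "80061": {"E78.5", "Z13.220"},  # lipid panel: hyperlipidemia, screening (illustrative)
}

def screen_claim(procedure_code: str, diagnosis_code: str) -> str:
    """Return an automated screening result for a single claim line.

    An automated rule can approve matching claims quickly, but the lawsuit's
    allegation is that non-matching claims were denied without the physician
    review that state law requires. A more defensible design routes
    non-matches to a human reviewer rather than denying them automatically.
    """
    approved = APPROVED_DIAGNOSES.get(procedure_code, set())
    if diagnosis_code in approved:
        return "approve"
    return "refer_to_physician_review"  # not an automatic denial

# Example usage
print(screen_claim("71045", "R07.9"))   # approve
print(screen_claim("71045", "M54.5"))   # refer_to_physician_review

The design point the example tries to capture is the one at issue in the litigation: whether a non-matching claim is denied automatically or is instead referred to a medical director or physician for individualized review.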

Many states have adopted legislation and policies to limit and govern insurers' use of AI. As AI algorithms become more common, and as the risks of discrimination and unjust results become evident through litigation, more states are likely to adopt comprehensive legislation governing the use of such tools by insurance companies to ensure fair claims handling. Insurance companies are not alone in using AI to streamline decisions; financial lenders and employers making hiring decisions should also closely monitor their use of algorithmic tools to avoid biased or improper outcomes that could create significant litigation risk.

Taylor Wewers Bagby is an associate at Hall Estill’s Tulsa office.