
AI’s big advances yield challenges for clients navigating benefits and risks

Artificial intelligence is advancing at lightning speed, producing a rapidly changing legal landscape at the federal, state, and now international levels.

The AI-X team at Faegre Drinker advises clients in various industries on regulations and risk management stemming from AI technologies so that they can leverage the value of AI and algorithms in a safe, legal, and ethical manner.

Launched in April 2022, the interdisciplinary team advises clients on emerging laws, regulations, and industry standards relating to AI technologies. It helps clients develop responsible AI governance models and risk management frameworks, with a focus on the expanding interest in and use of generative AI, and counsels them on regulatory compliance, best practices, risk management, and risk allocation related to these technologies. The team also provides advocacy when litigation involving these technologies arises.

Recently, Scott Kosnoff, an insurance partner and co-leader of the AI-X team at Faegre Drinker, and Andy Taylor, an associate attorney at the firm, shared their insights on some of the most recent developments in AI. One of the biggest changes has happened on the international stage.

On March 21, the United Nations General Assembly unanimously adopted the resolution “Seizing the Opportunities of Safe, Secure, and Trustworthy Artificial Intelligence Systems for Sustainable Development.” Led by the United States, the resolution “lays out a path for international cooperation on AI, including to promote equitable access, take steps to manage the risks of AI, protect privacy, guard against misuse, prevent exacerbated bias and discrimination,” according to the White House.

“Governments around the world recognize the benefits and risks of AI, even if the specifics are not fully known,” said Taylor. “The U.N. resolution was sufficiently high-level and noncontroversial to achieve wide consensus. Unlike the internet, for which member states took a generally hands-off approach when it was in its early stages of development and adoption in the 1990s, passage of this resolution signifies that governments intend to be proactive in regulating AI. It remains to be seen whether the General Assembly can find consensus on more granular AI resolutions.”

There has also been significant movement at the federal level. In October 2023, President Joe Biden issued an executive order that established “new standards for AI safety and security.” Six months on, Kosnoff says that follow-ups to that order have only begun to emerge.

“We expect a steady stream of AI pronouncements from the federal government as a follow-up to the president’s AI executive order,” said Kosnoff. “For example, on March 28, the Office of Management and Budget issued its final memorandum on the use of AI by federal agencies and departments. The memorandum establishes an exacting set of governance and risk management requirements that are intended to protect the rights and safety of the public. Requirements include impact assessments, identification/mitigation of algorithmic discrimination, ongoing monitoring, human oversight and more.”

“Meanwhile, on March 26, the National Telecommunications and Information Administration issued a report on AI accountability that, among other things, recommends that federal government suppliers, contractors, and grantees be required to ‘adopt sound AI governance and assurance practices for AI used in connection with the contract or grant, including using AI standards and risk management practices recognized by federal agencies, as applicable,’” added Kosnoff.

Recently, U.S. Deputy Attorney General Lisa Monaco unveiled “Justice AI,” a new initiative from the Department of Justice. As part of this initiative, individuals “across civil society, academia, science, and industry” will work together to assess the impact of AI on the department’s mission. Department prosecutors are now also able to seek enhanced punishment for offenses made significantly more dangerous through the misuse of AI technology, in an effort to “deepen accountability and exert deterrence.”

While Kosnoff is keeping an eye on federal and international developments, as those will affect the clients he serves, he is also paying close attention to what is happening at the state level. In December 2023, the National Association of Insurance Commissioners (NAIC) unanimously adopted a model AI governance bulletin that encourages testing to identify potential “unfair discrimination in the decisions and outcomes resulting from the use of Predictive Models and AI Systems” and requires insurers to adopt AI governance and risk management frameworks.

So far, eight states have adopted the model bulletin, and at least six more have signaled that they are likely to follow. “The bulletin provides states a flexible, non-prescriptive way to address insurers’ use of AI without having to enact legislation or adopt new regulations,” Kosnoff explained.

“Some insurance regulators are concerned about data and models provided by third-party vendors, but they don’t have jurisdiction over the third parties. The NAIC’s AI model bulletin gets at the issue indirectly by describing how insurers should deal with third-party vendors. The NAIC’s Third-Party Data and Models Task Force is considering whether there may be a more optimal approach,” Kosnoff added.

This is far from the final word on the matter, as conversations about AI continue on multiple levels of government. “Governments at all levels are struggling with how to encourage innovation while protecting individuals who could be harmed by the new technology,” Kosnoff said. “The developments reported in our AI briefing contribute to the evolving regulation of AI and those who use it.”