Artificial intelligence is dominating conversations across the corporate world, regardless of industry. It is becoming a key element in corporate compliance, and the U.S. Department of Justice has taken notice.
The DOJ announced in September 2024 that, when evaluating a company’s compliance program, it will now review how the business assesses and manages risks associated with its use of technology, including AI. As a result, companies must reexamine their compliance efforts to stay on the right side of the DOJ’s updated guidance. Equally important are fostering a culture of compliance and holding employees accountable when they fail to follow company policies.
DOJ guidance
The DOJ has revised its Evaluation of Corporate Compliance Programs (ECCP), the guidance that prosecutors use to evaluate a business’s compliance program in connection with investigations, prosecutions, monetary penalties, and reporting obligations. Because of that role, businesses often use the ECCP as a benchmark to assess and refine their own compliance efforts.
The ECCP assists federal prosecutors in making “a reasonable, individualized” evaluation of a business’s compliance program, with the DOJ considering factors such as a business’s size, industry, territorial reach, regulatory obligations, and other internal and external circumstances. In each case, the DOJ evaluates a program’s design, application, and effectiveness.
Under the updated guidance, the DOJ will now review the technology, including AI, that a company uses to conduct business and whether the company has assessed and mitigated the risks associated with that technology. The DOJ will also seek to determine whether a business monitors and tests new technologies, including AI, to make sure they are functioning as intended and in line with the company’s policies and procedures. For example, the DOJ will ask:
- Does the business have a system for assessing the potential impact of AI on the company’s efforts to comply with criminal law?
- Does it integrate AI-related risks into its enterprise risk management system?
- How is the business mitigating potential unintended consequences and misuses of emerging technologies, such as false approvals and AI-generated documents?
- How is it testing the technology’s reliability?
The DOJ will also examine whether a business allocates assets, resources, and technology to its compliance and risk management functions on par with other areas of the company. In other words, is there an imbalance between the technology and resources the business uses to identify market opportunities and those it devotes to compliance and risk management?
Stay prepared and comply with DOJ guidance
Businesses should consistently review and update their compliance programs — including their policies, procedures, and training programs — to reflect DOJ-related updates and assess how the updates could impact business operations.
Policies and procedures should be revised to reflect emerging risks associated with using AI, which can take many forms. The DOJ defines AI as “any artificial system” that “performs tasks under varying and unpredictable circumstances without significant human oversight”; “can learn from experience and improve performance when exposed to data sets”; was “developed in computer software, physical hardware, or other context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action”; or was “designed to think or act like a human” or “act rationally . . . using perception, planning, reasoning, learning, communicating, decision making, and acting.”
Businesses should determine whether they have sufficient processes for identifying where AI is used in their operations, and they should establish controls, including human oversight, in order to:
- Monitor the reliability, trustworthiness, and legal compliance of their AI systems;
- Mitigate the risks as needed; and
- Integrate those risks into their enterprise risk management efforts.
Doing so requires maintaining skilled personnel, along with sound data and metrics, to measure AI’s utility and its risks; a minimal version of such a human-oversight control is sketched below.
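To make the idea concrete, here is a minimal sketch, in Python, of what a human-oversight control might look like: outputs that fall below an assumed confidence threshold or trip an assumed policy flag are escalated to a human reviewer, and every decision is logged so the control leaves an auditable record. The names, threshold, and flagged terms are illustrative assumptions, not any particular vendor’s API.

```python
# Minimal sketch of a human-oversight control for AI output.
# All names and thresholds are illustrative assumptions, not a
# reference to any specific system or vendor API.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_compliance")

CONFIDENCE_THRESHOLD = 0.85  # assumed risk tolerance, set by compliance
BLOCKED_TERMS = {"approve payment", "override control"}  # assumed policy flags

@dataclass
class AIResult:
    task_id: str
    output: str
    confidence: float  # assumes the system reports a confidence score

def requires_human_review(result: AIResult) -> bool:
    """Route uncertain or policy-sensitive outputs to a person."""
    if result.confidence < CONFIDENCE_THRESHOLD:
        return True
    return any(term in result.output.lower() for term in BLOCKED_TERMS)

def process(result: AIResult) -> str:
    # Every decision is logged so the control is auditable later.
    if requires_human_review(result):
        log.info("task %s escalated for human review", result.task_id)
        return "escalated"
    log.info("task %s auto-accepted (confidence %.2f)", result.task_id, result.confidence)
    return "accepted"

if __name__ == "__main__":
    print(process(AIResult("inv-001", "Draft summary of vendor contract", 0.97)))
    print(process(AIResult("inv-002", "Approve payment of $12,000", 0.99)))
```

The design point is simply that the system never acts alone on uncertain or sensitive output, and that the log itself becomes evidence of a functioning control.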
Businesses should periodically consider “lessons learned” from their use of AI and review how peer companies in the same industry or geographic region deploy and manage the technology.
Additionally, businesses should:
- Regularly monitor and test their AI systems (a simple testing harness is sketched after this list);
- Determine whether the technology is functioning as intended;
- Make sure the technology does not violate company ethics or policies;
- Take corrective action when needed; and
- Provide regular employee training on specific emerging technologies, the business’s controls for managing its technology, potential concerns that employees must be aware of, and compliance issues faced by peer companies.
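One way to picture the monitoring and testing the list above calls for is a small harness that periodically replays a fixed evaluation set through the AI system and alerts when accuracy drops below an agreed floor, a natural trigger for corrective action. The `classify` stub, the test cases, and the 90% floor below are assumptions for illustration only.

```python
# Illustrative periodic test harness for an AI system: replays a fixed
# evaluation set and flags drift below an agreed accuracy floor.
# The classify() stub and the cases below are assumed for illustration.

ACCURACY_FLOOR = 0.90  # assumed threshold agreed with compliance

EVAL_CASES = [
    # (input document, expected label) -- assumed golden test set
    ("routine vendor invoice", "low_risk"),
    ("payment to sanctioned entity", "high_risk"),
    ("duplicate expense claim", "high_risk"),
]

def classify(text: str) -> str:
    """Stand-in for the production AI model under test."""
    return "high_risk" if ("sanctioned" in text or "duplicate" in text) else "low_risk"

def run_evaluation() -> float:
    """Score the model against the golden set; 1.0 means all cases pass."""
    passed = sum(1 for text, expected in EVAL_CASES if classify(text) == expected)
    return passed / len(EVAL_CASES)

if __name__ == "__main__":
    accuracy = run_evaluation()
    if accuracy < ACCURACY_FLOOR:
        # In practice this would notify the compliance or model-risk team.
        print(f"ALERT: accuracy {accuracy:.0%} below floor; corrective action needed")
    else:
        print(f"OK: accuracy {accuracy:.0%} meets the {ACCURACY_FLOOR:.0%} floor")
```

Running such a check on a schedule, rather than only at deployment, is what turns testing into the ongoing monitoring the DOJ’s questions contemplate.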
Businesses should also implement specific policies and protocols for employees to proactively identify any AI-related operational concerns and immediately report any instances of noncompliance or negative outcomes through proper channels.
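A reporting channel ultimately needs a durable, consistent record. The sketch below shows one hypothetical shape for an internal AI-incident report; the field names and serialization are illustrative assumptions, not a prescribed format.

```python
# Hypothetical structure for an internal AI-incident report; field
# names are illustrative, not a prescribed or standard format.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIIncidentReport:
    system_name: str   # which AI tool or model is involved
    description: str   # what the employee observed
    reporter: str      # who is raising the concern
    severity: str      # e.g., "low", "medium", "high"

def file_report(report: AIIncidentReport) -> str:
    """Timestamp the report and serialize it for the compliance queue."""
    record = asdict(report)
    record["filed_at"] = datetime.now(timezone.utc).isoformat()
    return json.dumps(record, indent=2)

if __name__ == "__main__":
    print(file_report(AIIncidentReport(
        system_name="contract-review-assistant",
        description="Model produced an approval memo citing a nonexistent clause",
        reporter="j.doe",
        severity="high",
    )))
```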
Bottom line: The DOJ is paying attention to how companies use AI and other technologies. Businesses should view this as an opportunity to better protect themselves and stay one step ahead of the government’s guidance.
Timothy Sini is a partner in law firm Nixon Peabody’s Government Investigations & White-Collar Defense practice. He was formerly a federal prosecutor in the Southern District of New York and the elected Suffolk County District Attorney, where he led one of the largest prosecutors’ offices in the nation.
Ryan Maloney is an associate in Nixon Peabody’s Complex Disputes practice, based in the firm’s Long Island office, where he represents clients in a wide range of commercial litigation matters and investigations.