AI contracting pitfalls in-house counsel can’t afford to miss

Imagine rolling out a generative AI chatbot for your customer service platform to improve response times, boost efficiency, and lower costs. The rollout looks promising at first, but it all grinds to a halt when the chatbot recommends illegal return policies, offers inaccurate safety advice on regulated products, or misrepresents the company's obligations.

As corporate counsel for the company, you feel sweat bead on your forehead as you try to figure out whether there is any way to claim indemnity when the lawsuit lands squarely on the company's doorstep. Unfortunately, the AI vendor's contract disclaimed any liability for the chatbot's outputs.

This is not a crossover episode of "Suits" and "Black Mirror"; it is a very real scenario on the horizon, if it has not happened already.

The 2024 decision in Moffatt v. Air Canada, 2024 BCCRT 149, shows that it has. In Moffatt, Air Canada rolled out an AI chatbot that incorrectly told a customer that bereavement fare refunds could be requested up to a certain number of days after the ticket purchase. The customer relied on that guidance, booked at full price, and was later denied the refund.

The British Columbia Civil Resolution Tribunal found Air Canada liable for negligent misrepresentation, explaining that the chatbot's misinformation formed part of the company's website content and that the chatbot could not be treated as a separate legal entity.

As AI tools proliferate across industries, from automated marketing and sales to consulting, chatbots, and even legal research, companies are entering into agreements with AI vendors at an unprecedented rate. The problem is that many of these contracts are based on templates designed for traditional software systems, not for probabilistic models capable of making autonomous decisions, and autonomous mistakes.

With this rapidly evolving landscape, the question is not just what AI can do, but who is responsible when it makes mistakes.

Traditional indemnity provisions, warranty language, and service level agreements (SLAs) often fail to account for the unique risks posed by AI: hallucinations, discriminatory impacts from bias, data leakage, and regulatory scrutiny.

With litigation over AI harms accelerating, corporate counsel must rethink how risk is allocated in vendor contracts or risk being left without a working parachute.

How do traditional vendor contracts fall short with AI?

Most technology contracts were designed for deterministic software — systems that perform specific tasks using fixed logic and within predictable parameters. Examples include payroll platforms, CRM systems, or even automatic document assembly tools. Such agreements can often rely on time-tested language covering bugs, data security and IP infringement.

But where does AI fit into that framework, when it is not nearly as clean-cut as the rest?

AI systems, particularly those powered by large language models or other machine-learning algorithms, are probabilistic and adaptive; they do not follow a rigid script. Instead, they generate outputs based on complex statistical patterns learned from massive datasets, many of which remain opaque to end users. The result is a system that may produce accurate, even impressive, responses most of the time. But beware: it may also hallucinate, introduce bias, or evolve in ways that were never contractually contemplated.

Quintessential contract problem

Consider a typical SaaS agreement for an AI-powered productivity tool. A SaaS agreement is a contract between a software provider (the vendor) and a customer (an organization or individual) that sets the terms and conditions governing use of software delivered via the internet. These contracts typically detail the subscription term, pricing, intellectual property rights, and other pertinent aspects of the service.

However, it is not uncommon for vendors to disclaim all responsibility for output accuracy by stating that results are "for informational purposes only." It is equally common to see no warranties tied to performance benchmarks or content reliability; indemnity provisions that cover only third-party IP infringement (as opposed to regulatory fines, discriminatory results, or business disruptions from flawed outputs); and SLAs that address uptime without remediation timelines if the model starts generating harmful content.

In the context of traditional, non-AI software, these limitations might be acceptable. For AI software, however, they can leave companies dangerously exposed. The risk is not just a technical malfunction; it includes reputational harm, regulatory scrutiny, and, eventually, downstream litigation over outputs the company did not author but is on the hook for.

Now, vendor disclaimers are getting even broader, going so far as to disclaim all liability for damages arising from reliance on output, including errors, omissions or regulatory consequences.

These broad disclaimers are problematic because they shift the burden entirely to the business user, who more often than not lacks meaningful insight into the model's training data, logic or ongoing updates. Yet when something goes wrong, it is the company deploying the software, not the vendor, that will be sued.

It is imperative that in-house counsel recognize that AI vendor contracts are not mere IT purchases but are risk-transfer instruments, and right now, most of that risk is going in the wrong direction.

Building AI-specific protections into vendor agreements

Now that the risks and gaps are apparent, the next step is mitigation. Corporate counsel should treat AI vendor contracts not as boilerplate technology agreements but as living, breathing risk-allocation instruments that reflect the unique dangers posed by generative and adaptive systems.

Fortunately, there are emerging best practices that in-house lawyers can push for, even in asymmetric negotiations. Companies should focus their attention on five key areas when negotiating with AI vendors:

Output liability and indemnification: Ask for indemnity for third-party claims arising from AI-generated outputs, not only protection against IP infringement in the software code. Specifically negotiate coverage for misstatements, hallucinations or biased results, especially if your company operates in a regulated context such as employment, finance or health care. Also seek representations that the training data was lawfully sourced and used. If a vendor is unwilling to offer indemnity, ask for a certificate of errors and omissions (E&O) insurance or negotiate a risk-transfer provision that ties liability to a specified cap (e.g., one to two times the contract value).

Performance and safety warranties: Ask for affirmative warranties that the AI system is not designed to produce misleading or unlawful outputs under normal use and inputs. Try to negotiate commitments to monitor and mitigate model drift, bias and unsafe behavior through periodic reviews and even retraining.

Lastly, make sure SLAs go beyond uptime and include response times for flagging, tagging and remediating harmful outputs. The last point is critical since traditional SLAs often fail to address AI model behavior.

Audit and transparency rights: Negotiate a clause granting the right to request documentation and summaries of training data, model update schedules, material model changes (with notice of such changes), performance degradation, and safety testing protocols. These transparency terms can be critical to meeting due diligence and supervisory obligations under the EU AI Act, the GDPR, and evolving regulatory guidance.

Human-in-the-loop and fail-safe mechanisms: Negotiate guarantees that the tool can be deployed with human oversight, especially for high-risk use cases. If the company will be responsible for what the AI says, you need a way to intercept problematic output before it reaches the public or the intended end users.

Exit and suspension clauses for misbehavior: Request the explicit right to terminate or suspend use if the AI tool produces harmful, discriminatory or legally noncompliant outputs. Negotiate contractual remedies that go beyond mere refunds, such as cooperation in mitigation, legal defense support, or notifications to affected users. As a bonus, request a model retraining clause that allows your company to require adjustments from the vendor if output quality begins to materially degrade.

Corporate counsel are on the front lines of this shift in AI vendor risk. The good news? You do not need to reinvent the wheel. By updating core contract provisions, pushing for AI-specific guardrails, and treating deployment as a live source of risk rather than a static tool, you can help your organization seize the benefits of AI innovation without falling into the litigation traps now emerging across industries.

In the age of autonomous tools, risk does not disappear; it just travels. The question is, does your contract know where it is going?


Harshita K. Ganesh is an attorney at CMBG3 Law in Boston. She can be contacted at [email protected].