
AI chat histories could become evidence in court

Two recent federal court decisions are raising a new question for employers: Could your employees’ AI chat histories show up as evidence in a lawsuit?

The answer, at least for now, is that it depends on the circumstances. And that uncertainty highlights the risk.

What’s happening

As generative AI tools like ChatGPT and Claude become part of everyday work, courts are starting to treat AI conversations as a new category of business records potentially subject to discovery in litigation.

Two February 2026 decisions, issued the same day by different federal courts, highlight just how unsettled this area is:

  • In Warner v. Gilbarco, Inc., a federal court in Michigan found that a plaintiff’s use of ChatGPT to help prepare her pro se case was protected as work product and not discoverable.
  • In United States v. Heppner, a federal court in New York reached the opposite conclusion, holding that AI conversations generated by a criminal defendant were not protected and could be used as evidence.

Why this matters for HR and business teams

For most organizations, this is no longer a hypothetical concern. Employees are already using AI tools to draft emails and documents, summarize workplace issues, research HR policies or employment questions, and prepare responses to complaints or disputes.

Those interactions can create a digital paper trail that may later be:

  • Requested in litigation
  • Reviewed by opposing counsel
  • Interpreted as evidence of intent, decision-making, or knowledge

Importantly, because AI platforms are operated by outside vendors, courts may treat those records as having been shared with a third party, which can weaken confidentiality or privilege claims.

The key risk: the ‘it feels private’ problem

AI tools often feel like internal assistants. But legally, they may function more like external platforms.

That distinction matters. In at least one ruling, a court emphasized that AI tools are not lawyers, conversations may not be confidential, and data may be stored or shared under platform terms.

As a result, employees may unintentionally create discoverable records while believing they are working privately.

What employers should do now

This isn’t about banning AI. It’s about using it with your eyes open.

Here are a few practical steps:

  • Set clearer boundaries for sensitive use
    AI can be helpful for productivity, but using it for employee relations issues, disciplinary decisions, or legal questions can introduce risk if those chats are later reviewed.
  • Train managers and HR teams
    Most employees don’t realize that AI prompts and outputs may be stored and potentially disclosed.
  • Revisit your AI and data policies
    Clarify what can be entered into AI tools, when to involve legal, and which platforms are approved.
  • Think ahead about records and discovery
    AI-generated content may need to be preserved like other business records.
  • Assume variability across courts
    As Warner and Heppner show, outcomes may depend heavily on context, including how the tool is used and whether counsel is involved.

AI is already part of how work gets done. Now it’s becoming part of how work gets examined in court.

For employers, the shift is simple but important: Treat AI interactions less like private notes and more like potential business records, because that’s how courts may see them.