
Artificial intelligence, accuracy and the Shaboozey shuffle

As summer winds down, I look forward with excitement to the school year. It brings new friendships, football, and daily “recaps” from my kids (theirs alone to share).

During the summer, I have a vague idea of the activities, adventures, or people they are seeing. But when school is in session, every day is a new day. Maybe they are learning about sharks or the Oregon Trail, or taking spelling tests that I would certainly fail.

And as I look forward, I am reminded of the silly shtick my daughter and I engaged in all last year. I would ask who she played with and what they did. No matter her answer, I would then ask how Shaboozey was doing. She would say that person was not real, and I would pull up Spotify and inform her that Shaboozey was very real. This caused some confusion and self-doubt about whether the country music star was in her preschool. In her mind, he is real, and they do listen to music, so maybe he was playing up in the loft?

While a fake reality about one of my daughter’s favorite musicians has no known (current) downside, the same cannot be said for GenAI hallucinations in the workplace.

Here are three examples of how hallucinations can create risk in the workplace:

  • Chatbots that make unauthorized promises can inadvertently bind you to false obligations.
  • Fabricated citations, inaccurate policies, or misapplied legal analysis may expose you to sanctions or lawsuits.
  • Hallucinated outputs pose a serious risk of slipping into public-facing disclosures or regulatory filings, creating compliance and reputational concerns.

That raises the question of what to do when the hallucinations seem real. The key is a five-step process that helps minimize your company’s exposure:

  • Require human oversight. Review GenAI outputs before publishing, especially in legal, HR, compliance, or other high-stakes contexts.
  • Train for hallucination awareness. Teach employees to spot red flags (e.g., overconfident tone or missing source links) and verify outputs (check for fake citations) against independent sources.
  • Adopt safe AI tools and practices. Avoid vague queries and instead use structured prompts with clear context to reduce hallucinations (a sketch follows this list).
  • Monitor and govern AI use. Track and label AI-generated content (see the second sketch below), conduct regular audits, and assign an oversight role or committee to update policies and handle incidents.
  • Set boundaries and transparency rules. Limit GenAI to drafting (not final authority) for sensitive content, disclose AI involvement when appropriate, and define company-wide usage policies.
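To make the structured-prompt step concrete, here is a minimal sketch in Python. Every detail in it (the company size, the policy topic, the “UNVERIFIED” convention) is an invented illustration, not a prescribed template; the point is that supplying a role, context, task, and constraints gives the model less room to invent.

```python
# Hypothetical illustration of a vague query versus a structured prompt.
# All company details and conventions below are invented for this example.

VAGUE_PROMPT = "Write our remote-work policy."

STRUCTURED_PROMPT = """\
Role: You are drafting an internal HR document for human review.
Context: A 200-person U.S. employer is updating its remote-work policy.
Task: Draft a one-page policy covering eligibility, equipment, and core hours.
Constraints:
- Do not cite statutes, regulations, or court cases.
- If you are unsure of a fact, write "UNVERIFIED" instead of guessing.
- Flag every point that needs review by counsel.
"""

if __name__ == "__main__":
    # In practice these strings would be sent to your GenAI tool of choice;
    # they are printed here so the sketch runs without any external service.
    print("Vague:\n", VAGUE_PROMPT)
    print("Structured:\n", STRUCTURED_PROMPT)
```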
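Similarly, tracking and labeling AI-generated content can be as simple as attaching metadata to anything a model drafts so a later audit can find it. The sketch below assumes illustrative field names and a hypothetical “[AI-assisted]” label format; it is one possible shape for such a record, not a compliance standard.

```python
# A minimal sketch of labeling AI-generated drafts for later audit.
# Field names and the label format are assumptions for illustration.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDraft:
    text: str
    tool: str                       # which GenAI product produced the draft
    created: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    reviewed_by: str | None = None  # set only after human sign-off

    def approve(self, reviewer: str) -> str:
        """Record the human reviewer and return the labeled, publishable text."""
        self.reviewed_by = reviewer
        return f"[AI-assisted; reviewed by {reviewer}] {self.text}"

draft = AIDraft(
    text="Employees may work remotely up to three days a week.",
    tool="example-genai-tool",
)
print(draft.approve("HR compliance lead"))
```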

Just as my daughter needed to separate her real classmates from the imagined presence of Shaboozey in the loft, we need to separate AI’s creative fictions from reliable facts. The difference is that while a child’s confusion is endearing (or at least that is what I tell myself), hallucinations within your company’s policies, procedures, hiring strategy, or workflow carry very real consequences. My daughter now knows Shaboozey isn’t hiding in the loft; you need to make sure your company knows when AI is hiding fiction in its work, and calls it out before it becomes costly.

Stephen Scott is a partner in the Portland office of Fisher Phillips, a national firm dedicated to representing employers’ interests in all aspects of workplace law.