Chatbots yield brave new world for employers

The headlines are buzzing with questions about ChatGPT and what it can do. Although there are more questions than answers at this point regarding its capabilities, it seems to be here to stay.

What, then, are the implications of ChatGPT for employers? Two law professors and an attorney weigh in.

ChatGPT is a type of artificial intelligence: specifically, a language model that interacts conversationally. It has been trained on large amounts of text, and it is now being tested to see how well it performs on material it has not seen before.

While it has a ways to go, ChatGPT has performed remarkably well. It has even scored a C+ on law school exams.

“Artificial intelligence (AI) is exciting and, frankly, a little worrisome,” declares David Larson, professor at Mitchell Hamline School of Law. This statement seems to capture the sentiment of most people, employers included. While it is impressive that ChatGPT scored a C+ on a law school exam, that score is far from perfect. This is because of ChatGPT’s current limitations.

“These tools hallucinate,” says Daniel Schwarcz, professor at the University of Minnesota Law School, referring to language models such as ChatGPT. By “hallucinate,” Schwarcz means errors in ChatGPT’s generated text that are semantically or syntactically plausible but factually incorrect or illogical.

“The problem is, you don’t necessarily know if they’re hallucinating. It’s easy for someone looking for a shortcut just to, frankly, uncritically take what they get from these materials and use them without thinking it through,” he says.

“This is a very attractive shortcut,” admits Schwarcz. “These tools have immense capacity to look really good on first glance.”

“ChatGPT is not perfect. It tends to give simple answers that are not always correct,” says Gerald Hathaway, a Faegre Drinker partner. “The use of only ChatGPT takes the human judgment factor out of the process, and the human judgment factor is important.”

“If it is to be used, it should be overseen by a human analyst,” Hathaway adds.

“AI chatbots only collect and rely upon currently available information. So at least as things stand now, they will be unable to develop new legal theories that might be critical to a favorable result,” says Larson. “Yet, technology is improving rapidly, and this limitation likely will be overcome.”

However, there are paths forward for use of ChatGPT in the workplace. “An AI chatbot could be used by lawyers to create first drafts of pleadings, briefs, or legal memoranda. If attorneys are procrastinating or simply unsure about how to begin analyzing a particular problem, then an AI chatbot might help them get started,” affirms Larson.

“AI chatbots such as ChatGPT and Bing Chat have tremendous potential for helping us work more efficiently,” Larson contends. “They can collect and summarize information in ways that will shift our work timeline forward. We can begin higher level work more quickly because preliminary or foundational tasks have already been completed.”

This does not mean that employers are welcoming ChatGPT into the office with open arms. Many employers have not adopted policies regarding ChatGPT use.

“Things have moved so quickly,” Schwarcz says. “Most employers have not had an opportunity to go through the processes — which usually are a bit more deliberative and are, generally, not set up for shocks.”

“A lot of places right now are taking a precautionary approach,” Schwarcz says.

There are some employers, according to Hathaway, who are embracing ChatGPT. “Perhaps some small employers think that usage is putting them on the cutting edge of technology, and so they may encourage the use, perhaps as an experiment, for now. But larger companies view the use purely as an experiment, to see how it does.”

“Many companies are simply forbidding employees to rely on, or even use, ChatGPT for work-related tasks,” reports Hathaway. “A company is free to enforce its rules, and it can mete out discipline, including termination, if an employee does not follow a company’s directive.”

This includes law firms. “Law firms, such as mine, forbid using ChatGPT to create legal documents,” says Hathaway. “The algorithms that make up ChatGPT do not replace a well-trained legal mind, at least not yet.”

“While some of us lawyers have some misgivings about the broad ban, I personally think it is the correct policy,” says Hathaway. “Employers engage employees for the employees’ talents, acumen and judgment. If an employee uses ChatGPT or another AI service for the employee’s output, the employer is not getting what it bargained for.”

Employers have been cautious about ChatGPT because, although there are potential benefits of using ChatGPT, there are real risks.

“One big risk is privacy-related,” Schwarcz affirms. “If people put proprietary information into these tools, there is no guarantee the information is going to stay that way. It will be available.”

“ChatGPT runs roughshod over IP rights, and it may use protected works in the creation of its output. A human relying on ChatGPT may not be aware that the output is actually an unlawful derivative work based on copyrighted material. That can expose the company to damages for copyright infringement,” Hathaway says.

“Because there are no citations or attribution, employees relying on chatbots to produce work product that will be shared and circulated may be guilty of plagiarism,” Larson cautions.

“The processing algorithms are collectively pretty much a black box, and since the process cannot, practically, be reconstructed, the output cannot be solidly defended, particularly if it is used to make decisions that get legally challenged,” Hathaway claims.

Whatever employers ultimately decide — to try to harness ChatGPT or ban it — Schwarcz says it is critical that employers begin to review their policies if they have not already. “I absolutely think that employers should have certain policies,” he avers. “There are privacy concerns, accuracy concerns—and a number of other concerns, as well, and those pose risks, obviously. They can be sued. They can lose clients. They can have reputational consequences.”