
Minimizing deepfake risks in business

With the rise of AI and broader access to the technology, we are entering the age of the deepfake. In this article, I will discuss: (1) what a deepfake is; (2) societal concerns; (3) why employers need to care; and (4) steps to protect your company from deepfakes.

What are deepfakes?

Deepfakes, at their core, are highly convincing imitations created through AI, encompassing videos, audio clips, photos, text messages, and various media formats. Their primary aim is to deceive by making fabricated content indistinguishable from reality.

Due to significant advancements in AI, accessible programs and apps now empower users worldwide to create and manipulate images, videos, and recordings with remarkable realism. Despite the complexity of the process, the tools for crafting deepfakes have become widely available, with a recent report indicating that advanced deepfake technology will be accessible to everyone, including beginners, in the coming year.

What sets today’s deepfakes apart from those of a few years ago? The integration of deep learning by skilled creators, combined with the widespread use of generative AI and accessible modeling systems, made deepfakes markedly more believable in 2023. Looking ahead to 2024, deepfakes will likely raise the bar in realism even further, underscoring the continuous evolution of this technology.

Major societal concerns

Not surprisingly, deepfakes are not just amusing and entertaining videos; they are also being used for more nefarious purposes.

Earlier this year, a group of students in New York made a deepfake recording of their principal making racist remarks and threatening students. When the recording was circulated among the school community, a firestorm of controversy erupted. Just last month, high school students in New Jersey were caught making deepfake nude images of fellow students and sharing these images via group texts.

But the deepfake controversy goes well beyond teens and schools. We have seen allegations in court that parties are submitting deepfake evidence; fake videos created by political parties and candidates depicting their rivals, or the problems that might befall society should a rival win an election; forged speeches from government officials conveying controversial statements to the public; and allegations that wartime footage has been digitally manipulated.

In fact, last summer the FBI released a public service announcement warning of malicious actors “manipulating benign photographs or videos” to create deepfakes and target victims. The warning is an acknowledgment that a deepfake can be meticulously crafted to destroy a career and a reputation. We must all come to terms with this reality and understand that we can no longer automatically trust what we see with our own eyes.

Why you should care

Any problem impacting an entire society usually impacts the workplace, and deepfakes are no exception. Some ways that deepfakes are already hitting the workplace:

  • Technology could be used to create offensive content in the workplace. We will soon see cases involving fake videos or audio recordings of workers appearing to do something offensive or otherwise improper, and deepfakes relied upon during internal investigations could lead to wrongful disciplinary actions or terminations.
  • The FBI issued a warning that deepfakes were being used in remote job interviews.
  • A deepfake could be used to mimic a top executive’s voice to help hackers steal money from the company.

Steps to mitigate deepfake risks

  • Offer your workers deepfake training. Employers should educate themselves and their employees about the existence and potential dangers of deepfakes, including how they work, their potential impact on the organization, and the importance of staying vigilant. This means fostering a culture of skepticism, much as employees are now on guard for phishing emails. As part of overall training, make sure cybersecurity training is up to date and required for all employees.
  • Develop channels for open communication. Employees must feel comfortable questioning the legitimacy of information and reporting any suspicious activity, so encourage them to speak up and give examples of requests that are abnormal or fall outside normal company procedures.
  • Make sure IT adopts robust authentication protocols. Employers should establish strong authentication measures for access to sensitive information, systems, and accounts. This may include multi-factor authentication, biometric verification, or other secure methods to minimize the risk of unauthorized access.
  • Invest in new threat-detection tools. This is a developing area; technology to detect deepfakes has not kept pace with the technology used to create them. Stay up to speed and look for opportunities to invest in advanced tools, such as AI-powered detection systems that can help identify and flag potential deepfakes.
  • Review company policies. Make sure your policies prohibit employees from creating deepfakes involving company personnel and proprietary information. There are very few legitimate business uses for most companies. In general, employees should not be allowed to use employer resources or data to create deepfakes.

To fortify business integrity, it is crucial to deploy robust cybersecurity measures and cultivate a culture of skepticism. By staying ahead of technological threats, we can work toward a future where authenticity remains untarnished in both personal and professional realms.

Stephen Scott is a partner in the Portland office of Fisher Phillips, a national firm dedicated to representing employers’ interests in all aspects of workplace law.