AI Deepfakes Spawn New Breed of Workplace Harassment Lawsuits

Feb. 24, 2026, 10:00 AM UTC

The misuse of artificial intelligence to generate fake, sexually explicit, or harassing content is introducing new forms of workplace disputes and potentially significant liability for employers.

The evolving technology has already triggered lawsuits, including cases brought by a Washington State Patrol trooper and a Nashville TV meteorologist—both allegedly targeted in demeaning or sexualized AI-generated images that their employers inadequately addressed.

“The advent of deepfakes sort of presents employers with a whole new frontier of challenges,” said Robert T. Szyba, a partner at Seyfarth Shaw LLP.

Doctored images, video, or audio recordings targeting an employee based on their gender or other protected traits could give rise to workplace harassment or discrimination claims. These would be analyzed under the same legal framework courts use to assess traditional employment bias litigation under Title VII of the 1964 Civil Rights Act.

The latest disputes are happening in tandem with increased scrutiny of AI companies from federal and state policymakers for enabling the creation of fake, sexualized, or humiliating images, including non-consensual deepfake pornography. States are increasingly responding with a patchwork of civil and criminal measures to protect victims from the harms of AI-generated deepfakes, potentially intensifying litigation risks for both companies and employees.

As AI technology advances to produce increasingly realistic content, recent studies show that deepfakes are spreading across industries, targeting workers and consumers and costing businesses millions of dollars through fraud.

While AI-generated deepfakes aren’t yet widespread in the workplace, their prevalence is expected to grow as AI use becomes more commonplace, said Margo Wolf O’Donnell, a partner and co-chair of Benesch’s labor and employment practice group.

“These cases are being filed because I’m sure the plaintiffs’ bar is becoming more attuned to these issues, and so are employees,” she said.

Emerging Litigation

Cases leading the charge in this new arena provide compliance lessons for employers, showing that doctored images can buttress or independently trigger harassment or bias claims, regardless of whether the conduct at issue occurred in person or off-duty, said Schwanda Rountree, a co-managing partner at Sanford Heisler Sharp LLP.

In his December lawsuit filed in state court, 19-year veteran Washington State Patrol trooper Collin Pearson alleged a pattern of workplace bias, harassment, and constitutional violations by other officers, including an internal investigation about an absence that “outed” his sexuality.

Agency personnel also allegedly created and widely circulated an AI-generated video in December depicting Pearson and another uniformed trooper “engaging in intimate kissing, while an audible voice states, ‘This is SWAT training, no homo.’” The complaint characterized the video as a “derogatory” attempt to ridicule Pearson’s sexual orientation.

Meanwhile, former Tennessee NewsChannel 5 meteorologist Bree Smith Friedrichs alleged in her December federal suit that the station’s “culture of sexism, harassment, and retaliation” forced her to decline a contract renewal and depart the previous January.

She said management failed to properly investigate her concerns, which other female colleagues also raised. Management’s failure to address threatening, anonymous “deepfake” sexual images of Friedrichs, and the scammers who used her likeness to defraud viewers, “was the last straw,” according to the suit.

A Baltimore high school athletic director took a plea deal last year for creating a deepfake audio recording of the school’s principal making racist and antisemitic comments. He was sentenced to four months in jail, and the principal settled his negligence and defamation lawsuit against school officials.

Legislative Actions

Several states, including California, Florida, and Illinois, have enacted measures allowing people targeted by AI deepfakes to seek civil and criminal remedies.

Friedrichs was a prominent advocate for Tennessee’s “Preventing Deepfake Images Act” last year.

Efforts to grant victims a federal civil right to sue over nonconsensual, sexually explicit AI-generated images are being debated in the US House following the Senate’s recent passage of the Defiance Act. The measure would build on last year’s Take It Down Act, championed by First Lady Melania Trump, which requires social media companies to remove such content within 48 hours of a victim’s request.

But in the workplace, a derogatory image may contribute to a hostile work environment claim under Title VII and analogous state laws if it’s circulated or discussed among coworkers and causes negative consequences, including affecting the victim’s ability to perform their job, Rountree said.

Other factors courts would consider include whether workplace equipment was used, the involvement of a manager in the underlying misconduct, or if the employer failed to act promptly to address the issue, she said.

“The employer doesn’t need to have created the deep fake” to face liability, Rountree said. “What I would be paying attention to if I had a case like this is what steps the employer took when they became aware of this activity. Where they get sort of hung up or in trouble is failing to act reasonably to correct it when they knew or should have known that this was occurring.”

Legal Guardrails

Beyond employment anti-harassment and bias laws, employers could face liability under federal or state privacy and defamation statutes if images are shared externally or stored on company systems, attorneys said.

To mitigate risks, companies should adopt and consistently enforce clear policies that restrict AI misuse across the workforce, and update anti-harassment policies to address evolving technologies and “put employees on guard,” Seyfarth’s Szyba noted.

“Policies that are sort of high-level and generic sometimes could leave a little bit to be desired because they insufficiently give guidance or govern conduct,” he said.

Guidance on off-duty conduct is also pertinent, Szyba said. Many workplace legal issues in the digital age stem from off-duty conduct that eventually seeps into the workplace, and “employers are increasingly taking on or being put in the position” to “police” those activities, he said.

“It is very hard to control,” Szyba added.

To contact the reporter on this story: Khorri Atkinson in Washington at katkinson@bloombergindustry.com

To contact the editors responsible for this story: Genevieve Douglas at gdouglas@bloomberglaw.com; Jay-Anne B. Casuga at jcasuga@bloomberglaw.com
