Deepfakes in the Workplace: AI Spawns a Terrifying New Breed of Harassment Lawsuits

By Janine Heller · April 19, 2026 · 6 Mins Read

Sometimes, while reading a civil complaint filed in an unremarkable federal district or sitting in a courthouse hallway, you realize that the law is genuinely struggling to keep up. The case may involve a woman who never posed for any explicit photo, yet one exists, created in a matter of seconds with a tool anyone can download.

Her face. The file has her name on it. Her coworkers forwarded it through a workplace group chat. And when her employer found out, they did almost nothing. That isn’t speculative. That is Tennessee. That is Washington State. The lawsuits are just starting to pile up, and that is the workplace of 2024 and 2025.

Topic: Deepfakes in the Workplace & Workplace Harassment Lawsuits
Primary Legal Framework: Title VII of the Civil Rights Act of 1964
Key Federal Agency: U.S. Equal Employment Opportunity Commission (EEOC)
Legislation Referenced: TAKE IT DOWN Act (2025), DEFIANCE Act, Preventing Deepfake Images Act (Tennessee)
States with Enacted Measures: California, Florida, Illinois (and growing)
Documented Deepfake Growth: From 500,000 files (2023) to an estimated 8 million by 2025
Fraud Surge Rate: Over 3,000% increase in deepfake-related fraud attempts in 2023
Q1 2025 Incidents: 179 major incidents, already surpassing all of 2024
Notable Case (California): Police captain awarded $4 million after AI-generated image circulated at work
Notable Case (Tennessee): NewsChannel 5 meteorologist Bree Smith Friedrichs; deepfake sexual images ignored by management
Notable Case (Washington State): Trooper Collin Pearson; AI-generated video depicting him in an intimate scenario circulated by colleagues
Key Legal Expert (Defense): Robert T. Szyba, Partner, Seyfarth Shaw LLP
Key Legal Expert (Plaintiff): Schwanda Rountree, Co-Managing Partner, Sanford Heisler Sharp LLP
Employer Liability Trigger: Failure to act when employer “knew or should have known”
Reference Sources: EEOC Harassment Guidance · Seyfarth Shaw Analysis

Employment lawyers describe the misuse of artificial intelligence to create fake, sexualized, or demeaning content with a kind of cautious gravity: it is opening a genuinely new legal frontier. In a federal lawsuit filed in December, Bree Smith Friedrichs, a former meteorologist at Nashville’s NewsChannel 5, claimed that anonymous deepfake sexual images of her circulated without repercussions while management failed to conduct a thorough investigation. According to the lawsuit, she had already experienced a culture of retaliation and sexism.

She claimed that the deepfakes were the final straw. Collin Pearson, a 19-year veteran trooper in Washington State, filed a lawsuit claiming that colleagues in the agency produced and disseminated an AI-generated video depicting him and another officer performing an intimate act, with audio mocking his sexual orientation. This wasn’t some obscure corner of the internet. It spread among peers in uniform.

Deepfakes in the Workplace

The legal framework developing around these cases is what elevates them above unsettling anecdotes. Courts are increasingly being asked to assess AI-generated content under Title VII of the Civil Rights Act, which forbids hostile work environments based on gender, race, religion, and other protected characteristics, using the same standards applied to traditional workplace harassment. The behavior does not need to take place in a conference room.

It doesn’t even need to originate on company equipment. According to lawyers, what counts is whether the employer knew, or should have known, and whether they took appropriate action. Many businesses are failing at that final step. “The employer doesn’t need to have created the deepfake,” stated Schwanda Rountree of Sanford Heisler Sharp.

In cases like these, she focuses on what the employer actually did once it learned of the content. Organizations, she pointed out, typically run into serious legal trouble when they fail to act responsibly. Many HR departments, still getting used to handling basic digital misconduct, may never have considered how a fake video of an employee kissing a coworker fits within their harassment policy, most likely because such a thing could not exist until recently.

    One of the most obvious early indicators of this direction was given by a California jury. Following the circulation of an explicit AI-generated image of her at work, a police captain received a four million dollar award. The appellate court upheld the decision, concluding that, in accordance with California law, the distribution of such fake content constituted illegal harassment.

In that case, no one had to prove the image was authentic. What had to be shown was that it spread, that coworkers saw it, and that the result was an untenable work environment. The rest was determined by the employer’s response, or lack thereof.

    Although deepfakes aren’t yet widespread in the workplace, the statistics showing their proliferation are difficult to ignore. In 2023, the technology generated an estimated 500,000 files. That number was estimated to be about eight million by 2025. According to reports, deepfake fraud attempts increased by over 3,000 percent in just one year.

In the first quarter of 2025 alone, 179 significant incidents were reported, surpassing the total for the entire previous year. These statistics are not abstract. They represent actual people whose voices and faces are increasingly being cloned, altered, and weaponized in work environments.

Lawmakers have started to react, though not in a cohesive manner. California, Florida, and Illinois have passed legislation permitting victims to seek civil and, in certain situations, criminal penalties. With bipartisan support, the federal DEFIANCE Act was approved by the Senate. First Lady Melania Trump has supported the TAKE IT DOWN Act, which mandates that social media companies remove nonconsensual intimate deepfake content within 48 hours of a victim’s request.

Drawing on her personal experience, Friedrichs was a vocal supporter of the Preventing Deepfake Images Act, which was passed in Tennessee. The legislative direction is evident, but it remains a patchwork that is still brittle in places.

    The exposure for employers extends beyond allegations of discrimination in the workplace. Depending on how fake content is produced, stored, and disseminated, privacy laws, defamation laws, and criminal liability for cyber harassment may all be relevant. Seyfarth Shaw’s Robert Szyba has been straightforward about what businesses should do: update anti-harassment policies to specifically identify AI-generated content, expand guidance to off-duty behavior that permeates the workplace, and cease using general, generic language that provides employees with no real guidance.

    “Policies that are sort of high-level and generic,” he stated, “sometimes could leave a little bit to be desired.” That’s a tactful way of saying that businesses are vulnerable.

The volume of these cases suggests that the plaintiffs’ bar has taken notice. Employees and their lawyers are increasingly aware that a false image spreading through the workplace is not only dehumanizing but potentially actionable. The law is moving, even if it is still catching up. And for employers who thought this was someone else’s problem, the reckoning is coming sooner than most expected.


    Disclaimer

    Nothing published on Creative Learning Guild — including news articles, legal news, lawsuit summaries, settlement guides, legal analysis, financial commentary, expert opinion, educational content, or any other material — constitutes legal advice, financial advice, investment advice, or professional counsel of any kind. All content on this website is provided strictly for informational, educational, and news reporting purposes only. Consult your legal or financial advisor before taking any step.
