Most legal shifts reach a point at which a case stops being merely intriguing and becomes a cautionary tale. For workplace harassment enabled by AI, that moment seems to have arrived. A California appellate court recently upheld a $4 million jury award in favor of a police captain after an AI-generated, sexually explicit image resembling her was circulated among her coworkers.
At about the same time, a Washington state trooper filed a lawsuit alleging that his supervisor had used AI tools to create and disseminate a deepfake video depicting him kissing a coworker. Both cases landed squarely in the machinery of workplace law, and both stuck.
| Key Case & Legal Information | Details |
|---|---|
| Subject | AI-Enabled Workplace Harassment Litigation |
| Landmark Verdict | $4 million jury verdict affirmed by California appellate court |
| Plaintiff (Case 1) | Police Captain — California |
| Nature of Harm (Case 1) | Sexually explicit AI-generated image resembling her circulated among colleagues |
| Plaintiff (Case 2) | Washington State Trooper |
| Nature of Harm (Case 2) | Supervisor used AI to create deepfake video depicting him kissing a co-worker |
| Relevant Federal Law | Title VII, Americans with Disabilities Act (ADA), Fair Credit Reporting Act (FCRA) |
| EEOC Position | Sharing AI-generated and deepfake images constitutes unlawful harassment under existing guidance |
| Key Legislation | Federal TAKE IT DOWN Act; Florida’s Brooke’s Law — both mandate removal of nonconsensual AI content within 48 hours |
| Expert Commentary | Bradford Kelley, Shareholder at Littler Mendelson (AI & Employment Law) |
| Key Risk for Employers | Existing anti-harassment policies largely silent on AI-generated harassment content |
| Recommended Action | Update policy language, retool training, prepare investigation infrastructure |
It’s difficult to ignore how rapidly these situations are multiplying, and how unprepared most organizations appear to be. Over the past few years, HR departments have been frantically drafting AI policies, but those policies were built to address a different type of issue.
They were designed to manage intellectual property exposure, safeguard sensitive data, and prevent proprietary information from being used in public tools. They were not designed to handle the situation where a worker opens a free AI program on a Tuesday afternoon and, by the end of the day, uses it to damage a coworker’s reputation.

Bradford Kelley, a shareholder at Littler Mendelson who specializes in employment law and artificial intelligence, has been keeping a close eye on this area. He makes a crucial distinction that many HR directors continue to overlook. “It’s not just deepfakes,” he told HR Executive. “If somebody uses a generative AI tool to generate a song that shows they’re romantically interested in a colleague, that’s not necessarily a deepfake issue, but it’s definitely an issue where AI could be weaponized.”
That framing matters. A fake love song, a mocking audio clip, a fabricated conversation between two real people: none of these is a deepfake in the conventional sense, yet each can constitute harassment, and each can be created in under five minutes by someone with no technical expertise.
This is the part of the story that deserves more attention than it gets. The barrier to producing harassing AI content has largely disappeared. It once required expertise, time, and equipment, a degree of effort that was itself a deterrent. That friction is gone. What remains is the potential for serious harm and a legal system that is slowly but unmistakably beginning to respond.
On this front, the U.S. Equal Employment Opportunity Commission has already taken clear action. According to its enforcement guidelines, sharing deepfake and AI-generated images may be illegal harassment based on protected characteristics. Furthermore, it goes far beyond sexual content. Regardless of whether the term “deepfake” was used, AI tools can be used to create imagery that targets a person’s race, religion, disability, or national origin.
This could result in Title VII exposure, possible ADA claims, and hostile work environment liability. The possible repercussions, according to Littler lawyers, include “employment discrimination, privacy law violations, intentional infliction of emotional distress and even criminal liability.” That is not a narrow legal niche; it covers most of the employment-law map.
Additionally, laws are becoming more stringent. Both Florida’s Brooke’s Law and the federal TAKE IT DOWN Act require the removal of nonconsensual intimate AI-generated content within 48 hours, indicating that lawmakers are starting to view this as an emergency rather than a minor concern.
Meanwhile, a working group is pushing to repeal Colorado’s 2024 AI law before its June 2026 effective date, a reminder that the state-level regulatory landscape remains genuinely unsettled. The direction of travel, however, appears fairly obvious.
The legal system, in other words, is catching up to what employees are actually experiencing, something employment policy has been slow to do. The typical HR investigation was designed for a world in which establishing authorship was not especially difficult.
The conventional investigation framework begins to falter when a harasser can merely assert that an AI produced something on its own, that a file was altered, or that attribution is ambiguous. These complications are not speculative. They’re on their way.
Kelley and his Littler colleagues suggest a fairly straightforward set of answers. The production or dissemination of AI-generated content that targets coworkers based on protected characteristics should be expressly covered by anti-harassment policies; the language should be specific enough that no one can legitimately claim confusion.
Standard harassment training needs concrete examples of AI-facilitated misconduct, so that the romantic song and the fabricated conversation read as clear violations rather than gray areas. HR departments should also decide how they will handle digital evidence in AI-related cases before a complaint ever arrives.
It is possible to see something more significant here than a handful of odd lawsuits. This looks like the beginning of a much larger reckoning, one that will eventually force every organization to take seriously what AI tools, used improperly, can do to real people in real workplaces. A $4 million verdict is the kind of sum that usually gets organizations moving. Whether it moves them quickly enough remains an open question.