Three judges in a federal courthouse in New Orleans read a legal brief and found twenty-one falsehoods: fake case citations, misrepresentations, quotes that no court had ever written. In another era, mistakes like these would have required a great deal of deliberate dishonesty. In 2026, they require only failing to verify what your AI tool wrote for you.
On February 18, the 5th U.S. Circuit Court of Appeals fined Heather Hersh of FCRA Attorneys $2,500 for using artificial intelligence to draft a large portion of an appellate brief without verifying a single word of it.
| Key Information | Details |
|---|---|
| Case Name | Fletcher v. Experian Info Solutions |
| Court | 5th U.S. Circuit Court of Appeals, New Orleans |
| Date of Ruling | February 18, 2026 |
| Lawyer Sanctioned | Heather Hersh, FCRA Attorneys |
| Fine Imposed | $2,500 |
| Number of Fabrications Found | 21 instances of fabricated quotations or misrepresentations |
| Writing Judge | U.S. Circuit Judge Jennifer Walker Elrod |
| Original Case Context | Fair Credit Reporting Act lawsuit; sanctions against founding attorney Shawn Jaffer |
| Original Sanctions (Lower Court) | $23,000 in attorneys’ fees against Jaffer and his firm |
| AI Hallucination Cases on Record | 239 documented cases in U.S. courts as of February 2026 |
| First High-Profile AI Brief Case | 2023 |
| 5th Circuit AI Rule | Considered but rejected in 2024; existing rules deemed sufficient |
The brief was part of a larger case involving alleged violations of the Fair Credit Reporting Act and an appeal of a $23,000 sanctions order against Shawn Jaffer, the firm’s founding attorney. Cases like this rarely make national headlines. This one did, for the wrong reasons.
The fine is not what makes the story truly uncomfortable. What transpired when the court questioned Hersh about the mistakes is. According to Circuit Judge Jennifer Walker Elrod’s ruling, Hersh’s first reaction was to blame legal databases, saying she had relied on publicly accessible case sources that she thought were reliable. She made no mention of using AI.

She admitted it only after the court asked her directly. Elrod described the response as “not credible” and “misleading in several respects,” noting that a more forthcoming admission would probably have drawn a less severe punishment. Hersh sidestepped instead, and the court took notice.
The case reflects a larger problem: in early 2026, the legal profession is still struggling to come to terms with a tool that has dominated headlines for almost three years. Elrod addressed it directly.
In her words, AI-hallucinated case citations “have increasingly become an even greater problem in our courts,” and the issue “shows no sign of abating.” Damien Charlotin, a French data scientist and attorney, has been quietly tracking these incidents in a public database; as of the date of the ruling, it contained 239 documented cases from U.S. courts. That number did not appear overnight. It grew steadily, case by case.
The first high-profile case surfaced in 2023, when two New York lawyers filed a ChatGPT-drafted brief that cited nonexistent cases. The coverage that followed was widespread and often mocking, and it served as a warning to early adopters who had not yet fully grasped the technology. The warning did not take.
The Fifth Circuit actually considered enacting a formal appellate-level rule, which would have been the first in the country to specifically regulate attorneys’ use of generative AI. In 2024, however, the court decided that the existing professional conduct rules were sufficient. That calculation now looks different.
It’s difficult to ignore the particular shape of this failure. Hersh was not a novice intimidated by new technology. She practiced in a specialty area of consumer protection law and was filing, before one of the most influential federal circuit courts in the country, a brief in a case that already involved sanctions. The stakes were plain.
The dangers of submitting unverified AI-generated content in court documents have been public knowledge for years. Elrod said as much directly: “If it were ever an excuse to plead ignorance of the risks of using generative AI to draft a brief without verifying its output, it is certainly no longer so.”
It’s reasonable to wonder if $2,500 is the appropriate amount. Online and offline observers have noted that the wasted resources—judge time, clerk time, opposing counsel hours—probably far outweigh that amount. Some contend that the fine is essentially symbolic, a mild reprimand for behavior that would resemble presenting fake evidence if AI weren’t the mechanism.
The figure may reflect the court’s recognition that professional discipline in this area is still developing, and that calibrating a precedent matters more than making an example. Or perhaps the courts are still working out how to price dishonesty in the era of machine-generated text.
The Fifth Circuit’s frustration is evident, and a circuit judge’s measured expression of frustration in a federal court opinion carries special weight. There will be more cases. The database keeps growing.
