The 5th U.S. Circuit Court of Appeals received the brief formatted, numbered, and citing authority like any other filing, and it took the reviewing judges an unsettlingly short time to realize something was wrong. This was not a single slip. It wasn’t a thoughtless mistake or a wrong page number. The brief, which Texas lawyer Heather Hersh had submitted in a Fair Credit Reporting Act appeal, contained twenty-one fabricated quotations or serious misrepresentations of law and fact. The court issued a show-cause order. Hersh answered. The three-judge panel, presided over by Circuit Judge Jennifer Walker Elrod, found her response untrustworthy and fined her $2,500. Then Elrod wrote something worth reading carefully: the punishment would likely have been lighter if Hersh had simply acknowledged what happened and accepted responsibility. She paid as much for the evasion as for the original mistake.
| Category | Details |
|---|---|
| Case | Fletcher v. Experian Info Solutions |
| Court | 5th U.S. Circuit Court of Appeals (New Orleans) |
| Date of Sanction | February 18, 2026 |
| Sanctioned Attorney | Heather Hersh, FCRA Attorneys |
| Fine Amount | $2,500 |
| Nature of Violations | 21 fabricated quotations or serious misrepresentations of law/fact |
| Writing Judge | U.S. Circuit Judge Jennifer Walker Elrod |
| Related Case (Texas) | Shawn Jaffer / Jaffer & Associates — $23,000 in attorneys’ fees |
| Oregon Case (March 2026) | Bill Ghiorso — $10,000 fine (15 fake citations, 9 fake quotes) |
| Louisiana Case | Veteran lawyer — $1,000 fine for ChatGPT-generated errors |
| AI Hallucination Database | 239 documented cases (as of Feb 2026), maintained by Damien Charlotin |
| 5th Circuit AI Rule | Considered but not adopted in 2024 |
| First High-Profile Case | 2023 |

This is a familiar type of story by now. Since the first well-known instance of a lawyer presenting AI-hallucinated case citations to a court in 2023, these incidents have gone from alarming to wearyingly routine. As of mid-February 2026, there were 239 such cases in U.S. courts, according to a database kept by French attorney and data scientist Damien Charlotin. That is not a rounding error. It is a documented pattern spanning many jurisdictions, involving lawyers from firms of every size, in cases ranging from routine civil matters to more significant proceedings. Judge Elrod stated in her opinion that the issue “shows no sign of abating.” After nearly three years of media attention, penalties, and public humiliation, lawyers continue to submit briefs they haven’t reviewed.
The particulars of Hersh’s case make the larger picture more frustrating. When confronted with the fabrications, she first claimed to have relied on publicly accessible versions of the cases through reputable legal databases, and she blamed those databases for the errors. The court found this implausible. She didn’t acknowledge using AI until asked directly whether she had. Elrod’s opinion is particularly pointed on this: Hersh “misled, evaded, and violated her duties as an officer of this court.” In that context, a $2,500 fine seems almost restrained.
The Oregon case, decided in March, took a more stringent approach. Attorney Bill Ghiorso submitted a brief to the Oregon Court of Appeals that contained nine made-up quotes and fifteen fictitious citations. The court’s formula, $500 for each fictitious citation and $1,000 for each false quotation, produced a theoretical total of $16,500, which the judges capped at $10,000. According to reports, Ghiorso had his employees ask Google whether the cases were genuine, and Google’s AI search function confirmed that they were. The fact-check for AI-generated content consisted of asking another AI whether it was accurate, a detail that reads almost like satire. It isn’t.
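The arithmetic behind the Oregon sanction can be sketched in a few lines. This is only an illustration of the reported formula ($500 per fictitious citation, $1,000 per fabricated quotation, capped by the panel at $10,000); the function name and cap parameter are ours, not from any court document.

```python
def sanction_total(fake_citations: int, fake_quotes: int, cap: int = 10_000) -> int:
    """Illustrative version of the reported Oregon fee formula:
    $500 per fictitious citation, $1,000 per false quotation,
    with the total capped by the panel."""
    theoretical = fake_citations * 500 + fake_quotes * 1_000
    return min(theoretical, cap)

# Ghiorso's brief: 15 fictitious citations, 9 false quotations.
# Theoretical total: 15*500 + 9*1000 = $16,500, capped at $10,000.
print(sanction_total(15, 9))  # → 10000
```

The cap is what turns a mechanical per-violation tally into a judgment call, which is presumably why the panel applied it rather than imposing the full theoretical amount.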
These cases share something beyond personal carelessness. The tools involved, whether ChatGPT, Gemini, or AI-assisted legal research platforms, are genuinely helpful for drafting, organizing, and summarizing. They are unreliable at generating citations. They have no idea when they are inventing a case. They cannot tell you whether a quote they produced came from an actual opinion or was assembled from plausible-sounding legalese. Verification is a necessary step; that is the whole point. And practically every lawyer being sanctioned is one who skipped it.
It’s unclear if fines between $2,500 and $10,000 are genuinely altering behavior on a large scale. In 2024, the Fifth Circuit debated whether to adopt a specific rule governing the use of AI in legal filings, but ultimately decided that the current professional conduct regulations were sufficient. It appears harder and harder to defend that conclusion as the number of cases keeps rising. The distinction between “this attorney made a mistake” and “the legal profession has a systemic problem it is not adequately addressing” eventually becomes difficult to maintain.
Disclaimer
Nothing published on Creative Learning Guild — including news articles, legal news, lawsuit summaries, settlement guides, legal analysis, financial commentary, expert opinion, educational content, or any other material — constitutes legal advice, financial advice, investment advice, or professional counsel of any kind. All content on this website is provided strictly for informational, educational, and news reporting purposes only. Consult your legal or financial advisor before taking any step.
