When a judge has to inform a lawyer that the case he cited simply doesn’t exist, a certain kind of quiet embarrassment descends upon the courtroom. That is essentially what happened when Circuit Judge Jordon Kimura examined a brief submitted by Mark Valencia, an attorney at the Honolulu firm Case Lombardi, and found references to cases that no court had ever decided; they might as well have been made up.
Valencia offered a succinct and, in its way, illuminating explanation: his associate had drafted the brief using an AI program, and he had not verified the citations. He called it an honest error. Kimura was not persuaded.
| Information Category | Details |
|---|---|
| Court Name | Hawaiʻi Supreme Court |
| Acting Chief Justice | Sabrina McKenna |
| Report Presented | AI and the Court System — Long-Awaited Committee Report |
| Key Rule Referenced | Rule 11, Hawaiʻi Rules of Civil Procedure |
| Attorney Sanctioned | Mark Valencia, Case Lombardi firm |
| Presiding Judge (Valencia case) | Circuit Judge Jordon Kimura |
| Nature of Violation | AI-generated hallucinated citations filed in court briefs |
| Global AI-Hallucination Cases Documented | 690 worldwide, 470 in the U.S. as of report date |
| Notable U.S. Sanction Case | Chicago Housing Authority — $59,500 in sanctions imposed |
| Hawaiʻi Federal Court Stance | Mandatory disclosure of AI use; verification required |
| Report’s Core Conclusion | Rule 11 is “broad enough to provide adequate safeguards” |
| Further Reference | American Bar Association on AI in Legal Practice |
The incident might have looked like a one-off embarrassment, had another attorney from the same firm not been offering a judge a nearly identical explanation at the same time. All six citations in Kaʻōnohiokalā J. Aukai IV’s brief were faulty: two were entirely fabricated, and the rest misrepresented the content of actual cases.
His explanation, too, blamed AI hallucinations. Circuit Judge Kelsey Kawano, the judge in that matter, accepted the apology and declined to impose penalties. It’s difficult to ignore the pattern emerging here: the errors are the same, the justifications are the same, and the outcomes have been remarkably inconsistent, at least in state court.

All of this came before the Hawaiʻi Supreme Court last Monday, when acting Chief Justice Sabrina McKenna received a long-awaited report on artificial intelligence and the court system from a committee of judges and legal experts.
The report is comprehensive and well-considered, examining the potential benefits of AI tools for courts, including document processing, transcript generation, and support for non-English-speaking self-represented litigants. These are legitimate, practical uses. However, the section that attracted the most attention deals with the obvious risk: what happens when attorneys stop exercising their own judgment and rely on AI instead.
In that regard, the report’s conclusion is, to put it kindly, optimistic. According to the report, Rule 11 of the Hawaiʻi Rules of Civil Procedure, which mandates that statements made to the court be truthful and gives judges the authority to punish infractions, is “broad enough to provide adequate safeguards.”
Much of the legal community believes this conclusion may understate the problem, or at the very least the enforcement culture around it. In theory, the rule covers the misconduct. That much is accurate. Whether it is applied consistently is a different matter entirely.
The federal courts in Hawaiʻi have adopted a much tougher stance. Attorneys practicing in federal court who use AI to draft a document must disclose that use and certify that nothing in it is fabricated. Violations carry penalties under the Federal Rules of Civil Procedure. According to reports, the Hawaiʻi Supreme Court is debating whether to adopt comparable rules. The consideration seems long overdue.
Nor is this a local issue. As of the day the Hawaiʻi report was presented, French researcher Damien Charlotin’s global database of court rulings involving AI-generated hallucinations counted 690 cases worldwide, more than twice as many as a few months earlier. Of those, 470 arose in the United States.
In one particularly notable instance, a firm representing the Chicago Housing Authority used AI to draft a brief in a case involving a $24 million jury verdict, and the brief was rife with fabricated citations. After opposing counsel flagged fourteen faulty citations, the court imposed $59,500 in sanctions, calling the submission of false citations “a grave threat to the judicial branch.”
The phrase “grave threat” captures what is actually at stake. Courts run on credibility. Citing a case that does not exist is not merely a clerical error; it undermines the premise that legal arguments rest on real authority. The Hawaiʻi Supreme Court will eventually have to offer more than a report to answer the question of whether Rule 11 alone, applied inconsistently, with apologies accepted and sanctions sometimes declined, is truly sufficient to hold that line.
Disclaimer
Nothing published on Creative Learning Guild — including news articles, legal news, lawsuit summaries, settlement guides, legal analysis, financial commentary, expert opinion, educational content, or any other material — constitutes legal advice, financial advice, investment advice, or professional counsel of any kind. All content on this website is provided strictly for informational, educational, and news reporting purposes only. Consult your legal or financial advisor before taking any step.
