Filing 400 legal motions at once requires a certain level of audacity. It wasn’t the audacity of an experienced litigator sorting through a mountain of valid cases; rather, it was the audacity of someone who gave the task to a machine, pressed a button, and seemed to assume the courts wouldn’t notice.
That’s essentially what happened when a paralegal flooded a court system with hundreds of simultaneously filed motions generated by an AI tool, apparently indifferent to the consequences. The judge’s response was blunt, calling the filing “an assault on the judicial system.”
| Category | Details |
|---|---|
| Incident | Paralegal filed 400 motions simultaneously using AI-generated legal documents |
| Court Response | Judge described the mass filing as “an assault on the judicial system” |
| Primary Concern | AI hallucinations — fabricated case citations appearing in legal submissions |
| Legal Framework | Hamid jurisdiction — the court’s inherent power to police its own procedures, invoked in UK courts over AI misuse |
| Related Case (UK) | R (Ayinde) v Haringey LBC [2025] EWHC 1383 (Admin) — five non-existent cases cited |
| Related Case (Canada) | Justice Joseph F. Kenkel ordered lawyer Arvin Ross to refile submissions after fictitious citations found |
| Related Case (US) | Mata v Avianca Inc (2023) — ChatGPT-generated cases cited before federal court |
| Regulatory Bodies Involved | Bar Standards Board (BSB), Solicitors Regulation Authority (SRA) |
| Expert Warning | Amy Salyzyn, University of Ottawa — warns of potential miscarriage of justice |
| Judicial Warning (UK) | Dame Victoria Sharp, President of King’s Bench Division, warned lawyers could face criminal charges |
| Possible Consequences | Contempt of court, perverting the course of justice, regulatory referral, cost sanctions |
| Industry Guidance | Bar Council guidance (2024), SRA Risk Outlook (2023), UK Courts and Tribunals Judiciary AI guidelines |
It’s difficult to stop thinking about that phrase. An assault. Not an error. Not a mistake. An assault describes an action that is purposefully hostile to the institution it targets. In some ways, it doesn’t matter whether the paralegal had bad intentions or simply never considered the repercussions. Whatever the intent, there was actual harm: to court resources, to the integrity of individual filings, and to the trust that holds legal proceedings together.
This is not an isolated incident. For the past two years, courts in the US, Canada, and the UK have been grappling with what happens when lawyers use generative AI without proper verification procedures. Justice Joseph F. Kenkel of Ontario ordered criminal defense lawyer Arvin Ross to refile his submissions in full after discovering that one of the cited cases appeared to be wholly fictitious and that several others led to unrelated civil matters with no bearing on the argument being made. “The errors are numerous and substantial,” Kenkel wrote, and there is a weariness in that line that feels telling.

The Divisional Court in London heard two cases under what is known as the Hamid jurisdiction, the court’s inherent power to police its own procedures. In one, a Haringey Law Centre solicitor and a barrister cited five fictitious cases. In the other, a lawyer put forward a witness statement citing forty-five authorities, eighteen of which were found to be fake.
Both sets of lawyers were referred to their respective regulators. Dame Victoria Sharp, President of the King’s Bench Division, was blunt in her assessment: generative AI tools such as ChatGPT are simply not capable of conducting reliable legal research. It wasn’t a recommendation. It was a warning, and a stern one.
The possibility of downstream harm is what makes this genuinely alarming rather than merely embarrassing. As Amy Salyzyn, an associate professor at the University of Ottawa’s faculty of law, puts it, courts shouldn’t base decisions about a person’s money, freedom, or rights on something wholly fictional. Put that way, it sounds obvious.
Nevertheless, it keeps happening. In the 2023 case of Mata v Avianca, a federal court in New York was presented with ChatGPT-generated citations that simply did not exist. The attorneys were sanctioned. The profession treated the story as a cautionary tale and, in many cases, carried on as before.
The legal community seems genuinely divided about how to respond. Some senior practitioners argue that seasoned attorneys can serve as a useful check: familiar with the look and feel of real case law, they are more likely to spot irregularities in AI-generated content.
Others believe the answer lies in ringfenced, purpose-built AI research tools with built-in verification, rather than general consumer products that were never designed for the demands of litigation. There is merit to both arguments. On their own, both are probably insufficient.
A simpler point, frequently overlooked in the discussion, is that the paralegal who filed 400 motions at once wasn’t showcasing the potential of AI. They were demonstrating what happens when efficiency becomes the only metric. Courts exist to settle disputes thoughtfully, methodically, and with consideration for each individual case.
Filing 400 documents at once is not a workflow optimization; it is a stress test no institution should be required to endure. The judge was right to call it what it was, and the legal profession as a whole would be wise to take the rebuke seriously before the next threshold is crossed.
