Somewhere in a law school building (pick any one; the details are starting to feel interchangeable), a student is sitting down to write a final exam that will shape a significant portion of their professional trajectory. They have spent weeks studying doctrine, building outlines, and reading cases. The stakes are high: unlike grades in most other graduate programs, law school grades drive clerkship applications, firm hiring decisions, and the unseen sorting mechanisms that keep operating in the legal profession for decades after graduation.
Now imagine that the professor grading that exam handed the student's answer to a language model and asked it to assign the score.
Institutional and legal reckoning over AI in academic evaluation has been building for some time, but 2025 and 2026 made it impossible to ignore. A law school professor fired for using AI grading tools filed a wrongful termination lawsuit, and the case recently cleared certification. It sits at the nexus of academic freedom, employment law, and a body of institutional policy that most universities have not yet drafted. The suit raises the questions law school deans and HR departments have been quietly dreading: Was there a formal policy? Did she break it? Is it reasonable to fire someone for using a technology that half her department is also using? These are not rhetorical questions. At some point, courts will have to answer them.
Some background is helpful here. In 2023, a professor at Texas A&M University–Commerce failed a class of agriculture students after running their papers through ChatGPT and believing the chatbot's claim that it had written every one of them. It had not. In the language of AI researchers, it was "hallucinating": producing confidently incorrect output with no awareness that it was incorrect. The incident exposed a dynamic that has since spread across higher education: instructors making consequential decisions about students' academic futures based on output from tools that, when pressed, describe their own judgments as probabilistic rather than certain. The university eventually resolved the matter.
The Yale case, filed in early 2025, framed the problem from the other side. An EMBA student, a French investor and entrepreneur suing under the pseudonym John Doe, was suspended for a year and given a failing grade after his exam was flagged by a detection program called GPTZero. The professor who referred the case to the Honor Committee described the exam as "unusually long and elaborate in formatting" and said three essay responses showed a high probability of being AI-generated. The student denied using AI. He consulted Texas-based AI expert Scott Aaronson, who told him flatly that it is mathematically impossible for any detection tool to identify AI use with 100% certainty, and that GPTZero in particular deals in probabilities, not facts. The student even submitted GPTZero scans of scholarly articles written by Yale academics, including a former president of the university, which the program flagged as possibly AI-generated. He was suspended anyway, for "not being forthcoming," a charge that surfaced after, not before, his hearing.
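Aaronson's point about probabilities has a concrete arithmetic consequence that is easy to miss. The toy Bayes calculation below is a sketch with invented numbers; the base rate and error rates are illustrative assumptions, not GPTZero's actual characteristics. It shows why even a seemingly accurate detector produces a large share of false accusations when most students are honest.

```python
# Toy Bayes calculation: why flagging by a probabilistic AI detector
# is weak evidence on its own. All numbers are invented for illustration;
# they are NOT GPTZero's real error rates.
prior_ai = 0.10            # assume 10% of submissions actually used AI
true_positive_rate = 0.90  # detector flags 90% of genuine AI use
false_positive_rate = 0.05 # detector also flags 5% of honest work

# P(flagged) = P(flag | AI) * P(AI) + P(flag | honest) * P(honest)
p_flagged = (true_positive_rate * prior_ai
             + false_positive_rate * (1 - prior_ai))

# P(AI | flagged) via Bayes' rule
p_ai_given_flag = (true_positive_rate * prior_ai) / p_flagged

print(f"Probability a flagged exam actually used AI: {p_ai_given_flag:.0%}")
```

With these invented numbers, a flagged exam used AI only about two-thirds of the time, meaning roughly one accused student in three is innocent. The exact share depends on a base rate no detector can verify, which is the core of Aaronson's objection.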
A Law School Professor Was Fired for Using AI to Grade Exams. Her Wrongful Termination Suit Just Got Certified
| Category | Details |
|---|---|
| Central Legal Issue | Wrongful termination of a law school professor for using AI tools to grade student exams |
| Related 2023 Incident | Texas A&M University–Commerce — Professor Jared Mumm failed entire agriculture class after ChatGPT falsely claimed to have written every submitted paper |
| Related 2025 Yale Case | Anonymous EMBA student (John Doe) sues Yale for wrongful suspension after exam flagged by AI detection tool GPTZero; alleges national origin discrimination |
| Yale Case Filed | U.S. District Court, Bridgeport, Connecticut |
| Yale Charges Against Student | “Not being forthcoming” to Honor Committee; violation of examination rules |
| Yale Student’s Penalty | One-year suspension; failing grade in course |
| AI Detection Tool Used at Yale | GPTZero / ChatGPTZero |
| AI Expert Consulted (Yale Case) | Scott Aaronson — Texas-based AI expert; noted GPTZero cannot detect AI use at 100% certainty |
| Key Research (2026) | University of Minnesota Law Professor Daniel Schwarcz co-authored study finding generative AI can grade law school exams at near-human accuracy when given proper rubrics |
| ABA Standards on AI Grading | None — no current ABA standard addresses how faculty should approach AI-assisted grading |
| Wrongful Termination Grounds | Potential breach of contract, retaliation, violation of public policy, discrimination |
| Broader Context | Students increasingly accused of AI use; professors quietly using AI to grade; no institutional consensus |

The student's lawsuit claims retaliation, discrimination on the basis of national origin, and breach of contract. Yale opposed his request for anonymity, arguing that the details of the complaint made him easy to identify. The case is being heard in federal court in Bridgeport.
Meanwhile, research on AI grading has moved faster than the policy around it. Daniel Schwarcz, a law professor at the University of Minnesota, co-authored a study in early 2026 finding that, given well-crafted rubrics, generative AI tools can grade law school exams with near-human accuracy. Schwarcz, who has spent years researching AI and law, discussed the findings on a February 2026 episode of the ABA Journal's Legal Rebels podcast. The discussion raised the obvious follow-up: if AI can grade consistently, who decides whether and when professors may use it, and under what conditions?
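To make the mechanics concrete, here is a minimal sketch of what rubric-driven grading can look like in practice. It assumes an OpenAI-style chat API via the `openai` Python package; the model name, rubric text, and scoring scale are illustrative placeholders, not details from the Schwarcz study.

```python
# Minimal sketch of rubric-based exam grading with a chat-completion API.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
# The model name and rubric below are illustrative only.
from openai import OpenAI

client = OpenAI()

RUBRIC = """Score the answer from 0-10:
- Issue spotting (0-3): identifies the negligence and causation issues.
- Rule statement (0-3): states the governing doctrine accurately.
- Application (0-3): applies the rule to the facts given.
- Organization (0-1): clear IRAC structure.
Return the total score and a one-sentence justification."""

def grade_answer(exam_answer: str) -> str:
    """Ask the model to score one exam answer against the rubric."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the study's models are not specified here
        temperature=0,   # reduce run-to-run variance in scoring
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": exam_answer},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(grade_answer("The defendant owed a duty of care because..."))
```

Note where the leverage sits in this sketch: the model grades against explicit criteria rather than its own impression of the answer, which is consistent with the study's framing that well-crafted rubrics, not the model alone, are what push accuracy toward human levels.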
So far, almost no one has decided. The American Bar Association has no standard addressing how law professors should approach AI-assisted grading. Individual institutions have been improvising, writing policies that are applied inconsistently and untested in employment disputes. At a school that has published no written guidelines, a professor who uses an AI tool to help grade is in genuinely ambiguous territory. So is the institution that later fires her for it.
There is a sense that litigation was inevitable. Some of the people grading students' work are quietly adopting the same technology students are being disciplined for using, and the rules governing the two sides of that equation are asymmetrical, inconsistently written, and often not written at all. Law schools are in a particularly uncomfortable position: three years into the generative AI era, their internal governance is about as underdeveloped as any other industry's, and these are the institutions that train people to navigate unclear rules.
Whether the wrongful termination case ends in a broadly applicable ruling or settles quietly without setting precedent remains to be seen. Either way, it has already forced a conversation that was happening in faculty meetings and hallways into a courtroom, where the absence of a clear policy will be plainly visible to a judge. That may be the most useful thing it accomplishes, whatever the outcome.
