On a Thursday in April 2026, a sitting Supreme Court justice said something in a room at the University of Alabama School of Law that sounded less like a legal observation than a quiet alarm. A student asked Sonia Sotomayor, the longest-serving liberal justice on the court, about artificial intelligence’s place in the legal system. Her answer was blunt: she called it “a very bad thing.” “It shows we’re way too predictable,” she said, referring specifically to AI models that have become fairly adept at forecasting Supreme Court decisions. If an AI system can predict outcomes with that degree of success, she went on, the court might not be “opening its minds to new ideas enough.” On reflection, that admission is more striking than the statistic itself. She was describing a closed loop: a justice unnerved by a machine that maps the court’s behavior and mirrors it back.
The numbers behind that discomfort are now well documented. Researchers have recently built models that predict appellate court outcomes with roughly 70 to 71% accuracy; one achieved 70.2% across nearly two centuries of U.S. Supreme Court decisions. Attorneys and legal scholars have taken notice of SCOTUSbot, an unofficial tool designed to predict the court’s decisions, and are trying to work out what to do with the capability. According to a 2018 study by Professor Elliott Ash of the University of Warwick, AI trained on American legal data correctly predicted 88% of prosecution decisions and 82% of asylum outcomes. Those figures were startling at the time. Since then, they have only climbed.
Seventy-one percent sounds like a lot, and it is. But what unsettles is less the accuracy itself than what the accuracy implies. To forecast court outcomes at that rate, an AI must have found recurring patterns in how judges reason and decide: patterns reliable enough to model, consistent enough for a machine to pick up, generalize from, and apply to future cases. The uncomfortable reading of that figure is that, at least at the appellate level, judicial decision-making may be driven by past behavior more than most people, including the judges themselves, would be willing to acknowledge.
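To make that concrete, here is a minimal sketch of the kind of pipeline these studies describe: a classifier fit on coded features of past cases and scored on held-out decisions. Everything in it is an illustrative assumption, from the synthetic features to the coefficients to the scikit-learn setup; none of the tools mentioned here publish their internals.

```python
# Minimal sketch of outcome prediction from coded case features.
# All features, coefficients, and data below are synthetic stand-ins;
# this is not a reproduction of any published model or of SCOTUSbot.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 5000

# Hypothetical coded features for each case.
lower_court_liberal = rng.integers(0, 2, n)   # direction of the decision below
issue_area = rng.integers(0, 8, n)            # coarse subject-matter code
ideology = rng.normal(0.0, 1.0, n)            # crude score for the deciding panel

# Synthetic "history": outcomes follow stable tendencies (the learnable part)
# plus noise (the part a predictor can never capture).
signal = 1.2 * lower_court_liberal + 0.5 * ideology - 0.8 * (issue_area == 3)
reverse = (signal + rng.normal(0.0, 1.0, n) > 0.5).astype(int)

# A real pipeline would one-hot encode categorical codes; kept numeric for brevity.
X = np.column_stack([lower_court_liberal, issue_area, ideology])
X_tr, X_te, y_tr, y_te = train_test_split(X, reverse, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {accuracy_score(y_te, model.predict(X_te)):.2f}")
```

The point of the sketch is the shape of the exercise, not the numbers: whatever accuracy it prints measures only how much of the outcome is a stable function of the coded inputs, which is exactly the quantity Sotomayor finds unnerving.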
| Field | Details |
|---|---|
| Study Focus | AI prediction of appellate and Supreme Court judicial outcomes |
| Accuracy Rate (Appellate Courts) | Approximately 71% on high-confidence predictions |
| Accuracy Rate (Supreme Court, historical) | 70.2% across nearly two centuries of U.S. Supreme Court decisions |
| Notable AI Tool | “SCOTUSbot” (The Economist) — designed to anticipate Supreme Court rulings |
| Research Context | Studies published 2024–2026; methods based on identifying “logical paths” from historical cases |
| Key Critic | U.S. Supreme Court Justice Sonia Sotomayor |
| Sotomayor’s Comments | Delivered at University of Alabama School of Law, April 2026 |
| Her Assessment | Called it “a very bad thing” — said it shows the court is “way too predictable” |
| Key Risk Identified | AI trained on biased historical data amplifies existing systemic disparities |
| Earlier Benchmark | AI correctly predicted 88% of U.S. prosecution decisions; 82% of asylum outcomes (2018 study) |
| Researcher | Professor Elliott Ash, University of Warwick (2018 foundational study) |
| Broader Implication | Predictability in judicial outcomes may signal over-reliance on precedent over genuine deliberation |

All of this carries a bias problem, and researchers are generally candid about it. These systems are trained on historical case data, and historical rulings on bail, sentencing, immigration, and property carry human biases: some obvious, some systemic, some operating quietly over decades. When a model learns to forecast outcomes from that data, it learns more than legal reasoning; it also absorbs the patterns of prejudice and error that shaped those outcomes. As Professor Ash put it plainly in his seminal work, any automated decision system trained on biased data will itself be biased. What gets predicted, in other words, is not justice as it exists in theory but the outcomes this particular system, with its particular history, has tended to produce.
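A toy example makes the mechanism visible. In the sketch below, every variable and coefficient is invented for illustration: historical decisions penalize one group directly, and a model trained without the sensitive attribute still reproduces the disparity, because a correlated proxy feature carries it in.

```python
# Toy illustration of the point Ash makes: a model trained on biased
# outcomes reproduces the bias even when the sensitive attribute is
# excluded from its inputs. All variables here are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20000

group = rng.integers(0, 2, n)                 # sensitive attribute
merit = rng.normal(0.0, 1.0, n)               # what decisions should turn on
proxy = merit + 1.5 * group + rng.normal(0.0, 0.5, n)  # e.g., neighborhood code

# Historical rulings: partly merit, partly a direct penalty on group 1.
favorable = (merit - 1.0 * group + rng.normal(0.0, 0.5, n) > 0).astype(int)

# Train WITHOUT the group variable, on "neutral" features only.
X = np.column_stack([merit, proxy])
model = LogisticRegression().fit(X, favorable)
pred = model.predict(X)

for g in (0, 1):
    mask = group == g
    print(f"group {g}: historical favorable rate {favorable[mask].mean():.2f}, "
          f"predicted favorable rate {pred[mask].mean():.2f}")
```

The model recovers the penalty by combining merit with the proxy; dropping the sensitive attribute is not enough, because the bias rides in on whatever correlates with it.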
It is instructive to watch how the legal system is handling all of this. There is a discernible drift but no consensus. Chief Justice John Roberts devoted his entire 2023 year-end report to AI. During last week’s oral arguments, Justice Samuel Alito paused the proceedings to ask a well-known proponent of artificial intelligence whether the court should just ask Claude to make the decision. The attorney politely demurred. But if it was a joke, it revealed a truth: the notion is no longer absurd enough to be dismissed out of hand.
Because her remarks at Alabama were among her longest public statements on the topic, it is worth focusing on what Sotomayor actually said rather than the headline figure. She did not reject AI. She told the students that new associates at the large law firms where her former clerks now work are all expected to use it, and she encouraged the class to become proficient before graduating. She uses it herself, or at least acknowledges its reach. Her worry was narrower and, in a sense, more interesting. AI, she said, is “a sophisticated human” that takes all of its inputs from people and carries “the very best in us and the very worst in us.” That framing may sound like a dystopian warning, but it is not. It is an observation about inheritance: what we build reflects who we are.
It is still unclear whether a 71% prediction rate is a sign of judicial rigidity, a triumph of machine learning, or both, and there is a fair argument for reading it either way. If the patterns are real and learnable, then judges are either applying principled consistency, which is arguably what law should look like, or running on autopilot in ways that foreclose the genuine reconsideration complex cases sometimes demand. Those two possibilities feel very different, and the number alone cannot tell you which is true. That is probably the most honest way to put it: a statistic does not explain itself. It just sits there at 71%, posing a question the legal system is still unsure how to answer.
