One particular moment from Justice Sonia Sotomayor’s recent speech at the University of Alabama School of Law stands out. It isn’t the careful legal analysis or the thoughtfully phrased policy opinions, but a single sentence that struck a chord with a room full of aspiring attorneys: “It shows we’re way too predictable.” She wasn’t discussing a pattern in oral arguments or a colleague’s disagreement.
She was discussing artificial intelligence, particularly AI models that have become remarkably adept at predicting the Supreme Court’s decision before the justices have even heard a case.
| Category | Details |
|---|---|
| Full Name | Sonia Maria Sotomayor |
| Date of Birth | June 25, 1954 |
| Place of Birth | The Bronx, New York City, New York |
| Age | 71 |
| Nationality | American |
| Ethnicity | Puerto Rican-American |
| Education | Princeton University (A.B., 1976); Yale Law School (J.D., 1979) |
| Current Role | Associate Justice, U.S. Supreme Court |
| Appointed By | President Barack Obama |
| Year Appointed | 2009 |
| Judicial Ideology | Liberal / Progressive |
| Prior Roles | U.S. District Judge (S.D.N.Y.), U.S. Court of Appeals Judge (2nd Circuit) |
| Notable Distinction | First Latina Justice in U.S. Supreme Court history |
| Recent Event | Spoke at University of Alabama School of Law, April 2026 |
That admission is worth sitting with for a moment. A sitting justice, one of the nine tasked with interpreting the nation’s highest law, expressed public concern that a machine could read her court like a used paperback. That is not a technical observation; it is closer to an institutional admission.
Sotomayor’s remarks were in response to a question posed by a law professor regarding the benefits and drawbacks of artificial intelligence in the judiciary. As courtrooms around the nation experiment with the technology in everything from case management to legal research, it is becoming more difficult to avoid answering this question.

She mentioned only that a colleague had brought the issue to her attention, without naming a particular model. But her worry wasn’t really about which algorithm or which company. It was about what that level of predictability says about the court itself.
Using case data dating back to 1816, a 2017 peer-reviewed study found that a machine-learning algorithm correctly predicted about 71.9% of individual justices’ votes and roughly 70.2% of the court’s actual case outcomes. That figure, already startling when the study was released, is probably higher now.
Since then, AI systems have advanced significantly, and the current court—locked into a 6-3 conservative supermajority—presents a clearer ideological signal than any bench in recent memory. It’s possible that the algorithms are actually identifying something more mechanical—a dependable sorting of cases along preset lines—rather than judicial reasoning at all.
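To make the idea concrete, here is a minimal sketch of how outcome-prediction models of this kind work: encode each case as a handful of features, fit a classifier to past outcomes, and predict new cases. Everything below is hypothetical, including the features, the toy data, and the deliberately simple nearest-centroid rule; the actual 2017 study relied on far richer case-level features and tree-based ensemble models.

```python
# Toy, hypothetical illustration of outcome prediction.
# Each case: (lower_court_leaned_liberal, petitioner_is_government,
# issue_is_economic) -> outcome (1 = reverse, 0 = affirm).
train = [
    ((1, 0, 0), 1),
    ((1, 1, 0), 1),
    ((0, 0, 1), 0),
    ((0, 1, 1), 0),
    ((1, 0, 1), 1),
    ((0, 0, 0), 0),
]

def predict(features):
    """Nearest-centroid classifier: pick the class whose mean
    feature vector is closest (squared distance) to this case."""
    def centroid(label):
        rows = [f for f, y in train if y == label]
        return [sum(col) / len(rows) for col in zip(*rows)]

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    return 1 if dist(features, centroid(1)) < dist(features, centroid(0)) else 0
```

The point of the sketch is the one Sotomayor’s remark implies: if a court sorts cases along preset lines, even a classifier this crude can anticipate it, because the signal it is picking up is structural rather than deliberative.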
Underneath Sotomayor’s words is an unsettling implication. “We may not be stepping out of our normal thinking and opening our minds to new ideas enough,” she said. It read less like a critique of AI than an internal one, with a senior justice openly questioning whether the court she sits on is doing the demanding cognitive work its role requires. There is a sense that she understands how it looks from the outside, even if she cannot say so fully from the inside.
The contradictions were what made the Alabama appearance so intriguing. In the same breath as calling AI’s predictive forecasting “a very bad thing,” Sotomayor advised the students present to become proficient with it as a tool before graduating. She recalled having dinner with former law clerks, now employed by large firms, who told her that new associates are expected to use AI.
Artificial intelligence, according to her, is “a sophisticated human” that can sustain both the best and worst aspects of human nature. That doesn’t sound like someone who is discounting technology. It is the language of someone who has given it considerable thought and come to a truly complex conclusion.
It’s difficult to ignore how this incident relates to a larger conflict within the court. A few weeks prior, during oral arguments in a case involving a prominent proponent of artificial intelligence, Justice Samuel Alito asked the attorney if they should simply let Claude, Anthropic’s AI system, make the final decision. The attorney politely declined.
However, the exchange revealed something genuine: the justices are aware of these tools, thinking about them, poking fun at them, and even making jokes about them. This suggests that the conversation taking place inside the marble halls of the court is more advanced than most public statements reveal.
Observing this from the outside, it’s remarkable how little the AI forecasting narrative actually has to do with AI. It concerns the consequences of an institution becoming so structurally predictable that its results can be predicted ahead of time. The predictability is being measured by the algorithm, not created. And at least one justice appears to recognize the distinction.
