There is an almost poignant quality to the Alaska court system's announcement of its AI aspirations. The concept was fairly simple: build a chatbot that could guide bereaved residents through the probate process, explain which paperwork to file, and spare them the trouble of navigating a legal system that, to most people, seems designed to be confusing. The estimate was three months: a tidy project. Then came reality.
Instead, the Alaska Court System spent more than fifteen months on one of the nation's most transparent government AI development experiments. The Alaska Virtual Assistant, or AVA as the team calls it, became an inadvertent mirror for everything the AI industry would rather not talk about: hallucinations, false confidence, and the wide gap between what a system can do in a demo and what it reliably does when someone whose father just died is typing questions at midnight.
| Item | Detail |
| --- | --- |
| Project Name | Alaska Virtual Assistant (AVA) |
| Developed By | Alaska Court System, in partnership with LawDroid (Tom Martin) |
| Purpose | Help self-represented residents navigate probate forms and court procedures |
| Original Timeline | 3 months (planned) |
| Actual Duration | Over 15 months and ongoing as of early 2026 |
| Key Figure | Stacey Marz, Administrative Director, Alaska Court System |
| Supporting Organization | National Center for State Courts (NCSC) |
| AI Models Used | OpenAI GPT family (with retrieval-augmented generation) |
| Testing Scope | Reduced from 91 questions to 16 refined test questions |
| Estimated Query Cost | Approx. 11 cents per 20 queries (model inference only) |
| Planned Launch | Late January 2026 (tentative) |
| Notable Issue | Hallucinations, including references to a non-existent Alaska law school |
| Government AI Adoption Rate | Less than 6% of local government practitioners prioritize AI for service delivery (Deloitte) |
The hallucination problem is where the story turns genuinely unsettling, and a little absurd. In one test, when asked where users could find legal assistance, AVA suggested contacting a network of law school alumni. The catch: Alaska has no law school. The model presumably pulled the idea from somewhere in its broader training data, presenting fiction as fact with the breezy confidence AI systems have an almost theatrical talent for.
This was not a minor error. A fabricated resource is precisely the type of error that causes actual harm in probate cases, where a misinterpreted instruction could result in a family losing property or missing a crucial filing deadline.

The Alaska Court System's administrative director, Stacey Marz, put it plainly: if someone acts on false information from AVA, they can suffer real harm. That statement is worth sitting with. It directly contradicts the "ship fast, iterate later" mentality that has defined tech product development for more than twenty years. Courts cannot quietly patch a bad update after the damage is done, and a family that filed the wrong probate form on the strength of a confident AI answer cannot roll it back either.
AVA's technical infrastructure was built by Tom Martin, the lawyer and AI developer behind LawDroid, who spent considerable effort reining in the chatbot's personality. Personality design for a legal assistant may sound odd, but it turns out to matter. In the dry words of one consultant, early versions were overly sentimental, offering condolences to users who were already worn out by everyone in their lives expressing sympathy for their loss.
The AI empathy was stripped out. What remained was a system that had to be precise, reliable, and modest enough to stay within the Alaska Court System's own probate documents rather than wander off into the wider internet.
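That constraint is what the table above calls retrieval-augmented generation: fetch the relevant court documents first, then instruct the model to answer only from them. Here is a minimal sketch of the pattern, assuming a hypothetical corpus and the OpenAI Python client; the actual AVA pipeline is LawDroid's and has not been published.

```python
# Minimal retrieval-augmented generation sketch (illustrative only).
# The corpus entries, system prompt, and model choice are all
# assumptions, not details of the real AVA system.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical corpus: a real system would index chunked probate
# forms and instructions, not three hard-coded strings.
CORPUS = {
    "probate-overview": "Probate is the court process for settling an estate ...",
    "small-estate-affidavit": "Estates under a statutory limit may qualify ...",
    "filing-checklist": "To open an informal probate, file the petition form ...",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    """Crude keyword-overlap retrieval; production systems use embeddings."""
    q_words = set(question.lower().split())
    scored = sorted(
        CORPUS.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def answer(question: str) -> str:
    excerpts = "\n---\n".join(retrieve(question))
    # The system prompt is the guardrail: answer only from the
    # provided excerpts, and admit ignorance rather than guess.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": (
                "Answer ONLY from the court documents below. If they do not "
                "contain the answer, say so and refer the user to the court's "
                "self-help center.\n\n" + excerpts)},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer("What form do I file to open an informal probate?"))
```

The system prompt is doing the real work here: a model told to admit ignorance rather than improvise is far less likely to invent a law school alumni network.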
The irony is hard to ignore. The broader AI industry has spent years promising that these tools would democratize access to services traditionally gated by cost and complexity, legal information and advice chief among them. Alaska made a sincere effort to fulfill that promise for a population that already faces logistical and geographic challenges most states do not.
And the results so far suggest that building a trustworthy, accountable version of that promise takes far more patience than the hype cycle accounts for.
The testing stage surfaced its own problems. The team initially built a bank of 91 test questions covering the range of queries users might ask. That list was cut to 16 targeted questions because the full set proved too time-consuming to evaluate properly; given the stakes, each response required human legal review.
That is not necessarily a retreat; focused regression testing can be rigorous. But the change does highlight a truth about government AI development: the last mile of accountability is always the most expensive, and it is rarely written into the initial budget or schedule.
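In practice, a vetted question bank like that becomes a regression suite: re-run after every prompt, model, or document change, with the outputs routed to human reviewers rather than auto-graded. Below is a rough sketch of that loop; the file names, CSV columns, and the `ask` hook are hypothetical, since the real evaluation relied on human legal review rather than any published harness.

```python
# Illustrative regression harness for a chatbot question bank.
# File names, column headers, and the "ask" callable are assumptions.
import csv
from typing import Callable

def run_suite(questions_path: str, out_path: str,
              ask: Callable[[str], str]) -> None:
    """Re-run every vetted question and write answers out for human review."""
    with open(questions_path, newline="") as f:
        questions = [row["question"] for row in csv.DictReader(f)]
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["question", "answer", "reviewer", "verdict", "notes"]
        )
        writer.writeheader()
        for q in questions:
            writer.writerow({
                "question": q,
                "answer": ask(q),
                # Legal reviewers fill these in by hand; the harness
                # automates generation, never the judgment.
                "reviewer": "", "verdict": "", "notes": "",
            })

# Usage (hypothetical): reuse the answer() function from the earlier sketch.
# run_suite("test_questions.csv", "review_2026-01.csv", ask=answer)
```

The deliberate gap is the empty verdict column: generating answers is cheap to automate, but the sign-off that makes a legal answer trustworthy stays manual, which is exactly why 91 questions shrank to 16.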
AVA was ultimately slated to launch in late January 2026, though planned launches in government technology tend to stay conditional. For now, the project may be worth more than a success story would have been: it is a thorough, firsthand account of what it actually takes to put AI tools into public services where the margin for error is thin and the users are real people in hard circumstances. The hype said three months. The work said otherwise. That is not failure; it is simply what the truth looks like.
