<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Responsible AI Use for Courts Archives - Creative Learning Guild</title>
	<atom:link href="https://creativelearningguild.co.uk/tag/responsible-ai-use-for-courts/feed/" rel="self" type="application/rss+xml" />
	<link>https://creativelearningguild.co.uk/tag/responsible-ai-use-for-courts/</link>
	<description>The Creative Learning Guild—an NGO advancing access to education in arts and crafts. From workshops to accredited life-skills courses, each post explores real stories and impact-driven projects promoting lifelong learning.</description>
	<lastBuildDate>Sun, 12 Apr 2026 07:54:33 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://creativelearningguild.co.uk/wp-content/uploads/2025/07/cropped-creativelearningguild-couk-FAV-750x750-copy-32x32.png</url>
	<title>Responsible AI Use for Courts Archives - Creative Learning Guild</title>
	<link>https://creativelearningguild.co.uk/tag/responsible-ai-use-for-courts/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Responsible AI Use for Courts: How to Manage Hallucinations and Ensure Veracity</title>
		<link>https://creativelearningguild.co.uk/ai/responsible-ai-use-for-courts-how-to-manage-hallucinations-and-ensure-veracity/</link>
					<comments>https://creativelearningguild.co.uk/ai/responsible-ai-use-for-courts-how-to-manage-hallucinations-and-ensure-veracity/#respond</comments>
		
		<dc:creator><![CDATA[errica]]></dc:creator>
		<pubDate>Sun, 12 Apr 2026 07:54:32 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Responsible AI Use for Courts]]></category>
		<guid isPermaLink="false">https://creativelearningguild.co.uk/?p=8404</guid>

					<description><![CDATA[<p>A specific type of document that would not have existed three years ago is now appearing on judges&#8217; desks. At first glance, it appears to be competent legal work. The formatting is neat. The arguments are structured. The citations, which include case names, reporters, and page numbers, are present and precisely where they should be, [...]</p>
<p>The post <a href="https://creativelearningguild.co.uk/ai/responsible-ai-use-for-courts-how-to-manage-hallucinations-and-ensure-veracity/">Responsible AI Use for Courts: How to Manage Hallucinations and Ensure Veracity</a> appeared first on <a href="https://creativelearningguild.co.uk">Creative Learning Guild</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>A specific type of document that would not have existed three years ago is now appearing on judges&#8217; desks. At first glance, it appears to be competent legal work. The formatting is neat. The arguments are structured. The citations, which include case names, reporters, and page numbers, are present and precisely where they should be, lending the entire document credibility. Then a clerk tries to retrieve one of the cited cases. No such case exists. The citation is a confident, properly formatted fabrication, and the AI that produced it had no awareness that it was fabricating anything. That&#8217;s the issue, and it&#8217;s showing up in courtrooms all over the nation at a rate for which the legal system was completely unprepared.</p>



<p>AI hallucinations, the industry term for a language model producing content that sounds plausible and accurate but isn&#8217;t, have been a known drawback of these tools since they became widely accessible. The legal context, however, makes them especially consequential. In a court document, a hallucinated citation is not an abstract error. It raises due process problems, wastes judicial time, burdens opposing counsel, and, in some cases, results in sanctions for the lawyers or litigants who submitted it without verification. For the past two years, courts have been responding to the growing number of incidents in a somewhat uneven way: imposing new requirements, holding educational hearings, and delaying cases while everyone works out what the rules should be.</p>



<h2 class="wp-block-heading">Key Information: AI in Courts — Hallucinations and Responsible Use</h2>







<figure class="wp-block-image size-large"><img fetchpriority="high" decoding="async" width="1024" height="529" src="https://creativelearningguild.co.uk/wp-content/uploads/2026/04/Screenshot-2026-04-12-124207-1024x529.png" alt="Responsible AI Use for Courts: How to Manage Hallucinations and Ensure Veracity" class="wp-image-8405" title="Responsible AI Use for Courts: How to Manage Hallucinations and Ensure Veracity" srcset="https://creativelearningguild.co.uk/wp-content/uploads/2026/04/Screenshot-2026-04-12-124207-1024x529.png 1024w, https://creativelearningguild.co.uk/wp-content/uploads/2026/04/Screenshot-2026-04-12-124207-300x155.png 300w, https://creativelearningguild.co.uk/wp-content/uploads/2026/04/Screenshot-2026-04-12-124207-768x397.png 768w, https://creativelearningguild.co.uk/wp-content/uploads/2026/04/Screenshot-2026-04-12-124207-150x77.png 150w, https://creativelearningguild.co.uk/wp-content/uploads/2026/04/Screenshot-2026-04-12-124207-450x232.png 450w, https://creativelearningguild.co.uk/wp-content/uploads/2026/04/Screenshot-2026-04-12-124207.png 1175w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">Responsible AI Use for Courts: How to Manage Hallucinations and Ensure Veracity</figcaption></figure>



<p>The clearest shift in perspective has come from the people actually sitting on the bench. During a Thomson Reuters webinar on the reliability of AI in courts, Judge Debra McLaughlin described how the filing profile of self-represented litigants has changed. These documents used to be handwritten or simply typed, with few citations and little legal argument. They are now well-structured and heavily cited, which seems like a good thing until you check the citations. The documents look like a lawyer&#8217;s work; on closer inspection, the content frequently isn&#8217;t. Judges who used to spend their time reading arguments now devote substantial extra time to verifying each cited case, a kind of work that simply didn&#8217;t exist at this volume before.</p>



<p>In the Thomson Reuters report, U.S. Magistrate Judge Maritza Braswell put it plainly: the idea of AI hallucinations may be novel, but the underlying problem, presenting false information to a court as trustworthy, is as old as the profession itself. What AI has done is scale that problem and dress it in fluent language, making it harder to catch at first glance. Putting a fictitious case into a legal brief used to require human deceit. Now it requires only inadequate verification. That kind of failure calls for different institutional responses.</p>



<p>According to research released by LeanLaw in December 2025, legal AI tools hallucinate in up to 34% of their outputs. That figure is worth sitting with. It means roughly one in three AI-generated legal outputs contains an error: an incorrect citation, a misstated rule, or a perfectly fluent statement of an invented authority. Because legal arguments depend on the accuracy of every supporting authority, the acceptable error rate in legal work has always been essentially zero. An AI tool that fabricates a third of its citations isn&#8217;t a productivity tool. It&#8217;s a well-formatted liability generator.</p>
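

<p>A back-of-the-envelope calculation shows why near-zero is the only workable standard. Assume, purely for the sketch below, that each AI-drafted citation independently carries that 34% risk (independence is a simplification, and the rate is the study&#8217;s upper bound); a brief with ten citations is then all but guaranteed to contain at least one fabricated authority:</p>



<pre class="wp-block-code"><code># Back-of-the-envelope only: assumes each citation is independently wrong
# with probability 0.34, the upper-bound rate cited above. Independence is
# a simplification, but the direction of the result is the point.
p_bad = 0.34
citations = 10
p_at_least_one_bad = 1 - (1 - p_bad) ** citations
print(f"{p_at_least_one_bad:.1%}")  # roughly 98.4%</code></pre>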



<p>The growing consensus among judges and legal experts rests on a seemingly obvious idea: AI output should not be trusted until it has been independently verified. According to Thomson Reuters panelists, this is a shift from the earlier &#8220;trust but verify&#8221; framing, which still implied a baseline of reliability. Under the newer standard, verification is the primary act of professional responsibility, not a secondary step. Before an AI-generated document is used in a courtroom, every citation, statute, and rule must be checked against original sources. That requirement doesn&#8217;t eliminate the efficiency gains from using AI, but those gains evaporate quickly if verification is treated as optional.</p>
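

<p>What that verification looks like in practice varies by chambers, but the shape of the check is simple enough to sketch. The Python below is a minimal illustration, not any court&#8217;s actual tooling: the citation pattern and the <code>lookup</code> callable are hypothetical stand-ins for a real case-law query, and the one rule it encodes is the panelists&#8217;: a citation that cannot be retrieved from an authoritative source is treated as fabricated until a human confirms otherwise.</p>



<pre class="wp-block-code"><code>import re

# Matches simple reporter citations such as "410 U.S. 113" or "559 F.3d 1212".
# Real citation grammars are far richer; this pattern is illustrative only.
CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. ?Ct\.|F\.(?:2d|3d|4th))\s+\d{1,5}\b"
)

def extract_citations(brief_text):
    """Pull candidate reporter citations out of a filing's text."""
    return CITATION_PATTERN.findall(brief_text)

def verify_citations(brief_text, lookup):
    """Return every citation that cannot be found in an authoritative source.

    `lookup` stands in for a real case-law query (for example, a search
    against CourtListener or Westlaw) and should return None when no such
    case exists.
    """
    flagged = []
    for cite in extract_citations(brief_text):
        if lookup(cite) is None:
            flagged.append(cite)  # fabricated until a human confirms otherwise
    return flagged</code></pre>



<p>The design choice that matters is the default: an unmatched citation is flagged, not excused. Automation can narrow the list, but the final read of each flagged case belongs to a person.</p>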



<p>As all of this takes shape in courtrooms and legal conferences across the nation, there&#8217;s a sense that the legal profession is running a real-time experiment in how a high-stakes institution absorbs a potent but unreliable tool. The emerging answer makes sense: choose legal-specific tools over general-purpose ones; treat verification as non-negotiable; keep a human in the loop at every stage; and use AI as a thought partner rather than a decision-maker. It is also labor-intensive, and it places responsibility for AI&#8217;s mistakes on the practitioners who use it rather than on the tools that make them.</p>



<p>Some courts have begun requiring disclosure of AI use in filings, so that there is at least a record of when it was used. The District of Colorado has released guidelines recognizing that while AI can lower costs and increase productivity, the attorney remains responsible for verification. The National Center for State Courts has published a guide for practitioners. And in the Thomson Reuters article, Holland &amp; Knight cybersecurity lawyer Mark Francis stressed that the first step in using generative AI responsibly is understanding how it works, particularly that it is built to predict expected language, not to produce accurate information. It is designed to sound correct. Not to be correct.</p>



<p>That distinction is the key to everything. Language models are optimized for coherence and fluency, not for legal accuracy. Because they have been trained on vast volumes of legal text, they produce outputs that read like legal work. But reading like legal work and being trustworthy legal work are two different things. That gap is where sanctions happen, where cases get postponed, and where the legitimacy of AI in legal settings is either carefully managed or eroded by an accumulation of mistakes. The courts handling this best are the ones that treat AI verification not as an added burden but as the standard of care the profession has always required. The technology changed. The duty didn&#8217;t.</p>
<p>The post <a href="https://creativelearningguild.co.uk/ai/responsible-ai-use-for-courts-how-to-manage-hallucinations-and-ensure-veracity/">Responsible AI Use for Courts: How to Manage Hallucinations and Ensure Veracity</a> appeared first on <a href="https://creativelearningguild.co.uk">Creative Learning Guild</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://creativelearningguild.co.uk/ai/responsible-ai-use-for-courts-how-to-manage-hallucinations-and-ensure-veracity/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
