
Judicial Guidance on AI use in Litigation

In its recent decision in Guerin v O’Doherty [2026] IECA 48 (“Guerin”), the Irish Court of Appeal set out general principles for the use of AI in litigation. The court stressed that all parties to litigation (whether legally represented or not) have an obligation not to mislead the court by relying on fabricated authorities or unsupported propositions.  The judgment highlights some of the risks associated with the use of generative AI in conducting litigation.

Hallucination risk and impact

The defendant (a litigant in person) used AI to prepare their written submissions. As a result, the submissions referenced authorities that did not exist, owing to so-called "hallucinations". The court observed that issues of this kind are to be expected, given the nature of large language model ("LLM") AI systems.

The court expressed its view that the defendant's apparent failure to verify the authorities they had cited, or even to disclose that they had used AI in preparing the submissions, had resulted in wasted time and costs. The court highlighted the "unfair burden" this imposed on the plaintiff, as well as the potential for bringing the administration of justice into disrepute.

Obligation not to mislead the court

The court emphasised that “[p]arties, whether represented or not, have an obligation not to mislead the court, which includes the obligation not to rely upon or advance submissions based upon “fake” authorities or propositions which have no basis in law.”

The court acknowledged that lawyers are also subject to professional and ethical obligations (including in relation to the use of AI and other technologies) that do not apply to litigants in person.

While not referenced in this decision, it is important to note that specific guidance for solicitors has been issued by the Law Society on the use of AI, including its “Guidelines for the use of generative artificial intelligence” published in December 2025, which followed its Guidance Note “Mitigating AI Hallucinations”.

General principles for use of AI in litigation

The court set out the following principles of general application regarding the use of AI in litigation:

  • Duty not to mislead the court: although parties are entitled to use AI to assist with research, this must be done responsibly and the parties must not (even inadvertently) mislead the court by advancing propositions on the basis of hallucinated authorities;
  • Inform parties: any party that has used AI in the preparation of its material should expressly inform the other parties to the litigation that they have done so;
  • Responsibility of the parties: self-represented parties are responsible for the written or oral submissions in the case in the same way as lawyers representing a party are;
  • Independent verification: any party using AI to assist with research must independently verify that their submissions are accurate; and
  • No citation of unverified authorities: authorities should not be cited by a party unless they have been verified as genuine and, at least arguably, as authority for the proposition being advanced.

Other Irish case law

The decision in Guerin is the first time that the Irish courts have set out detailed guidance on the use of AI in litigation. It follows a number of other cases in which the Irish courts have had to deal with inaccurate results produced by AI.

In Von Geitz v Kelly & Ors and Von Geitz v Robertson & Ors [2026] IECA 29 the Court of Appeal stressed that it is the responsibility of every litigant to check the results of any AI tool used to “ensure that in the short term that his opponents were not sent on a wild goose chase and ultimately that the Court was not presented with rubbish”.

In Malone & McEvoy v Laois County Council & Ors [2025] IEHC 345, a quotation in written submissions alleged to be from a CJEU decision could not be found and when queried by the court no response was given as to who had generated the original submission.  The court noted that “appreciable judicial time was wasted on the issue – not least trying to find the source of the quotation” and stressed the “vital importance of precision and accuracy in written submissions. That duty lies on lay litigants as much as on lawyers”. Given there was no impact on the outcome and the court’s acceptance that there was no intention to mislead, no sanction followed.

The use of AI also appeared to be a factor in John Coulston & Ors v Elliott [2024] IEHC 697, following the inclusion of an inaccurate submission by the defendants, who acted as litigants in person for most of the case. The court observed that "the general public should be warned against the use of generative AI devices and programs in matters of law". The plaintiffs were permitted to provide a one-page rebuttal to the inaccurate submission (which meant no extra costs would be incurred) and the court ultimately rejected the initial submission.

Consequences

In Guerin, the court emphasised that it did not consider that the defendant had intentionally misled the court, or that the defendant was even aware of the presence of hallucinations in their written submissions. Although the court noted that it would have preferred the defendant to have notified the plaintiff's solicitors of their use of AI in producing the written submissions, no adverse conclusions were ultimately drawn against the defendant, on the basis that, at the time, no guidance was available to litigants in person on the use of AI in legal proceedings.

In the future, Irish courts are unlikely to be as sympathetic if similar issues arise. The court in Guerin warned that "[p]arties should be aware that the court has a variety of sanctions open to it where parties use AI in breach of these guidelines and where their use has the potential to mislead the court". Such sanctions may include adverse costs consequences for the parties responsible.

Conclusion

AI is becoming increasingly accessible and offers many advantages and efficiencies across a broad range of tasks. Those using it need to proceed cautiously, however, as even the most advanced tools remain prone to "hallucinations".

Litigators should take special note of the warnings issued by the Court of Appeal in Guerin.

Contact Us

For more information in relation to the use of AI in disputes, please reach out to our Disputes and Investigations team or your usual Matheson contact.

© 2026 Matheson LLP | All Rights Reserved