In a recent Workplace Relations Commission (the “WRC”) decision, Fernando Oliveira v Ryanair DAC (the “Ryanair case”), the WRC characterised the employee claimant’s submission, drafted with the assistance of Artificial Intelligence (“AI”), as “rife with citations that were not relevant, mis-quoted and in many instances, non-existent.”
The use of Generative AI tools in employment disputes is an increasing and potentially worrying trend. The recently published WRC Guidance on the use of AI tools to prepare material for submission to the WRC (the “WRC Guidance”) sheds some much-needed light on this new approach to employment law litigation in Ireland.
In this article as part of our AI series, we consider the implications of the use of AI in employment disputes.
Litigation Risk
The Ryanair case serves as a cautionary tale of the dangers of utilising AI in drafting legal submissions.
So what happened? A Ryanair cabin crew member brought claims against their employer, Ryanair, including allegations of race and family status discrimination, harassment and victimisation.
Ryanair rejected the allegations as frivolous, vexatious and bound to fail, and raised concerns that the employee’s written submissions had been generated by AI. Ryanair pointed out that the case citations (i) did not appear to give the outcome relied upon; (ii) contained phantom determinations; and (iii) incorrectly referenced awards of compensation in unsuccessful claims.
The employee argued that Ryanair’s insinuation that his submissions “may have been generated with the assistance of artificial intelligence” was baseless, unprofessional, and designed to distract from the merits of the case. However, he later acknowledged that he “may” have used AI.
The employee’s claims were ultimately dismissed as not well-founded, for reasons unrelated to his use of AI. The Adjudication Officer (the “AO”) stated that she was not particularly concerned with whether the employee had used AI or not. However, the AO was clear that parties making legal submissions have an obligation to ensure that they are relevant, accurate and not misleading. The “phantom citations” the employee sought to rely upon were described as egregious and an abuse of process.
The WRC Guidance acknowledges that AI tools can be helpful to organise submissions, check grammar and punctuation, and generate preliminary drafts. Nonetheless, the WRC reiterates its position that such tools have a tendency to produce inaccurate or misleading information.
The use of AI in legal proceedings is far from confined to the WRC. Indeed in recent times, there have been several instances in which Irish court judges have had to grapple with submissions which have been drafted using Generative AI tools.
Of particular concern in this context is the risk of inadvertent breaches of data protection. In a recent Irish High Court appeal, the Circuit Court had ordered that all parties remain anonymous. However, the defendant successfully applied to have a 43-page report, which he had prepared with the assistance of AI and without professional legal oversight, read by the court. The Court commented that:
“If material has been put into an AI tool, which you concede has been done, that is a breach because anything that has been put online can be made publicly available by an AI tool.”
It is evident from these comments that the judiciary will view the disclosure of information to an AI tool as a breach of confidentiality. For legal practitioners taking such an approach, the risk of breaching legal privilege is self-evident. The WRC Guidance also notes that the use of free online AI tools poses a risk in terms of their retention of personal or commercially sensitive data.
In another recent decision, Ireland’s High Court made strong comments regarding the possibility that the lay litigant defendant had used AI in drafting his submissions. In the Court’s view:
“If they have used a generative AI program, they have been fooled. Such programs often sound persuasive but can be fatally flawed. Sadly, in this case the argument was indeed fatally flawed. The general public should be warned against the use of generative AI devices and programs in matters of law.”
Further, it was reported this month that a company director used artificial intelligence to “churn out reams” of legal papers, resulting in the Court criticising and striking out several motions filed by the director as “an abuse of process”.
Given the considerable costs and implications at stake in High Court claims, these comments are significant alarm bells that should be heeded by any litigant or legal professional considering such an approach.
Employers Beware
Many, if not most, employers are now using AI to streamline workstreams across their businesses. In the employment sphere, AI has become a common tool in the areas of recruitment and performance analysis.
Given the complexity and cost of employment investigations and HR procedures, using AI to improve efficiency is attractive to employers, although inevitably this comes with inherent risks.
As discussed in further detail here, biased data may make its way into AI systems, with potentially catastrophic results for employers that use such tools. Any employee who believes they have been unlawfully discriminated against on one of the protected equality grounds may challenge any outcome which they deem discriminatory. Although we have yet to see such a case in Ireland, employers in the UK and US have faced successful claims by employees on this basis.
Further, the very nature of fact-gathering investigations is that they will necessarily involve sensitive and often personal data. In recent times, the Data Protection Commission has clamped down on several multinational tech companies for using such data to train their AI models. Inevitably, the input of any such data by an employer into an AI tool comes with inherent risk and potentially exposes any business taking such an approach to large fines.
The EU AI Act came into force in Ireland in August 2024 and is being implemented on a phased basis. The Act applies a risk-based regulatory system to the use of AI applications in the workplace. Under the Act, “AI systems intended to be used to make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships” are marked “high risk”. As such, they will need to comply with strict regulatory requirements which are due to take effect from August 2026. However, this timeline may be delayed by up to a maximum of 16 months pursuant to the draft EU Digital Omnibus Regulations (see further information in our article here).
An entity that deploys a high risk AI system in the EU will be obliged to:
- ensure proper use according to instructions;
- assign competent human oversight to natural persons with competence, training and authority;
- maintain logs for a period of six months;
- notify workers and representatives before using such a tool;
- notify individuals that have been subject to decisions made by such a tool;
- carry out a data protection impact assessment; and
- submit annual reports to surveillance and data protection authorities.
Failure to comply with these obligations can lead to fines of up to €15m or 3% of worldwide annual turnover.
Automated Decision Making
Article 22(1) GDPR prohibits organisations from making solely automated decisions which have a legal or other similar significant effect on individuals.
Article 22(2) does, however, contain certain carve-outs from this general prohibition, permitting automated decision making where: (i) the individual explicitly consents to it; (ii) it is necessary for the performance of a contract between the data subject and the data controller; or (iii) it is authorised by EU or Member State law.
The Court of Justice of the European Union (the “CJEU”) was charged with considering the definition of automated decision making in Case C-634/21 SCHUFA. In that case, the CJEU had to consider whether SCHUFA, a German credit rating institution, itself engaged in automated decision making by establishing credit scores and providing these to banks as part of their review of loan applications.
In its judgment, the CJEU held that three conditions must be met for an organisation to be considered as engaged in automated decision making under Article 22: (i) a decision must be made; (ii) it must be based solely on automated processing, including profiling; and (iii) it must produce legal effects concerning the individual, or otherwise produce an effect that is equivalent or similarly significant in its impact on the individual. The CJEU found that all of these conditions were met.
Given the somewhat restrictive interpretation taken by the CJEU in SCHUFA, employers that engage in automated decision making take on significant GDPR compliance risk. As set out above, where a company is deemed to have engaged in automated decision making, it will need to be able to justify this on one of the limited grounds set out in Article 22(2) GDPR.
In terms of harnessing AI to make decisions in employment disputes, given the legislative provisions and known risks set out above, it is highly unlikely that any such process would pass muster in terms of fair procedures.
Conclusion
The use of AI to streamline workforce management has undoubtedly changed the roles of many HR professionals. Given the increased efficiencies experienced in these spheres, it is understandable that employers are keen to harness the potential of AI to streamline internal processes, including dispute resolution processes.
As set out above, lay litigants have already fallen foul of AI’s failings. Legal professionals grappling with their use of AI must sit up and take note, given the risks AI poses to legal privilege and data protection obligations. Further, the accuracy and impartiality of the AI tools currently in use have not (yet!) reached a level that would make them the arbiter of employment disputes.
The WRC Guidance takes a pragmatic view and provides some best practice tips to be borne in mind when using AI. It also provides for the optional disclosure of AI use in order to increase transparency. Nonetheless, the Guidance is clear in setting out the risks associated with AI use and specifically states that an AO may refuse to admit material in order to ensure that the hearing progresses expeditiously.
Given the protections afforded to individuals under the GDPR, the judgment in SCHUFA, and the strict regulatory provisions set to be implemented next year under the new AI Act, the use of AI in employment disputes is a step not to be taken lightly.
One day, AI may play a more front-and-centre role in the resolution of employment disputes. However, for this to be so, both the technology and the relevant legislative provisions must progress significantly to ensure that employers and employees alike have trust in employment processes and the protections which these afford.
Matheson’s Employment, Pensions and Benefits Group is available to guide you through navigating employment law claims in Ireland, so please do reach out to our team or your usual Matheson contact.
This article was co-authored by Employment, Pensions and Benefits partners, Ailbhe Dennehy and Alice Duffy, associate Naomi Douglas and trainee, Liam McCarthy.