June 24, 2024

AI Here, AI There, AI Everywhere: Practical Challenges Litigating in an AI World

In the final instalment of our AI in the Courtroom series, we explore practical challenges that may arise when litigating in an AI world within the current framework of the Rules of Civil Procedure, Practice Directions, and common law. While the law is not entirely unequipped to deal with these challenges, evolution in the Rules and common law will likely be necessary as AI becomes more commonly used by various participants in the litigation process.

1. Risk of AI-generated Evidence

Civil litigators habitually challenge the admissibility of evidence on the basis of its relevance or reliability. Authenticity—whether a document or other piece of evidence actually is what it purports to be—is less frequently the basis of an objection. It is uncommon in civil litigation to dispute that an email was sent by the person in the “from” line, received by the person in the “to” line, at the time indicated on the message. But where there is concern that evidence was generated by AI and is not “real”, the way to challenge the admissibility of that evidence is through objecting to its authenticity.

The problem of deep fakes will arise in litigation when there is a dispute between the parties about whether a specific piece of evidence, such as a text message, voice note, or video, is real or the product of AI. A dispute about deep fakes is at its core a dispute about authenticity. While expert evidence might be able to resolve this question, not every case will involve experts. And unlike forgery, which previously required some level of skill to do well, a deep fake can now be created by anyone with a computer and access to the internet. How, then, can Courts address this problem?

The threshold for authentication of evidence is low. In R v CB, the Court of Appeal for Ontario held that where there is a dispute about whether evidence has been tampered with, there must be an “air of reality” to the claim of tampering. In any event, given the low threshold for authentication, the issue of tampering would likely go to the weight of the evidence rather than its admissibility.

These principles may have worked well in an era before generative AI made it easy to fake a text message conversation or even a voice recording. With the advent of generative AI, however, there is a risk that evidence will be admitted even where there is a serious dispute about whether it was tampered with or created by AI, because the threshold for authenticity is so low. Significant trial time may then be wasted adducing evidence about the alleged tampering or “deep fake” nature of the evidence, only for that evidence to go to weight rather than the tampered-with or deep-fake document being kept out of the court record in the first place.

The risks and problems posed by deep fakes in the era of generative AI are real. But wariness of deep fakes poses another, equally challenging problem for litigators: what happens when a party knows a document is real, but alleges it is a deep fake in an effort to discredit that evidence or the other party? The only remedy to this problem currently available to Ontario courts is a heightened costs award. In Jurrius v Rassuli, a family law dispute, the father alleged that a photograph of a replica gun strapped to the child’s crib, included in the applicant mother’s materials, was “doctored” or “photoshopped”. On cross-examination at trial, he admitted that he had in fact strapped the replica gun to the child’s crib and knew the photograph in the mother’s materials was authentic. The father’s misrepresentation about the photograph was criticized in strong terms and was an important basis for the court’s award of full costs. But a costs award made after the litigation is over is small comfort, given the seriousness of the allegation that evidence is fake (whether a deep fake or otherwise).

2. Expert Evidence Dependent on AI

Experts play a critical role in complex cases before the courts, but they can only play that role well if they are properly qualified and abide by their duties to the court.

As AI tools proliferate, courts will have to grapple with whether expert opinions that rely on AI or were generated by AI should be admitted as evidence. At the very least, the usual rules of evidence would apply. The four criteria for the admissibility of expert evidence are:

(1) relevance;

(2) necessity in assisting the trier of fact;

(3) the absence of any exclusionary rule; and

(4) proper qualification (R v Mohan).

These criteria give the Court significant discretion to, for example, exclude expert evidence on the basis that a generative AI model, rather than the expert, came to the “opinion” reported by the expert. Such “opinions” would – arguably – not have come from the qualified expert and would not be admissible. By contrast, an expert using an AI tool to assist in their analysis would raise fewer concerns.

While Canadian courts have started to publish practice directions that address the use of AI by counsel and the Court, none have – to the authors’ knowledge – yet addressed the use of AI by expert witnesses. For example, the Federal Court’s Notice states: “This Notice requires counsel, parties, and interveners in legal proceedings at the Federal Court to make a Declaration for AI-generated content (the “Declaration”), and to consider certain principles (the “Principles”) when using AI to prepare documentation filed with the Court.” There is no mention of experts.

It would therefore appear that generative AI and other AI tools can be used by experts to generate expert reports and inform their opinions without disclosure being required. Arguably, existing rules and codes of conduct may apply to prevent such situations in certain circumstances, for example, by requiring an expert to disclose the methodology used for any testing he or she conducted. But these kinds of requirements do not explicitly apply to AI and are open to interpretation.

Given the centrality of expert opinions to certain kinds of cases, addressing the use of AI by experts will be critical to ensuring the fairness and transparency of the litigation process.

3. Use of AI by Decision-Makers

Judges and administrative decision-makers will certainly not be immune from the lure of using AI in generating decisions. Nor should they be, so long as safeguards are in place to protect against bias and ensure procedural fairness. Court systems in Ontario and across Canada are in crisis, and AI may be part of a solution to that crisis. This is nothing new: the legal profession has (slowly, begrudgingly) embraced technology over the last few decades – from word processing, to legal research databases, to e-discovery tools – resulting in great gains in efficiency.

Thus far, courts are taking it slowly with AI. For example, the Federal Court has addressed this issue in its “Interim Principles and Guidelines on the Court’s Use of Artificial Intelligence”, stating that it:

“will not use AI, and more specifically automated decision-making tools, in making its judgments and orders, without first engaging in public consultation.”

This is a reasonable starting stance. The public needs to be confident that its judicial and quasi-judicial decision-makers are not delegating their responsibilities away. One of our colleagues has explored the impact of AI on administrative law and procedural rights more fully here.

Decisions from administrative decision-makers have already started to be challenged on the basis that the decision-maker used an AI tool. For example, in Haghshenas v Canada (Citizenship and Immigration), the applicant argued that a decision made by an immigration officer with respect to a work permit was unreasonable and not procedurally fair as it was reached with the help of an AI system called Chinook.

We pause here to say that we question whether the Court should have accepted that Chinook was properly characterised as an AI tool. In fact, Immigration, Refugees and Citizenship Canada’s statement on “Chinook Development and Implementation in Decision-Making” states that:

“Chinook is a tool designed to simplify the visual representation of a client’s information. It does not utilize artificial intelligence (AI), nor advanced analytics for decision-making, and there are no built-in decision-making algorithms.”

Regardless, the Court proceeded as if an AI tool had in fact been used in the decision-making process.

In dismissing the application, the Court determined that the decision was made by the officer, not by Chinook, though the officer did consider input compiled by the AI. The Court highlighted that the use of AI was irrelevant to the judicial review application because the officer ultimately made the administrative decision. The Court concluded on this issue with:

“Whether a decision is reasonable or unreasonable will determine if it is upheld or set aside, whether or not artificial intelligence was used. To hold otherwise would elevate process over substance.”

While this is an attractive framing, it fails to acknowledge that reasonableness review may be hampered by the use of AI tools, for example, if their results are not explainable (see our previous blog, which describes explainable versus non-explainable AI and why judges need to understand the difference). As Courts’ understanding of AI becomes more sophisticated, we expect to see more detailed and nuanced guidance on when the use of AI in decision-making is acceptable and when it is not.

Takeaways

Whether addressing the possibility of deep fake evidence, AI-generated expert opinions, or robot decision-makers, the cases described above tell us that counsel and the Courts must remain vigilant in ensuring that no part of the litigation ecosystem abdicates its responsibilities to AI, even if AI is here, there, and everywhere.

This is the final instalment of our 5-Part Series on AI in the Courtroom, which includes the blogs listed below.