Whose Responsibility Is It Anyway? Chatbots and Legal Issues in Moffatt v Air Canada
On February 14, 2024, the British Columbia Civil Resolution Tribunal (which is the equivalent of Ontario’s Small Claims Court) issued its decision in Moffatt v Air Canada. In less than a week, the decision has made international news as a result of the involvement of Air Canada’s chatbot.
The plaintiff, Mr. Moffatt, booked a flight with Air Canada in November 2022 following the death of their grandmother. It was undisputed that Air Canada has a bereavement policy that offers certain accommodations, including reduced fares, to passengers travelling due to the death of an immediate family member. But an Air Canada chatbot gave Mr. Moffatt information about the bereavement policy that contradicted the policy itself. The dispute was about the application of the bereavement policy in light of the chatbot’s incorrect advice.
Facts
While researching flights, Mr. Moffatt used a chatbot on Air Canada’s website. There was no evidence at the trial about the nature of Air Canada’s chatbot (a point to which we will return), but the trial judge found that the parties implicitly agreed Mr. Moffatt was not chatting with an actual Air Canada employee. Mr. Moffatt asked the chatbot about bereavement fares. The chatbot’s response stated that a ticket could be submitted to Air Canada for a reduced bereavement rate after it had already been purchased or after travel had already occurred, so long as the request for reimbursement was submitted within 90 days of the date the ticket was issued. The response also included a link to the Air Canada webpage setting out its bereavement policy, which stated that the policy did not apply once travel was complete.
Mr. Moffatt relied on the information provided by the chatbot and booked flights between Vancouver and Toronto. They submitted their request for reimbursement after travel had occurred but well within the 90-day deadline specified by the chatbot. Mr. Moffatt sent Air Canada a screenshot from the chatbot that set out the 90-day window to request a reduced rate. Several days later, an Air Canada representative responded and admitted the chatbot had provided “misleading words” but pointed out that the chatbot had also linked to Air Canada’s webpage setting out the bereavement policy. The parties were unable to resolve their dispute and Mr. Moffatt brought a claim before the British Columbia Civil Resolution Tribunal.
Legal Issues
Mr. Moffatt was self-represented at trial. The trial judge concluded they were alleging negligent misrepresentation even though those words were not specifically used. To succeed in negligent misrepresentation, Mr. Moffatt needed to prove that Air Canada owed them a duty of care; that it made a representation that was untrue, inaccurate, or misleading; that Mr. Moffatt reasonably relied on that representation; and that this reliance resulted in damages.
Air Canada was represented by an employee at the trial; it is unclear whether that employee was a lawyer. Air Canada’s primary defence was that it could not be held liable for information provided by one of its agents, servants, or representatives, including a chatbot. It appears that no evidence was led at trial about the nature of the chatbot or who was responsible for designing or programming it. The trial judge found that this argument amounted to Air Canada claiming the chatbot was a separate legal entity responsible for its own actions. The trial judge rejected the argument on the basis that the chatbot was still part of Air Canada’s website, and that it should have been obvious to Air Canada that it was responsible for all the information on its website. Air Canada failed to explain why consumers should have understood that one part of its website was more accurate than its chatbot on the same website.
Air Canada also argued that it was not liable due to certain terms or conditions of its tariff, but did not introduce a copy of the tariff into evidence at trial. The trial judge found that Air Canada was a sophisticated litigant who should have known that if it wanted to rely on a contractual defence, it had to provide the contract.
The trial judge concluded that Mr. Moffatt made out all the elements of negligent misrepresentation. Air Canada owed them a duty of care given their commercial relationship of service provider and consumer. That duty of care required Air Canada to take reasonable care to ensure its representations were accurate and not misleading. The trial judge accepted that Mr. Moffatt relied on the chatbot to provide accurate information and that reliance was reasonable in the circumstances. The trial judge also accepted that Mr. Moffatt would not have travelled last-minute if they had known they would be paying the full fare, and awarded damages in the amount of the difference between the bereavement fare and the fare Mr. Moffatt actually paid.
Discussion
One wonders whether certain legal issues in this case would have been litigated with more, or different, evidence had the monetary stakes been higher than approximately $900.
Air Canada defended the case on the basis that it was not responsible for its chatbot, but failed to identify who was responsible for the chatbot if not itself, and did not lead any evidence supporting its position. It is unsurprising that the trial judge had no sympathy for this argument in the absence of any evidence about who was actually responsible for the chatbot. But let’s play out what might happen in a case where Air Canada decided to pursue more seriously its defence that it was not responsible for its chatbot.
Air Canada led no evidence at trial about how its chatbot was created and trained. Let’s assume Air Canada used an AI-based conversational bot (not a rule-based bot) that relied on machine learning algorithms to process customer inputs and generate responses. Air Canada may have used a third-party chatbot platform or framework as the base model for its chatbot. If the bot was AI-based rather than rule-based, Air Canada would then have needed to train or configure it on a large data set so the bot could understand user intent.
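To make that distinction concrete, here is a minimal, hypothetical sketch in Python. It is emphatically not Air Canada’s actual implementation (no evidence about the chatbot was before the tribunal); the policy text, the function names, and the stand-in ToyModel are all invented for illustration. The point is simply that a rule-based bot can only return vetted text verbatim, while an AI-based bot generates its own wording from whatever it was trained or configured on, which is the step where a misstatement like the 90-day refund advice can creep in.

```python
# Hypothetical sketch only: illustrates the difference between a rule-based
# bot (returns pre-approved policy text) and an AI-based bot (generates its
# own wording via a trained model). Not based on any real airline's system.

POLICY_TEXT = (
    "Bereavement fares: reduced fares may be available for travel due to the "
    "death of an immediate family member. The policy does not apply to "
    "requests made after travel has been completed."
)


def rule_based_bot(question: str) -> str:
    """Deterministic lookup: matches keywords and returns vetted text only."""
    if "bereavement" in question.lower():
        return POLICY_TEXT
    return "Sorry, I can only answer questions about bereavement fares."


def ai_based_bot(question: str, model) -> str:
    """Delegates the wording to a trained model; the answer depends entirely
    on how the model was trained or configured, not on a fixed script."""
    prompt = f"Customer question: {question}\nAnswer using the airline's policies."
    return model.generate(prompt)  # 'model' stands in for any LLM backend


class ToyModel:
    """Stand-in for a real language model, kept trivial so the sketch runs."""

    def generate(self, prompt: str) -> str:
        # A real model produces fluent but unverified text; this stub merely
        # shows that the output is generated rather than retrieved verbatim.
        return "You can apply for a bereavement fare within 90 days of ticketing."


if __name__ == "__main__":
    question = "Do you offer bereavement fares?"
    print("Rule-based:", rule_based_bot(question))
    print("AI-based:  ", ai_based_bot(question, ToyModel()))
```

In the rule-based design, responsibility for the text is easy to trace: someone at the company approved the exact words the bot can say. In the AI-based design, the words are a product of the platform, the training data, and the configuration, which is precisely why the question of who is responsible for the output becomes contestable.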
Assuming all of this to be true, Air Canada could have named the company from which it bought the chatbot platform or framework as a third party to the litigation and claimed contribution and indemnity for the chatbot’s inaccurate advice, rather than blaming a nameless third party for the chatbot’s actions. If that happened, presumably the plaintiff would seek to add that company as a defendant in the action, relying on the doctrine of discoverability (at least in Ontario) to overcome any limitation period issue. But would Mr. Moffatt have a cause of action in negligence or negligent misrepresentation against the chatbot platform company? They would not have one in contract, the other typical source of obligations between parties to disputes at common law. More specifically, does a chatbot platform/framework company owe a duty of care to the customers of its customers? That is a question Ontario courts have not yet decided. But as generative AI-based chatbots increase in use, it is only a matter of time before this issue arises.
A dispute between Air Canada and the chatbot platform company would also raise interesting legal issues. Presumably, the chatbot platform company would defend the third-party claim (and perhaps the litigation entirely) on the basis that Air Canada’s chatbot made the negligent misrepresentation due to improper or incomplete training by Air Canada, not due to any inherent deficiency in the platform. Air Canada would likely need to lead evidence of how the chatbot was trained. In turn, the chatbot platform company would likely need to lead evidence of how the chatbot processed that training. How would this evidence be led at trial? How would either side authenticate its AI-based evidence? These are also difficult legal questions that do not currently have a single answer.
I want to leave this blog post with one final observation. At the time of publication, it has been one week since Moffatt v Air Canada was decided. I first read the case on Friday, February 16, 2024, just as I was wrapping up for the long weekend. Over the ensuing Family Day long weekend, coverage of this case exploded in legal blogs and in national and international news outlets (see examples here, here, and here). The case was worth less than $1,000, yet it has garnered international attention worthy of a much more significant monetary claim, all because of the chatbot’s involvement. This case’s notoriety should serve as a warning sign for companies engaged in litigation involving issues related to generative AI. Publicity and reputational risk are always present in litigation due to the public nature of court filings and decisions. But the risk may be higher than usual in cases involving generative AI, given the careful attention being paid to this topic in circles far beyond the law.