British Columbia Tribunal Finds Representation Made by AI Chatbot Binding on Company

As AI becomes more accessible to corporate entities, many seek to improve efficiency by automating processes where possible. A very common example of this is the use of AI "chatbots": programs on a website that allow visitors to ask questions of an AI interface and receive help as though they were speaking with a human representative.

The efficiency gained by using chatbots is not without potential pitfalls, as the British Columbia Civil Resolution Tribunal's February 14, 2024 decision in Moffatt v. Air Canada demonstrates. There, a customer of the respondent airline brought a claim for negligent misrepresentation after being told that information he had relied on when buying a ticket was incorrect.

Mr. Jake Moffatt was researching flights following the death of his grandmother on November 11, 2022. A chatbot on Air Canada's website told him that bereavement fares, special discounts given to customers who are travelling due to the death of a loved one, could be applied retroactively, up to 90 days from the purchase of the ticket, and provided a link to the webpage discussing this topic. Relying on this information, Mr. Moffatt purchased a one-way ticket between Toronto and Vancouver on November 11, 2022, and then, on November 16, another one-way ticket, this time back to Toronto from Vancouver, at the regular fare, expecting to apply retroactively for the bereavement fare after his return. The following day, he applied for the bereavement fares on a retroactive basis. From December 2022 through to February 2023, Mr. Moffatt was in contact with Air Canada, attempting to obtain a partial refund reflecting the bereavement fare.

Air Canada told him that the chatbot had provided "misleading words" regarding the application of bereavement fares, but pointed out that the webpage it had linked to contained the correct policy. Air Canada ultimately refused to provide the refund.

The tribunal interpreted Mr. Moffatt's claim against Air Canada as one of negligent misrepresentation. Because Mr. Moffatt was a customer, the airline owed him a duty of care. By allowing a chatbot to negligently make an incorrect representation to Mr. Moffatt, the company breached that duty. Mr. Moffatt then relied upon the negligent misrepresentation, to his financial detriment.

Air Canada argued that it was not responsible for representations made by its agents, representatives or servants, including the chatbot, and also suggested that the chatbot was its own entity. Both submissions were rejected by the tribunal, which stated: "It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot." The tribunal also took issue with Air Canada's suggestion that customers bear the responsibility of ensuring that information found on the company's website is accurate. Finally, Air Canada, while arguing that it was "not liable due to certain terms or conditions of its tariff", failed to provide the tribunal with a copy of the tariff.

The tribunal awarded Mr. Moffatt $650.88 in damages, representing the difference between the bereavement fare and the full price he paid, plus pre-judgment interest and tribunal fees.

Moffatt, while only a tribunal decision, should be of concern to any organization that uses AI, including chatbots, to interact with the public, for example by providing information on a given topic. Courts may treat statements made by chatbots the same way as representations made by individuals speaking on behalf of the organization, which the public can be expected to rely on. Charities and not-for-profits should therefore be mindful that the use of chatbots carries risks, including the production of inaccurate information that can mislead the public.

New EU Artificial Intelligence Law Will Impact Canadian Organizations Serving EU Residents

The European Union ("EU") has become the first governing body to adopt comprehensive artificial intelligence ("AI") regulations. On March 13, 2024, the EU Parliament adopted the Artificial Intelligence Act (the "Act"), which aims to ensure "safety and compliance with fundamental rights, while boosting innovation." The Act, like the EU's General Data Protection Regulation, has an extraterritorial component, which applies to certain Canadian organizations. In particular, Canadian organizations that use AI components, such as chatbots, in providing services online to EU consumers will be caught under the Act.

The new rules, which establish obligations for AI based on its potential risks and level of impact, include a ban on certain AI applications that pose threats to citizens' rights. These banned applications encompass biometric categorization systems based on sensitive characteristics, untargeted scraping of facial images for facial recognition databases, emotion recognition in workplaces and schools, social scoring, predictive policing based solely on profiling individuals, and AI that manipulates human behavior or exploits people's vulnerabilities.

High-risk AI systems, which pose significant potential harm to health, safety, fundamental rights, the environment, democracy, and the rule of law, are subject to clear obligations. These include risk assessment and reduction, maintenance of use logs, transparency, accuracy, and human oversight. Potential victims may submit complaints regarding AI systems to consumer protection bodies, to be set up locally by each member state, and receive explanations for decisions affecting their rights.

Transparency requirements are outlined for general-purpose AI ("GPAI") systems, including compliance with EU copyright law and the publication of detailed summaries of training content. Stringent measures are imposed on powerful GPAI models to mitigate systemic risks and ensure incident reporting. Moreover, deepfakes (artificial or manipulated images, audio, or video content) must be clearly labeled.

As indicated above, the Act will affect organizations outside the EU that provide online services containing AI elements and accessible to EU consumers. Organizations that use AI in their processes are referred to as "deployers" under the Act and have a number of obligations. For example, deployers of "high-risk" AI systems will be required to ensure that human oversight is present in any decision-making process influenced by AI. Deployers must also ensure that the AI system is used in accordance with the system provider's instructions and that there is general oversight of its application. Fundamental rights impact assessments will also be required of certain deployers of high-risk AI systems.

Canadian charities and not-for-profits that use AI and have donors and/or customers in the EU should be aware of the new regulations. Penalties for noncompliance are harsh: up to 35 million Euros or 7% of total worldwide annual turnover, whichever is higher. While many provisions will take effect two years after the Act comes into force, the provisions on banned AI practices and their associated penalties will take effect six months after the Act comes into force, and the general-purpose AI provisions will take effect twelve months after the Act comes into force.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

Mr Adriel Clayton
Carters Professional Corporation
211 Broadway
Orangeville
Ontario
L9W 1K4
CANADA
URL: www.carters.ca
