British Columbia Tribunal Finds Representation Made by AI Chatbot Binding on Company
As AI becomes more accessible to corporate entities, many seek to optimize efficiency by automating processes where possible. A common example is the use of AI "chatbots": programs embedded in a website that allow visitors to ask questions of an AI interface and receive help as though they were speaking with a human representative.
The efficiency gained by the use of chatbots is not without potential pitfalls, as the recent decision of the British Columbia Civil Resolution Tribunal in Moffatt v. Air Canada, 2024 BCCRT 149, demonstrates.
Mr. Moffatt, booking travel following the death of a family member, asked Air Canada's website chatbot about the airline's bereavement fares. The chatbot advised that he could book a full-price ticket and apply for the reduced bereavement rate retroactively within 90 days. That advice was inaccurate: Air Canada's actual policy, stated elsewhere on its website, did not permit retroactive applications, and the airline refused the refund.
The tribunal interpreted the chatbot's statement as a representation made by Air Canada itself, rejecting the airline's argument that the chatbot was a separate entity responsible for its own actions, and found that Air Canada had committed negligent misrepresentation.
The tribunal calculated the difference between the bereavement fare rate and the full price Mr. Moffatt had paid and awarded him damages in that amount.
Moffatt, while only a tribunal decision, should be of concern to any organization that uses AI, including chatbots, to interact with the public, such as by providing information on a topic. Courts may treat statements made by chatbots the same way as representations made by individuals speaking on behalf of the organization, on which the public can reasonably expect to rely. Charities and not-for-profits should therefore be mindful that the use of chatbots carries risks, including the production of inaccurate information that can mislead the public.
New EU Artificial Intelligence Law Will Impact Canadian Organizations Serving EU Residents
The European Parliament approved the EU Artificial Intelligence Act (the "Act") on March 13, 2024. The Act is the world's first comprehensive legal framework governing AI, and its reach is not limited to organizations within the EU: it will also apply to providers and deployers of AI systems located outside the EU where those systems are placed on the EU market or their output is used within the EU.
The new rules, which establish obligations for AI based on its potential risks and level of impact, include a ban on certain AI applications that pose threats to citizens' rights. These banned applications encompass biometric categorization systems based on sensitive characteristics, untargeted scraping of facial images for facial recognition databases, emotion recognition in workplaces and schools, social scoring, predictive policing solely based on profiling individuals, and AI that manipulates human behavior or exploits vulnerabilities.
High-risk AI systems, which pose significant potential harm to health, safety, fundamental rights, the environment, democracy, or the rule of law, are subject to clear obligations. These include risk assessment and mitigation, maintenance of use logs, transparency, accuracy, and human oversight. Potential victims may submit complaints regarding AI systems to the national authorities designated by each member state and receive explanations for decisions affecting their rights.
Transparency requirements are outlined for general-purpose AI ("GPAI") systems, including compliance with EU copyright law and the publication of detailed summaries of training content. More stringent measures are imposed on powerful GPAI models to mitigate systemic risks and ensure incident reporting. Moreover, deepfakes (artificial or manipulated images, audio, or video content) must be clearly labeled.
As indicated above, the Act will affect organizations outside the EU that provide online services containing AI elements accessible to EU consumers. Organizations that use AI in their processes are referred to as "deployers" under the Act and have a number of obligations. For example, deployers of "high-risk" AI systems will be required to ensure that human oversight is present in any decision-making process influenced by AI. Deployers must also ensure that the AI system is used in accordance with the system provider's instructions and that there is general oversight of its application. Certain deployers of high-risk systems, such as bodies providing public services, will also be required to conduct fundamental rights impact assessments.
Canadian charities and not-for-profits that utilize AI and have donors and/or customers in the EU should be aware of the new regulations. Penalties for noncompliance are harsh: for the most serious violations, fines can reach EUR 35 million or 7% of an organization's global annual turnover, whichever is higher.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
URL: www.carters.ca
© Mondaq Ltd, 2024 - Tel. +44 (0)20 8544 8300 - http://www.mondaq.com, source