Misleading Chatbots, Corporate Responsibility, and the Myth of Unregulated AI

Should companies be responsible for what their client-facing chatbots communicate? According to a recent ruling by a Canadian court (Moffatt versus Air Canada), a company is liable for what its website's chatbot tells a customer. While the underlying legal principles are well-established in other contexts, this case is one of the first public discussions of liability for chatbots, which dynamically formulate responses based on customer input. As this example shows, and this article argues, before applying new laws and regulations to AI, it is important to make the best use of the existing ones.

Moffatt versus Air Canada

In November 2022, following the death of his grandmother, Jake Moffatt booked a flight with Air Canada. The airline offers discounted bereavement fares to those travelling because of the death of an immediate family member, and its chatbot informed him he could apply for these retroactively. However, Air Canada later claimed that this was not its policy and pointed to webpages stating that the bereavement policy does not apply to travel that has already occurred.

Air Canada argued it should not be responsible for information provided by one of its agents, servants, or representatives, including a chatbot powered by artificial intelligence (AI). 

The judge ruled that this was not the case and that a customer should not have to decide which of two pieces of conflicting information provided by a company is accurate, noting:

“In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.”

Existing Legal Principles Apply

Customer-facing chatbots are increasingly deployed to cut staffing costs and boost profitability; one estimate puts growth in the chatbot market at 470% between 2023 and 2028. However, since the output of these AI systems is not reviewed by humans, and generative AI remains error-prone, inaccuracies in the information they provide are common.

While companies may, as Air Canada did, disavow errors made by their chatbots, the well-established legal principle that employers are responsible for their employees' actions in the course of their work, even when performed poorly, still applies.

The hype around AI's novelty has prompted attempts to skirt precedents and existing legal frameworks. Still, while the misleading information was generated responsively by software rather than spoken by a human, the basic legal question is unchanged. If a chatbot provides inaccurate information to a customer, that information is still a result of the actions of an employee: in this case, an employee's decision to deploy the chatbot. The employer is responsible for its own communication, whatever form it may take.

Corporations looking to automate and reduce headcount to improve profitability are likely to make even greater use of AI systems to interact with customers. Gartner claims that, globally, “conversational AI” will reduce labour costs for contact centres by USD 80 billion in 2026. Saving on these costs, however, means reduced or no human review of the communications. Companies are likely to try to avoid liability for those AI-generated communications, probably through legal provisions such as disclaimers buried in website terms and conditions. They might even seek to ‘rent’ chatbots from a separate legal entity. Courts and consumer protection authorities should refuse these attempts.

In many cases, the risks posed to citizens and consumers by AI are already covered by existing laws. The Air Canada case is a good example. The novelty of AI should not be used as a justification for circumventing existing rules and regulations.

Attempts to escape responsibility through disclaimers should similarly be rejected by those who claim the banner of responsible AI.

The South African Regulatory Toolkit

African information and consumer protection agencies have an opportunity to act pre-emptively. In South Africa, for example, automated decision-making that uses personal information and results in legal consequences, or affects a data subject to a substantial degree, is prohibited unless the decision is governed by a law or a code of conduct (see Section 71 of POPIA).

A code of conduct that authorises automated decision-making must first be approved by the Information Regulator. The Information Regulator can require that all codes of conduct explicitly prohibit automated decision-makers from attempting to escape liability for miscommunications by their automated systems, including through disclaimers.

South Africa’s National Consumer Commission has extensive powers to address the issue under the Consumer Protection Act. Misleading information provided by a chatbot that affects aspects of a transaction, such as its price or conditions, is prohibited (Section 29). Anyone marketing a product or service is under a duty to correct an apparent misapprehension by a consumer when failure to do so amounts to a misrepresentation (Section 41(1)(c)). Attempts to contract out of responsibility for chatbot communications fall foul of the prohibition on unfair or unjust terms (Section 48).

While these powers are likely sufficient to deal with individual instances of misleading chatbots, the deployment of automated systems, including chatbots, at scale also requires market-shaping regulatory action. The National Consumer Commission has some scope for this too. It can create a voluntary code of conduct for automated communications on transactional information (Section 93(1)), and it could recommend that the Minister proclaim an industry code (Section 82) for a specific industry that uses chatbots. In formulating these, the Commission can cooperate with other agencies, such as the Information Regulator, where both have jurisdiction (Section 83).

Lessons for AI Regulation in Africa

Not all jurisdictions in Africa have the kind of regulatory toolkit that South Africa has. Those that don't could follow the guidance the African Union Data Policy Framework provides on institutions with the capacity to regulate data and, thus, data-dependent AI.

Pressure to regulate AI is likely to increase. During 2023, jurisdictions around the world, notably the European Union, began to regulate AI extensively. While these efforts may be perceived as largely positive, it is essential to recognise that AI is not entirely unregulated.

Numerous categories of existing law already apply to chatbots, automated systems, and other AI systems. While the application of existing rules may not always be clear, the Air Canada case illustrates that well-established rules can, in some instances involving automated systems, produce appropriate and equitable outcomes.

This challenges the notion that AI is entirely unregulated or necessarily requires distinct regulation for every aspect. While new regulation is needed in some instances, the claim that AI is unregulated can also serve as a pretext to lower the standard of regulation that applies.

Thanks to Dr. Scott Timcke for his insights.