How the Air Canada Case Highlights the Dangers of AI Chatbots
Marian Briceno
In Featured, Horatio Insights
Feb 20 2024
In a world where experts are raising concerns about the dangers of artificial intelligence, the recent Air Canada court case brings light to one of its main manifestations: AI chatbots.
What happened with Air Canada?
In 2022, Canadian citizen Jake Moffatt used the airline’s chatbot to ask about bereavement fares and the requirements to qualify for one ahead of a last-minute trip to a funeral. The chatbot provided false information; Ars Technica reports that it replied: “If you need to travel immediately or have already travelled and would like to submit your ticket for a reduced bereavement rate, kindly do so within 90 days of the date your ticket was issued by completing our Ticket Refund Application form.”
Moffatt followed the advice and was surprised when his refund request was rejected. The airline denied the refund on the grounds that the chatbot had also linked to the company’s bereavement policy, which states: “Please be aware that our Bereavement policy does not allow refunds for travel that has already happened.”
Despite receiving a $200 credit to use on a future flight, Moffatt remained unsatisfied and filed a complaint in small claims court. Air Canada continued to base its defense on the link to the company’s real policy that the chatbot had shared, while also arguing that the chatbot was a “separate legal entity” and therefore itself liable for the mistake. Experts have said that Moffatt’s case “appears to be the first time a Canadian company tried to argue that it wasn’t liable for information provided by its chatbot.”
After months of holding out, Air Canada lost the small claims case: the court determined that the airline couldn’t give a clear explanation of why the bereavement travel webpage should be trusted over the chatbot, since both are parts of its website. On that basis, the Tribunal found that Air Canada had committed “negligent misrepresentation.”
The Importance of the Human Touch
As the first case of its kind, it has set a precedent in Canadian law and carries significant implications for the future use of AI chatbots in customer service.
This case further supports the argument that artificial intelligence can’t fully replace humans: it is now clearer than ever that the human touch is not going anywhere. In situations that are highly emotional and personal, chatbots are no substitute for the nuanced skills human agents provide. The case also teaches brands and consumers alike to value companies that still make it easy to speak directly with an agent when it matters, such as in cases of bereavement. A CX partner that treats omnichannel support as necessary, but never as a replacement for the human touch, is the kind of company you want protecting your brand.