Hey Chatbot: What are the legal issues?

Emily Dorotheou in Advertising

Following ADTEKR’s recent introduction to chatbots, we’re now turning our attention to the key legal issues faced by companies and chatbot platforms when advertising to consumers using this channel of communication. Although chatbots offer a huge marketing opportunity, companies will need to be aware of the potential legal challenges and bear the following in mind when navigating this tricky legal landscape.

Terms & Conditions

Chatbots need to be able to recognise if and when a user is required to accept terms and conditions. This is especially important if chatbots are being used to facilitate online transactions or provide any type of advice. The chatbot will need to present the terms and conditions in an appropriate format and then keep a record of the terms accepted by the user. Owners of chatbots should also bear in mind that any terms and conditions provided by the chatbot must contain all the information necessary to comply with UK and EU regulations and with company policy.
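
By way of illustration, a record of acceptance might look something like the following minimal Python sketch (the field names, the version label and the record_acceptance helper are our own illustrative assumptions, not any particular platform’s API):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TermsAcceptance:
    user_id: str
    terms_version: str  # which version of the T&Cs was presented
    accepted: bool
    timestamp: str      # ISO 8601, UTC

def record_acceptance(user_id: str, terms_version: str, accepted: bool) -> TermsAcceptance:
    """Keep an auditable record of whether the user accepted the terms."""
    return TermsAcceptance(
        user_id=user_id,
        terms_version=terms_version,
        accepted=accepted,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

# The chatbot presents the terms, then logs the user's response.
record = record_acceptance("user-123", "2016-09-v2", accepted=True)
```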

Owners of chatbots, and their lawyers, will also need to consider how a chatbot should react if a user starts asking questions about the terms or, for the more argumentative amongst us, starts negotiating them. If a chatbot, whether knowingly or accidentally, makes any kind of representation to a user, will this become part of the terms and conditions?

Advice and Disclaimers

The introduction of chatbots into highly regulated industries such as financial services, medicine, law and pensions highlights an array of potential issues relating to liability and the extent of the advice provided. Any company giving a chatbot authority to advise users will need to ensure that the chatbot has access to a large volume of up-to-date information so that it can understand instructions and questions and provide helpful, relevant responses. If chatbots are advising on health issues, financial decisions or legal matters, it is vital that the advice is informed, correct and highlights all associated risks. Where a chatbot is unable to answer, companies will need to consider a clear disclaimer and a trigger for human intervention.

Where a chatbot fails to understand an instruction or question, a request for human intervention should be triggered and the chatbot overridden by a human controller. The human controller’s next steps should then be fed back to the chatbot, allowing it to learn from its mistakes and avoid the need for human intervention in similar circumstances.
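
A minimal sketch of what such an intervention trigger might look like, assuming the bot’s intent model can report a confidence score (the threshold, the classify stand-in and the hand_off_to_human hook are all illustrative assumptions):

```python
CONFIDENCE_THRESHOLD = 0.6  # illustrative cut-off, tuned per deployment

def hand_off_to_human(user_message: str) -> str:
    # Placeholder: route the conversation to a human controller's queue.
    return "Let me connect you with a colleague who can help."

def respond(user_message: str, classify, training_log: list) -> str:
    """Answer automatically when confident; otherwise escalate to a human.

    `classify` stands in for the bot's intent model and returns an
    (answer, confidence) pair. The human controller's reply is logged so
    the bot can later be retrained on the cases it failed to handle.
    """
    answer, confidence = classify(user_message)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer
    human_reply = hand_off_to_human(user_message)
    training_log.append((user_message, human_reply))  # learn from the mistake
    return human_reply

# Example: a stub classifier that is never confident, so the bot escalates.
log = []
print(respond("Can I claim tax relief on this?", lambda m: ("", 0.1), log))
```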

User data and third parties

Chatbots have the potential to collect a large volume of information about customers, and data protection will be a key issue for companies using them. At a minimum, a chatbot is likely to have access to a user’s IP address, the transcript of its conversation with the user and any personal information the user provides. It is important that data privacy policies cover this technology and that companies have sufficient authority both to process user information and to pass the collated information on to third parties.
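
As a rough illustration, the authority to share could be checked in code before any transfer takes place (the consents field and the recipient categories below are hypothetical):

```python
def share_with_third_party(user_record: dict, recipient: str):
    """Return the collated data only if the user consented to this recipient.

    The `consents` field is an illustrative assumption: a set of recipient
    categories captured when the privacy policy was presented to the user.
    """
    if recipient not in user_record.get("consents", set()):
        return None  # no authority to share: withhold the data
    return user_record.get("data")

profile = {"consents": {"analytics-partner"}, "data": {"ip": "203.0.113.7"}}
assert share_with_third_party(profile, "ad-network") is None  # consent not given
```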

Rogue chatbots

The technology underpinning chatbots is highly complex and relies on underlying data of varying quality. Users will expect chatbot technology, and any answers a chatbot provides, to be accurate. Developers will therefore need to spend time training chatbots: ensuring that they learn from their interactions with humans; providing human support until the chatbots are sufficiently developed; and coding a large number of algorithms to provide a wide range of reactions and responses.

Companies should be cautious about the potentially detrimental, abusive or incorrect responses a chatbot may give, and bear in mind the effect a chatbot can have on a company’s image and profile. For example, a number of recent chatbot errors have caused embarrassment to companies and brands, given inappropriate answers or recommended competitor products, all of which erode the potential benefit that chatbots can bring to companies.

In order to minimise the effect of rogue chatbots, companies should react quickly to any complaints made by the public about their dealings with a chatbot; time moves very quickly online, and a slow response risks a social media PR storm. If companies have sufficient resources, it is recommended that they review a sample of chatbot conversations at random times throughout the year to ensure that the interactions are consistent with their brand ethos. If a chatbot does go rogue, companies and developers need to determine quickly whether it can be corrected behind the scenes or whether it needs to be removed from the platform to be amended. Companies should therefore incorporate the risks of using chatbots into their risk and crisis management planning.

Advertising regulation

The Advertising Standards Authority (ASA) is the regulatory body that oversees advertising in the United Kingdom across all media. Within the ASA’s remit are “marketing on companies’ own websites and in other space they control like social networking sites Twitter and Facebook” and “sales promotions, such as special offers, prize draws and competitions wherever they appear”. Chatbots are therefore likely to fall within the ASA’s remit, and the ASA will need to determine how best to regulate any advertising material chatbots provide to users.

Developers of chatbots will also need to embrace the CAP Code, which prescribes rules for advertisers, agencies and media owners and covers the content of marketing communications; the administration of sales promotions; the suitability of promotional items; and the use of personal information in direct marketing. Companies that use chatbots will need to ensure that any advertising directed through them complies with the Code. There are, for example, strict rules relating to politics, alcohol, financial products, weight-loss products, food and tobacco.

Sponsored products

Developers and owners of chatbots will need to bear in mind the rules around product recommendations and native advertising. Companies working in collaboration with sponsors who want to exploit chatbots will have to make clear when a chatbot is “sponsored”, “paid for” or “brought to you by” a brand, and when a chatbot is programmed to put forward sponsored products. It will clearly need to be flagged, for example, if a chatbot is programmed to always suggest a particular brand whenever anyone asks for drink recommendations or the nearest coffee shop.
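
A disclosure of this kind can be as simple as a label applied whenever a recommendation is paid for, as in this illustrative sketch (the brand name and label wording are invented for the example):

```python
def present_recommendation(product: str, sponsored: bool) -> str:
    """Prefix paid-for suggestions with a clear disclosure label."""
    label = "[Sponsored] " if sponsored else ""
    return f"{label}You might like {product}."

print(present_recommendation("BrewCo Coffee", sponsored=True))
# [Sponsored] You might like BrewCo Coffee.
```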

Children

The way in which companies regulate a chatbot’s ability to verify age, and the content of its conversations with children, will have to be carefully monitored and controlled. The Advertising Standards Authority takes particular care in regulating advertising to children, and companies will need to abide by the CAP Code to remain compliant. Chatbots will need to identify early on whether they are talking to a child and then tailor material and conversation topics accordingly.
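
One sketch of how a chatbot might restrict conversation topics once a user’s age is known (the topic list and the threshold of 18 are purely illustrative; the correct threshold depends on the rule in question):

```python
RESTRICTED_TOPICS = {"alcohol", "gambling"}  # illustrative sensitive categories

def allowed_topics(user_age: int, topics: set) -> set:
    """Filter out topics that must not be marketed to children.

    Assumes age has already been verified upstream; the threshold of 18
    is illustrative and depends on the rule being applied.
    """
    if user_age < 18:
        return topics - RESTRICTED_TOPICS
    return topics

print(allowed_topics(15, {"alcohol", "soft drinks"}))  # {'soft drinks'}
```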

Moral issues

Chatbots attempt to replicate human conversation in the way they interact with users. Companies should therefore take precautions and have policies in place to cover moral issues that arise during these conversations, in particular where the chatbot is operating within a highly sensitive field. For example, if a user asks a chatbot a sensitive question about suicide, should the chatbot report this to the local police or health authorities? Could chatbots be seen to be taking on a role of trust or a duty of care? And if a user asks for information about where to buy products that may be deemed suspicious (e.g. large quantities of fertiliser), at what point should the chatbot conclude that the request needs to be reported to the police? These moral questions may need to be regulated or covered in company policies, especially where chatbots give medical advice or interact with vulnerable individuals.
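
Even a crude safeguard, such as flagging conversations for human review when sensitive terms appear, forces these policy questions to be answered up front. The sketch below is a naive keyword check, shown only to illustrate the escalation step (the watch-list is invented):

```python
SENSITIVE_TERMS = {"suicide", "self-harm", "fertiliser"}  # invented watch-list

def needs_human_review(message: str) -> bool:
    """Flag a conversation for human review when sensitive terms appear.

    A naive keyword check: whether and to whom anything is then reported
    is a policy decision, not something the code can settle.
    """
    return bool(set(message.lower().split()) & SENSITIVE_TERMS)

print(needs_human_review("where can I buy large quantities of fertiliser"))  # True
```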

Chatbot developers have focused on developing technology that can collect data from a wide range of sources and communicate it to users. Their role is to provide users with a seamless and efficient customer experience whilst maintaining a company’s brand. Chatbots are well equipped to do this, as they are programmed to understand typical consumer use cases and are built on natural language processing technology. However, as ever, there will be a period of regulatory uncertainty as the law attempts to catch up with the technological development of chatbots and other related artificial intelligence products, and to create rules around how chatbots should navigate the legal issues outlined above.

With many thanks to Katharine Lammiman who co-wrote this article.
