Why many early chatbots failed

By Jason Contant | October 29, 2018 | Last updated on October 30, 2024

Early adopters of chatbot technology, most of whose bots have since been shut down or dismantled, often did not realize the extent to which the systems need to be trained, Accenture said last week.

“There was this… false sense that the machines would just learn themselves, so let’s go,” Jodie Wallis, managing director for artificial intelligence in Canada with Accenture, told Canadian Underwriter last week. “The reality is they can be super effective, but they absolutely need to be trained.”

Wallis discussed challenges and benefits of incorporating artificial intelligence in the insurance industry following Accenture’s release of the second season of its podcast series The AI Effect. The seven-episode series, which includes an episode on insurance, was released Oct. 23 and is hosted by Wallis and technology journalist Amber Mac.

A chatbot should always be updated with new features and answers, Chris Gory, president of Insurance Portfolio Financial Services, an employee benefits firm for start-ups, told Canadian Underwriter earlier this month. “It’s best to review the chatbot logs on a weekly or monthly basis to see what kind of questions the bot has been asked, and what it couldn’t answer.”
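That kind of log review can be scripted. The sketch below is illustrative only: it assumes the bot’s logs can be exported as a CSV with hypothetical columns for the customer’s question and the intent the bot matched (or a fallback label when it could not answer), which is not any particular vendor’s format.

```python
# Minimal sketch of a weekly chatbot log review (assumed CSV export).
# Hypothetical columns:
#   question        - the text the customer typed
#   matched_intent  - the intent the bot matched, or "fallback" if it could not answer
import csv
from collections import Counter

def unanswered_questions(log_path: str, fallback_label: str = "fallback") -> Counter:
    """Count the questions the bot failed to answer, most common first."""
    misses = Counter()
    with open(log_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row["matched_intent"] == fallback_label:
                misses[row["question"].strip().lower()] += 1
    return misses

if __name__ == "__main__":
    # Print the ten questions the bot most often failed to answer this period.
    for question, count in unanswered_questions("chatbot_log.csv").most_common(10):
        print(f"{count:>4}  {question}")
```

The output of a script like this is simply a ranked list of the questions the bot missed, which is the raw material for deciding which new answers to add.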

Humans need to be in the loop at all times, monitoring questions and responses, Wallis said.

“You’ll need to ensure you have someone to manage the learning, as well as take over when the chatbot can’t answer the questions,” agreed Amanda Ketelaars, operations manager at Mitchell & Whale Insurance Brokers, whose website featured a chatbot up until about one year ago. “You want to ensure a great customer experience and there is nothing worse than getting caught in the loop with the chatbot and not having a human there to pull the customer out of the loop.”

Chatbots can get caught in a loop: a customer asks a question the bot can’t answer, the bot responds with something unhelpful, the customer rephrases, and the cycle repeats. The customer grows frustrated because the question never gets answered and the bot never responds properly.

“As soon as the chatbot has difficulty answering a customer’s question more than once, it would automatically route to a human agent,” Wallis said. “Part of the key there is when that happens, you need to have the right knowledge engineers in place to assess what happened and to build that in real-time back into the model so it doesn’t happen again.”
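A minimal sketch of the escalation rule Wallis describes might look like the following. This is not Accenture’s implementation or any vendor’s API; the names and the one-miss threshold are assumptions chosen to mirror the quote.

```python
# Sketch of the escalation rule described above: if the bot fails to answer a
# customer's question more than once in a conversation, route the conversation
# to a human agent and queue the missed question for knowledge engineers.
from typing import Optional

MAX_FAILURES = 1  # escalate as soon as the bot misses more than once


def log_for_knowledge_engineers(question: str) -> None:
    # Placeholder: in practice this would feed the missed question back into
    # the bot's training data, as Wallis describes.
    print(f"[review queue] {question}")


class Conversation:
    def __init__(self) -> None:
        self.failures = 0
        self.escalated = False

    def handle(self, question: str, bot_answer: Optional[str]) -> str:
        """Return the reply to send, handing off to a human when needed."""
        if self.escalated:
            return "A human agent is handling this conversation."
        if bot_answer is None:  # the bot could not answer
            self.failures += 1
            if self.failures > MAX_FAILURES:
                self.escalated = True  # route to a human agent
                log_for_knowledge_engineers(question)
                return "Let me connect you with a human agent."
            return "Sorry, I didn't catch that. Could you rephrase?"
        return bot_answer
```

The design point is the same one Wallis makes: the handoff and the feedback to knowledge engineers happen together, so the missed question both reaches a human and gets built back into the model.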

Jason Contant