How smart speakers could ruin your reputation

By Jason Contant | May 2, 2019 | Last updated on October 2, 2024
2 min read

Connecting consumers with brokers and insurers through voice technology has already begun, but a new blog from Harvard Business Review is warning companies that their reputation could be at risk.

The blog was written for general business audiences, but included an insurance example. It warned that if a company prematurely introduces smart speakers for more sophisticated purposes and the technology fails, the company’s hard-earned reputation could suffer.

“Voice-enabled algorithms will not be able to advise customers on different options for products like mortgages, insurance and car loans until they can grasp much more detailed financial information from separate accounts in potentially multiple institutions that customers provide vocally – beyond the partial data readily available,” said the blog, What Companies Should Consider Before Investing in Smart Speakers. “Important information could also fall through the cracks as clients move between devices and channels.”

Beyond cost concerns, companies need to confirm they can manage the reputational and operational risks that could accompany each new smart speaker application. For example, businesses will have to monitor whether smart speakers are giving inappropriate financial advice that could endanger a customer’s financial health, wrote blog authors Gokhanedge Ozturk and Shri Santhanam.

Ozturk is a partner in the digital practice of management consulting firm Oliver Wyman, while Santhanam is a partner in Oliver Wyman’s digital, technology and analytics practice.

Providing security against fraud committed with recordings of customers’ voice data is another concern, the blog warned. Voice fraud incidents are already rising, since faking audio files is now in many ways easier than copying credit cards or fingerprints. “With enough data, artificial intelligence programs can generate fairly convincing audio files of anyone,” the authors wrote.

To combat voice fraud, companies will have to build and maintain systems that verify vocal orders by doing much more than asking key questions. They will also need new ways to detect fraudulent vocal orders, and to notify customers of them, as efficiently as they do for false orders placed on computers or in stores. For example, systems and algorithms will have to be developed that can quickly check for links to previous fraud incidents and determine whether voices are real or pre-recorded.
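To make the idea concrete, the layered verification the authors describe might look something like the following minimal sketch. Everything here — the class, function, field names, and thresholds — is invented for illustration; it is not an implementation from the blog, only a toy example of combining a liveness check, a lookup against previously flagged fraud incidents, and a risk limit before accepting a vocal order.

```python
# Hypothetical sketch of layered verification for a vocal order.
# All names and thresholds are illustrative assumptions, not a real system.
from dataclasses import dataclass


@dataclass
class VocalOrder:
    caller_id: str          # identifier tied to the customer's account
    liveness_score: float   # 0.0 (likely replayed/synthetic) .. 1.0 (likely live speech)
    amount: float           # transaction amount requested by voice


def verify_vocal_order(order, flagged_callers,
                       liveness_threshold=0.8, max_unreviewed_amount=1000.0):
    """Return (accepted, reason). Orders linked to prior fraud incidents,
    low-liveness audio, or unusually large amounts are rejected for
    human review rather than processed automatically."""
    if order.caller_id in flagged_callers:
        return False, "linked to previous fraud incident"
    if order.liveness_score < liveness_threshold:
        return False, "audio may be pre-recorded or synthetic"
    if order.amount > max_unreviewed_amount:
        return False, "amount exceeds limit for unreviewed vocal orders"
    return True, "accepted"
```

In a real deployment the liveness score would come from an anti-spoofing model and the flagged-caller check from a shared fraud database; the point of the sketch is only that no single signal decides the outcome.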

Overall, the blog recommends that companies proceed cautiously, and in stages.

Jason Contant