Bots should sound like, well, bots

If it walks like a duck and talks like a duck, but it's not a duck... customers might get confused.

Andrei Papancea

03/21/2023

One of the major concerns brands have when automating conversations is how an AI-driven automated experience compares to speaking with a human agent. Brands want customers to feel comfortable interacting with a bot. They want the customer inquiry to be handled swiftly, and for customers to feel engaged. 

Both agent-led and AI-led conversations can achieve those results. What’s more is they pair beautifully together to lower hold times and offer customers 24/7, self-paced, and satisfactory service. 

But it’s a mistake to compare the two apples to apples, because the way each method achieves those results is, and should be, different. 

And it starts with how the initial inquiry is answered.

Agents typically answer with a brief opening line, state their name, and inquire further about why the customer is reaching out. It’s usually clear that the customer is talking to a live agent, and it’s what many customers might expect after listening to the tone, pace, and delivery of the opening lines. 

A virtual assistant might answer the inquiry initially the same way, but with one additional and crucial step: identifying itself as a bot in case it’s not already clear from the written or verbal interaction. 

Now, this step is often forgotten and sometimes ignored within the conversational design process. “So what if the customer thinks the bot is human?” “Isn’t that a good thing? Doesn’t that make it seem like we have a larger service team that can help them immediately?” 

Yes, but you’re really playing with the customers’ trust in your brand. 

A recent study by PwC shows that trust is crucial in consumer buying decisions: “71% of consumers say they’re unlikely to buy if a company loses their trust.” And if the old marketing rule of thumb holds true - that it’s six to seven times more expensive to acquire a new customer than to retain an existing one - then failing to be honest in customer service experiences could hurt the brand’s bottom line. 

Failure to disclose that a bot is a bot can sow customer dissatisfaction and distrust, ultimately fueling a decision to switch to another product or service. 

Furthermore, you’re also playing with your customers’ time, which is incredibly valuable and should be treated as such. Most people interact differently with bots than they do with humans. So, when customers enter your customer service pipeline, it should be clear whether they’re speaking to a bot or a human so they can make the most effective use of their time. 

And finally, the law requires you to be transparent. As an article from Zendesk points out, California’s “B.O.T.” law requires companies to inform customers when they’re talking to a chatbot instead of a human. 

It’s for these reasons that we advise the companies we work with to be forthright with their customers: it needs to be immediately apparent, or explicitly stated, that they are interacting with a virtual assistant. This builds trust and often helps the customer understand how to interact for the best results, leading to a better experience. 

And robotic voices aren’t a bad thing - they’re honest - as long as the voice is clear and its diction understandable. The same standards hold true for agents, just with a more human-like delivery. 

Inquiring customers usually aren’t frustrated by a bot’s robotic voice, but rather by a conversation flow that doesn’t empower them to resolve the issue on their own.

These points aside, there is something to be said for a strong brand sound. Incorporating a signature voice thoughtfully into your conversation can really envelop the customer in the brand experience. But I’d still caution brands to take the proper steps to make clear that the virtual assistant isn’t human, in order to retain trust. 

Andrei Papancea

Andrei is our CEO and Swiss Army knife for all things natural language-related.

He built the Natural Language Understanding platform for American Express, processing millions of conversations across AmEx’s main servicing channels.

As Director of Engineering, he deployed AWS across the business units of Argo Group, a publicly traded US company, and successfully passed the implementation through a technical audit (30+ AWS accounts managed).

He teaches graduate lectures on Cloud Computing and Big Data at Columbia University.

He holds an M.S. in Computer Science from Columbia University.