The power and potential of emotional chatbots

Dan Levy is the editorial director at Smooch by Zendesk. A journalist by training and content strategist by accident, he's fascinated by the ways messaging is changing our relationships with brands and each other.


Like the web, mobile apps and vinyl records, chatbots have been declared dead countless times only to prove very much alive — in that artificial way.

Some have blamed a lack of AI for the abundance of disappointing chatbots, while others have singled out the “hype cycle” itself for inflating our expectations.

Yet almost every week another global bank or airline or insurance company rolls out a new chatbot, reviving speculation that maybe, just maybe, 2018 will finally be “The Year of the Chatbot.”

As machine learning and natural language processing technology improve, chatbots will only get smarter. But in order for bots to become truly indispensable pieces of a business’ customer experience, they will need to get more emotional as well.

Bots don’t judge

Most of our interactions with bots are transactional in nature. Order me a pizza. Tell me today’s weather. Remind me to buy flowers for Mom. But a growing breed of chatbots is being designed to do something far more impactful: make us feel better.

Earlier this year, Wired delved into the strange and fascinating world of “companion robots” — bots made to keep us company, calm us down after a tough day at the office, or even replicate the personality of a departed loved one.

While this may seem like science fiction — indeed, Black Mirror already took the idea to its horrific extreme — there could be societal advantages to the technology.

Alison Darcy, the CEO behind a “therapeutic bot” called Woebot, says some people actually feel more comfortable opening up to robots than humans, telling Wired that “when you remove the human, you remove the stigma entirely.”

Meanwhile, a New York startup called Mei is building a new messaging app with a built-in “relationship AI.” Not unlike the context-driven “suggested replies” built into Gmail, Mei recommends ways to engage with a user’s contacts based on their age, gender and “personality profile,” which it builds from their conversation history.

Mei recently introduced an “anomaly detection” feature that points out when a contact is messaging in a way that doesn’t fit their usual pattern so the user can check in and see if they’re okay.
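To make the idea concrete, here is a minimal sketch of what that kind of pattern detection might look like under the hood, using a simple z-score on daily message counts. The approach and the names are purely illustrative, not Mei's actual system.

```python
# Illustrative sketch: flag a contact whose recent messaging pattern drifts
# from their historical baseline, using a simple z-score on daily message counts.
from statistics import mean, stdev

def is_messaging_anomalous(daily_counts, recent_count, threshold=2.5):
    """Return True if the latest daily message count is unusually far from the baseline."""
    if len(daily_counts) < 7:                    # not enough history to judge
        return False
    baseline_mean = mean(daily_counts)
    baseline_std = stdev(daily_counts) or 1.0    # avoid division by zero
    z_score = abs(recent_count - baseline_mean) / baseline_std
    return z_score > threshold

# Example: a contact who usually sends ~20 messages a day suddenly goes quiet.
history = [22, 19, 25, 21, 18, 23, 20, 24]
print(is_messaging_anomalous(history, recent_count=2))  # True -> prompt the user to check in
```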

In the midst of the #MeToo movement, a number of chatbots have also emerged to help businesses and employees deal with sensitive issues like workplace harassment and discrimination.

Spot allows employees to anonymously report “inappropriate moments at work” without talking to a human. Its chatbot is programmed to ask the sort of open-ended questions that police would use to interview a crime victim or eyewitness. Co-founder Julia Shaw, a criminal psychologist and memory scientist, believes removing humans from the equation will encourage more victims to come forward.

Loris, launched by the founder of the SMS helpline Crisis Text Line, is taking a more proactive approach. The AI-powered messaging software coaches users on how to build empathy and listening skills so they can navigate difficult conversations at work, whether with angry customers, sensitive employees, or diverse colleagues.

While all these bots bring up legitimate concerns about data privacy, security, and the plain old “creepiness factor,” sharing sensitive information with non-sentient beings has at least one key advantage: Bots don’t judge.

Automating empathy

The notion of emotionally intelligent AI becomes more fraught when businesses enter the equation. Over the past few months, nearly all the major tech companies have rolled out business messaging platforms, including Apple (Business Chat), Facebook (WhatsApp Business API) and Google (RCS business messaging and Google My Business).

As consumers begin to discover they can easily message businesses using the same apps they’re already spending the majority of their screen time on, businesses will need to figure out how to handle these conversations at scale. Chatbots will no doubt play an important role.

Sciensio, a bot agency that’s developed chatbots for clients in the healthcare and event management industries, uses an AI technique called “advanced learning” to understand customer intent and provide relevant answers that may not even occur to some literal-minded humans. For example, its event bot has learned to interpret the declarative statement, “I have to pee!” as an urgent question: “Where is the bathroom?”
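As a rough illustration of that idea (not Sciensio's actual pipeline), a bot can map free-form utterances onto intents and answer the underlying need rather than the literal words. The intents, keywords and responses below are invented for the example.

```python
# Illustrative intent matching: map an utterance to the intent with the most
# overlapping keywords, so "I have to pee!" resolves to the bathroom answer.

INTENTS = {
    "find_bathroom": {"bathroom", "restroom", "toilet", "washroom", "pee"},
    "event_schedule": {"schedule", "agenda", "when", "start", "time"},
    "find_parking": {"parking", "park", "car", "garage"},
}

RESPONSES = {
    "find_bathroom": "Of course! The nearest restrooms are by the main entrance.",
    "event_schedule": "Doors open at 9am; the keynote starts at 10am.",
    "find_parking": "There's a parking garage directly across the street.",
}

def classify(utterance: str):
    """Return the best-matching intent, or None if nothing matches."""
    tokens = set(utterance.lower().replace("!", "").replace("?", "").split())
    scores = {intent: len(tokens & keywords) for intent, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None   # hand off to a human if nothing matches

print(RESPONSES[classify("I have to pee!")])    # answers the need, not just the question
```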

“A subtle, but critical component of conversation design is responding to the customer need, not just answering their question,” says Sciensio CEO Chuck Elias. “For example, the technically correct answer to ‘May I use the bathroom?’ is ‘Yes,’ but the response you are looking for when asking that question is ‘Of course, it’s over there!’”

Structurely, an Iowa-based startup that builds AI-powered chatbots for real estate agents, has added elements like deliberate typos and delays between messages to its AI, in order to make the conversation feel more natural.
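Here is a hedged sketch of what that kind of “humanizing” layer could look like in practice; the pacing numbers, typo rate and function names are assumptions for illustration, not Structurely's actual parameters.

```python
# Illustrative sketch: pace a bot reply with a typing delay scaled to message
# length, and occasionally "slip" with a deliberate typo followed by a correction.
import random
import time

def send_humanized(reply: str, send_fn, words_per_minute: int = 50) -> None:
    typing_delay = len(reply.split()) / (words_per_minute / 60)  # seconds of "typing"
    time.sleep(min(typing_delay, 3.0))          # cap the wait so users aren't left hanging

    if random.random() < 0.1:                   # ~10% of the time, make a typo and fix it
        words = reply.split()
        i = random.randrange(len(words))
        typo = words[i][:-1] if len(words[i]) > 3 else words[i] + "s"
        send_fn(" ".join(words[:i] + [typo] + words[i + 1:]))
        time.sleep(1.5)
        send_fn("*" + words[i])                 # the familiar asterisk correction
    else:
        send_fn(reply)

send_humanized("So sorry to hear about your situation. Take all the time you need.", print)
```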

The bot has been designed to respond empathetically to prospective clients who may be selling their homes under difficult circumstances, like a death or divorce.

"Buying or selling your home is a wildly emotional and stressful time in your life," Structurely CEO Nate Joens told me. “So the last thing a buyer or seller wants to talk to is a computer.”

The human touch

A recent chatbot snafu demonstrates what can go wrong when AI is charged with parsing human emotions. In September, a WestJet customer sent a note to the Canadian airline via Facebook Messenger, complimenting a crew member who had helped protect a “plant cutting” as she flew home with a new succulent.

Presumably triggered by the word “cutting,” the airline’s chatbot responded by suggesting the happy customer reach out to a suicide prevention hotline.

Sometimes, real human empathy requires real humans.

Structurely and Sciensio both allow human agents to take over the conversation if their bots get confused or a customer asks to speak to a real person. For now, Apple and WhatsApp are requiring brands with early access to their business messaging platforms to have customer support agents on hand to chat with customers.
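A simple version of that handoff logic might look like the sketch below, escalating when the customer asks for a person or when the bot loses confidence. The thresholds, phrases and queue names are illustrative assumptions, not any vendor's actual API.

```python
# Illustrative human-handoff rule: escalate when the customer asks for a person,
# the bot's intent confidence drops below a threshold, or it has failed twice.

HANDOFF_PHRASES = ("human", "real person", "agent", "representative")
CONFIDENCE_THRESHOLD = 0.6

def should_hand_off(message: str, intent_confidence: float, failed_turns: int) -> bool:
    asked_for_human = any(phrase in message.lower() for phrase in HANDOFF_PHRASES)
    bot_is_lost = intent_confidence < CONFIDENCE_THRESHOLD or failed_turns >= 2
    return asked_for_human or bot_is_lost

def route(message: str, intent_confidence: float, failed_turns: int) -> str:
    if should_hand_off(message, intent_confidence, failed_turns):
        return "escalate_to_agent_queue"
    return "continue_with_bot"

print(route("Can I talk to a real person?", intent_confidence=0.9, failed_turns=0))
# -> escalate_to_agent_queue
```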

But once these platforms become widely available and the floodgates open, it’s only a matter of time until chatbots join them on the front lines. When they do, conventional intelligence — artificial or otherwise — won’t be enough to deliver a truly human customer experience.

