
Chatbot Fails & What Field Service Can Learn from Them

July 13, 2017 ClickSoftware

In the past year, chatbots have exploded in popularity. E-commerce giants, customer service websites, and even the White House have all dipped their toes in the chatbot waters. Some have been successful; others, quite the opposite.

With organizations like Facebook and Microsoft failing catastrophically with chatbot technology, where does that leave the rest of us? And more importantly, should field service organizations even bother with chatbots in the first place?

While this emerging technology shows great promise for our industry, particularly in the form of virtual assistants, there are many lessons to be learned from early implementation failures.

But first, a definition:

A chatbot (short for chat robot) is a computer program that can autonomously communicate with people via text or voice. Whether voice-activated (e.g. Siri, Alexa) or text-based (e.g. a web chatbot), these programs leverage artificial intelligence to communicate by mimicking human behavior to the best of their programmed ability.
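To make that definition concrete, here is a minimal sketch of the text-based variety, written in Python. The keyword rules are entirely hypothetical and stand in for the AI layer a production bot would use:

```python
# A minimal, illustrative text chatbot. Simple keyword rules stand in
# for the AI layer a production bot would use; every rule here is
# hypothetical.
RULES = {
    "hello": "Hi there! How can I help you today?",
    "hours": "We're open Monday through Friday, 8am to 6pm.",
    "bye": "Thanks for chatting. Goodbye!",
}

def reply(message: str) -> str:
    """Return the response for the first keyword found in the message."""
    text = message.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    return "Sorry, I didn't catch that. Could you rephrase?"

if __name__ == "__main__":
    while True:
        user = input("You: ")
        print("Bot:", reply(user))
        if "bye" in user.lower():
            break
```

A real chatbot replaces the keyword table with natural language processing, but the interaction model (text in, text out) is the same.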

And now, on to the fail show.

Fail #1: Microsoft

On March 23, 2016, Microsoft released an artificially intelligent bot on Twitter named Tay (short for Thinking About You). The bot was designed to act like a modern 19-year-old American girl and to autonomously interact with, and learn from, other Twitter users. Within hours, those users had taught the software inflammatory language and offensive cultural cues.

Sixteen hours after launch, Microsoft was forced to shut the bot down completely due to the negative press and public sentiment it had generated. Following Tay’s shutdown, Microsoft released another bot in the United States named Zo, but this bot was likewise ultimately a failure.

To be fair, Microsoft has successfully launched chatbots in China and Japan, neither of which has gone off the rails.

Examining this situation reveals two major mistakes Microsoft made that service organizations should avoid when developing chatbots:

1. Avoid Clinging to Cultural Cues

Microsoft’s attempt at mimicking millennial teen behavior lacked taste and backfired in a big way. Instead of looking cool to its target audience, the company demonstrated to the entire world that it knew little about its millennial audience’s needs, behaviors, and desires.

Field service organizations that want to leverage chatbot technology to communicate with customers should avoid mimicking culturally charged language or mirroring customer personas too closely. The key is being there to answer customer questions in their moment of need. Don’t waste precious time trying to fool your customers into thinking your chatbot is their friend. Your customers don’t need robot friends.

2. Serve a Purpose That Serves Your Customers

In theory, Tay was a cool new teen robot that internet users could chat with.

In reality, it was a cultural data-mining experiment meant to gather intelligence about a target audience. Tay served no purpose beyond giving Microsoft actionable intelligence it could use in future product development.

Harmful to society? Not really. But consumers rarely want to interact with a brand’s software unless they get something of value in return. Microsoft made itself a target by putting out software that served no purpose for its users.

Field service professionals must home in on specific purposes for chatbots. Pick one or two big customer pain points, and use chat technology to make them a little better. Are your customers frustrated by long wait times when calling in for service? Offer them the opportunity to schedule their next appointment via chat.

Are there frequently asked questions, or common issues your customers face? Program your chatbot to answer these questions quickly, instead of forcing customers to hunt around your website for an answer.
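As a rough illustration of both ideas, the Python sketch below routes a customer message either to an appointment-scheduling flow or to a canned FAQ answer. The intents, FAQ entries, and the next_open_slot helper are all hypothetical, not any particular product’s API:

```python
from datetime import date, timedelta

# Hypothetical FAQ entries a field service bot might answer directly.
FAQ = {
    "reschedule": "You can reschedule any visit up to 24 hours in advance.",
    "invoice": "Invoices are emailed within 48 hours of a completed job.",
    "warranty": "All repairs carry a 90-day parts and labor warranty.",
}

def next_open_slot() -> str:
    """Stand-in for a call to a real scheduling system."""
    return f"{date.today() + timedelta(days=2)} between 9am and 11am"

def handle(message: str) -> str:
    text = message.lower()
    # Pain point #1: long phone waits -> let the bot book the visit.
    if "appointment" in text or "schedule" in text:
        return (f"The next available technician visit is "
                f"{next_open_slot()}. Shall I book it?")
    # Pain point #2: common questions -> answer instantly instead of
    # making the customer hunt through the website.
    for keyword, answer in FAQ.items():
        if keyword in text:
            return answer
    # Anything else: hand off to a human rather than guess.
    return "Let me connect you with a service agent who can help."

print(handle("Can I schedule an appointment for my furnace?"))
print(handle("Where is my invoice?"))
```

The point is the design choice, not the code: each branch maps to a specific, pre-identified pain point, and anything outside that scope escalates to a person instead of guessing.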

Fail #2: Facebook & The White House

In April 2016, Facebook launched artificial intelligence chatbots for its Messenger app. Early partners included 1-800-FLOWERS, CNN, and a weather app called Poncho.

Long story short, it didn’t go so well. Poncho, for one, wasn’t so great at delivering accurate weather predictions.

On August 10, 2016, the White House announced it would receive correspondence from citizens via a Facebook Messenger bot. Within the first week, journalists broadly agreed that the chatbot was labor-intensive to use and even posed some basic security concerns.

There were several problems with, and lessons to learn from, this chatbot, though the main takeaway centers on usability.

Don’t Make the Customer Experience More Cumbersome

From an experience perspective, the White House chatbot was no different from other digital channels for contacting POTUS. Instead of responding to users, the chatbot simply led them through a step-by-step process of filling out a message that may or may not have been passed through the system for approval. The actual process of using the chatbot was labor-intensive and no more convenient or valuable than sending an email. Arguably, it was even more cumbersome than previous methods.

Using technology for technology’s sake rarely goes well. Service organizations can learn from the White House that if they are going to jump into new technology, they must embrace it wholeheartedly. It’s simply not enough to offer a new service channel that is as cumbersome as previous touchpoints or that fails to deliver on the promise of easier use.

For weekly updates on all things Field Service, subscribe to the Field Service Matters blog.
