Digital customer care is poised to be an area of significant savings for consumer-facing businesses' bottom lines in the coming years. No wonder companies are turning to quick fixes to implement such solutions rapidly; chatbots provide a relatively immediate and inexpensive tool that offers the illusion of a more sophisticated, artificial-intelligence-based platform.
Quick, easy, and cheap solutions so rarely pan out, however. Over the past few years, some companies have bet big on bots that simply regurgitate the information given to them, or inadvertently create offensive content that can immediately damage the relationship between a business and its customer base. Below are just a few examples of bots "blowing it" (or #botfails, as they are tagged on Twitter) for one or more of the reasons above.
Thinking and you
The most famous example is Tay (an abbreviation of "thinking and you"), a chatbot released by Microsoft in the spring of 2016. Originally developed as a joint program between the company's search and technology research divisions, the final product was envisioned as a female American teenager that could interact with users on Twitter as a proof of concept. Based on similar experiments that had performed well in China, certain topics were off limits for the bot (such as recent "hot topic" events), but she was advertised as having attitude, or "zero chill".
Tay could respond to questions from users, comment on images, and "learn" from the conversations she was having, which would prove to be her demise. Some users began teaching the bot unscrupulous language, which Tay would then use in her responses to them. Worse, her learning capabilities meant she would repeat some of this chauvinistic, misogynistic, and racist language in response to other users' more innocuous queries.
While Microsoft reacted swiftly, trying to delete as many offensive tweets as it could, the irony is that Tay was built too well and produced the unsavory material faster than it could be removed. What Microsoft hoped would be a display of its artificial intelligence prowess morphed into a public relations disaster, and the company soon shut down the account. A much more censored and limited version, released nine months later, arrived to little fanfare.
Of course, not all bot fails are as controversial as Tay; most are just examples of user frustration with, or indifference to, a bot's inability to do what its developers claim it can. Take Google Assistant: asking it to tell you a joke should be a simple task, but alas:
[Embedded tweet] — Niall Quinn (@niall_quinn1) October 26, 2016
And some bots show their limits when dealing with human sarcasm or antagonism. Take the White House's Facebook Messenger bot, which looked less than intelligent when attempting to collect a user's contact information:
[Embedded tweet] — Kurt Wagner (@KurtWagner8) August 10, 2016
Though these examples aren't outright disasters, they illustrate the main problem with poorly behaving bots in the consumer space: it only takes a few missteps for users to dismiss bots and intelligent chat options when communicating with businesses and institutions online, or even with their own devices.
While a mistake can be dismissed as a growing pain in the evolution of AI in digital care and communication, perception also goes a long way in the early days of a platform: positive early experiences can help foster and accelerate adoption of these solutions, just as failures can sour users on them.