As artificial intelligence grows, evolves, and refines its implementations, there are sure to be hiccups along the path of innovation. Some consumer-facing AI products are so efficient and seamless that they skew our expectations for what AI and machine learning can accomplish elsewhere. On the other hand, hastily built AI solutions that are not trained properly, and are then left to interact with users without supervision, can put a damper on what the public at large expects from a smart solution.
It’s no surprise that as AI’s potential captures headlines, questions arise about what exactly we can hope to achieve with it. Here are a few recent stories touching on the expectations we have for artificial intelligence in customer service and the broader consumer space.
Australian telecommunications company Telstra found itself in hot water this month, as customers took to social networks to blast Codi, its new virtual assistant.
— Tim (@skramit) February 26, 2018
As Wysdom CTO Karthik Balakrishnan pointed out last year, failure to train is training to fail. AI supervision is paramount to initial success in cognitive customer care. Letting your customers use the platform before it’s ready can reinforce negative perceptions and leave them reluctant to use digital support channels down the road.
Great article about the 2 types of virtual assistant. They generally get mixed together but they require different approaches to deliver a great customer experience. #AI #MachineLearning https://t.co/9UVSs8so36
— Ian Collins (@Wyrex95) March 13, 2018
This article flips the script on how we evaluate chatbots and automated assistants. Instead of hoping for a solution that instantly works in every situation like a human agent, we should create and celebrate more purpose-driven solutions that complete a narrow set of tasks more quickly and efficiently.
The article goes on to differentiate bots built for discovery versus those focused on service.
How To Make A.I. That’s Good For People
New York Times
— Fei-Fei Li (@drfeifei) March 8, 2018
Stanford Artificial Intelligence Lab Director Fei-Fei Li urges readers to consider AI’s effect on society as a whole. She touches on the aforementioned need for human training and supervision, not only from AI researchers but also from subject-matter experts who can inform the platform beyond the basics.
“Making A.I. more sensitive to the full scope of human thought is no simple task. The solutions are likely to require insights derived from fields beyond computer science, which means programmers will have to learn to collaborate more often with experts in other domains.”