

Quick: what’s the best way to get feedback from customers on how well your chatbot is meeting their expectations? I bet the first thing that came to mind was “A survey!”

When you need feedback on customer experience and engagement, the instinct is to reach for the time-honored survey.

But do user surveys actually give the insights you need on your chatbot experience, and are there better options?

Rethinking the CSAT survey to measure bot experience

The typical source of information on your chatbot experience is the traditional exit customer survey: a quick questionnaire launched post-conversation, before the customer jumps off to another task. Surveys are popular because they can easily be tacked onto the end of a conversation, and an abundance of tools offer customization for branding, questions, notification workflows, reporting and so on. They seem like an easy, effective way for a business or service team to generate feedback.

Customer service and experience are, or should be, an integrated part of digital business efforts. A recent article shows the impact of increasing CSAT on business performance:

The business impact of increasing Customer Satisfaction (CSAT)

[Chart: the business impact of increasing Customer Satisfaction (CSAT). Improving bot experience will increase CSAT.]

Source: Qualtrics (https://www.qualtrics.com/experience-management/customer/what-is-csat/)

Surveys have been part of the digital landscape for so long that their actual value often goes unquestioned. However, there are challenges with relying on a survey-based approach when measuring how your customers feel about their digital interactions. These include:

• Binary options like ‘thumbs up or down’ give minimal insight.

• Star ratings, typically 1–5, still provide little detail.

• Text response buttons can only highlight specific, predefined issues.

• Freeform text responses are hard to analyze in volume.

Whatever the response type, surveys can generate some primitive feedback numbers or insights, but they provide next to no color on how your customers actually feel about their user experience. Problems with these types of results include:

• Subjective responses: terms like “poor”, “okay”, “satisfied”, “good” and “highly satisfied” mean different things to different customers.

• Skewed toward complaints: for every customer who complains, around 26 never respond, according to research.

• Cultural bias: people have their own definitions of survey language, and regional differences can vary hugely.

• Short-termism: a CSAT score lacks the impact of other survey tools, and customers increasingly feel surveys are neither engaging nor valuable.

As bots play a growing role in customer engagement, handling thousands or even millions of customer conversations, relying on traditional surveys to evaluate customer experience is a flawed approach. In the era of big data and AI, we can analyze bot interactions to automatically identify the customer experience. For example:

• Did the bot give the same response more than once, or did the customer have to repeat their question?

• Did the customer paraphrase their request, using different words to solicit a different answer? (Think back to the times you’ve asked to speak to an agent, only to rephrase it as “a human.”)

• Did the request escalate to profanity (oops, I think I’ve been guilty of that!) or other signs of frustration?

• Were there multiple requests to escalate that weren’t resolved?

• Did the customer abandon the conversation before reaching the end-point?

• Using an AI-based sentiment model, were we able to detect negative sentiment?

• Or was there explicit negative feedback, with the customer expressly indicating their dissatisfaction?
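As a rough illustration of how signals like these can be extracted automatically, here is a minimal Python sketch. The transcript format, the placeholder word lists, and the heuristics are all illustrative assumptions for this post, not any vendor's actual detection logic:

```python
import re

# Placeholder lists for illustration only; a production system would use
# far richer lexicons and trained models.
PROFANITY = {"damn", "hell"}
ESCALATION = re.compile(r"\b(agent|human|representative|person)\b", re.I)

def detect_signals(turns):
    """Flag simple frustration signals in one conversation.

    `turns` is a list of (speaker, text) tuples, speaker being
    "bot" or "user" -- a hypothetical transcript format.
    """
    bot_msgs = [t for s, t in turns if s == "bot"]
    user_msgs = [t.lower() for s, t in turns if s == "user"]
    return {
        # Bot sent the exact same response more than once
        "bot_repetition": len(bot_msgs) != len(set(bot_msgs)),
        # Customer repeated the same message verbatim
        "user_repetition": len(user_msgs) != len(set(user_msgs)),
        # Any word from the placeholder profanity list
        "profanity": any(w in PROFANITY for m in user_msgs for w in m.split()),
        # Two or more requests to reach a human agent
        "repeated_escalation": sum(bool(ESCALATION.search(m)) for m in user_msgs) >= 2,
    }

convo = [
    ("user", "I want to change my plan"),
    ("bot", "I can help with billing questions."),
    ("user", "Let me speak to an agent"),
    ("bot", "I can help with billing questions."),
    ("user", "I need a human!"),
]
print(detect_signals(convo))
```

Real systems add the harder signals from the list above, such as paraphrase detection and model-based sentiment, but even verbatim checks like these surface conversations a survey would never flag.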

Chatbot analytics can automatically extract signals that paint the full picture of bot experience. It can analyze every single conversation and produce a score that summarizes bot or customer repetition, profanity, sentiment, abandonment, feedback, and repeated requests for agent escalation.
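To make the idea of a per-conversation score concrete, here is a minimal sketch that rolls boolean frustration signals into a single 0–100 number and averages it across conversations. The signal names and penalty weights are invented for illustration; this is not the actual Bot Experience Score formula:

```python
# Hypothetical penalty weights -- illustrative values only.
WEIGHTS = {
    "bot_repetition": 15,
    "user_repetition": 10,
    "profanity": 25,
    "repeated_escalation": 20,
    "abandonment": 20,
    "negative_sentiment": 10,
}

def experience_score(signals):
    """Start at 100 and subtract a penalty for each frustration signal seen."""
    penalty = sum(WEIGHTS[name] for name, seen in signals.items() if seen)
    return max(0, 100 - penalty)

def fleet_score(conversations):
    """Average the per-conversation scores across every analyzed conversation."""
    scores = [experience_score(s) for s in conversations]
    return sum(scores) / len(scores)

good = dict.fromkeys(WEIGHTS, False)                       # clean conversation
bad = {**good, "profanity": True, "repeated_escalation": True}
print(fleet_score([good, bad]))  # prints 77.5
```

Because the score is computed for every conversation rather than the small fraction of customers who answer a survey, trends in the average are far less skewed toward complainers.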

A data-driven approach to measuring bot experience

Despite knowing that they are dealing with a virtual agent, customers nonetheless have high expectations. Knowing how they feel about their interactions goes a long way to building trust, positive experiences, and repeat engagement.

At Wysdom, our chatbot analytics software analyzes every conversation to produce the Bot Experience Score, or BES. With the BES, product owners, CX leaders and other executives get an unbiased assessment of the customer experience, without having to rely on biased surveys or anecdotal information. Setting goals and measuring BES over time will help you evaluate how well Conversational AI is contributing to your digital customer experience. You can learn more about BES here.
