
Some playtime with a “state of the art” linguistic tool led me to a startling realization – while everyone is busy making hay while the sun shines and selling their freshly baked AI tools and frameworks, few, if any, realize that AI is anything but DIY IKEA furniture! Before I go any further, let me explain what happened.

Language, in its infinite beauty, is incredibly nuanced. “My car has four seats” – in this phrase, “seats” is a noun. “My car seats four” – identical in meaning to the previous phrase, but “seats” is a verb.
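To make the ambiguity concrete, here is a quick way to run the two phrases through an off-the-shelf part-of-speech tagger. This is only an illustrative sketch using NLTK – not the tool I was playing with – and the tags you get back depend entirely on what the tagger was trained on.

```python
# Illustrative sketch only: NLTK's stock tagger, not the tool from this post.
import nltk

# Tagger resources (names vary slightly across NLTK versions; newer releases
# use "punkt_tab" and "averaged_perceptron_tagger_eng").
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

for sentence in ["My car has four seats", "My car seats four"]:
    tokens = nltk.word_tokenize(sentence)
    print(sentence, "->", nltk.pos_tag(tokens))

# A well-trained tagger should mark "seats" as NNS (plural noun) in the first
# sentence and VBZ (present-tense verb) in the second; a poorly trained one
# may call it a noun in both, which is exactly the failure described in this post.
```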

The “state of the art” linguistic tool, however, reckoned that “seats” is a noun regardless of the phrase. The grammar police in me quickly realized the risk of canned, black-box AI tools:

You have little control over how these tools are trained

While most commercial cognitive services have done wonders for accelerating AI adoption in everyday applications, they seldom, if ever, allow you to change the model’s training parameters. Anyone familiar with machine learning will quickly realize that training a Naive Bayes classifier versus a maximum entropy model (or a decision tree) can produce very different models and results.
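As a rough sketch of why that matters (scikit-learn is assumed here purely for illustration), the snippet below trains the same toy intent data two ways – with a Naive Bayes classifier and with a maximum entropy (logistic regression) model. The two typically assign different probabilities to a new query and can even disagree on the label, and a canned service never lets you see or make that choice.

```python
# Sketch only (assumes scikit-learn): same toy data, two different algorithms.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB

texts = ["book a flight", "cancel my flight", "book a hotel",
         "cancel my booking", "what is the weather", "weather forecast please"]
labels = ["travel", "travel", "travel", "travel", "weather", "weather"]

vectorizer = CountVectorizer().fit(texts)
features = vectorizer.transform(texts)

naive_bayes = MultinomialNB().fit(features, labels)
max_entropy = LogisticRegression(max_iter=1000).fit(features, labels)

query = vectorizer.transform(["weather for my flight"])
print("Naive Bayes:", naive_bayes.predict(query), naive_bayes.predict_proba(query))
print("Max entropy:", max_entropy.predict(query), max_entropy.predict_proba(query))
# The two models weigh the evidence differently, so their confidence scores
# (and sometimes their answers) differ -- a training choice you never control
# with a black-box cognitive service.
```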

You have little control over what these tools are trained on

Some cognitive services allow you to train their models on your data. Some, however, don’t! A major risk of using pre-trained models is that you have no clue what they were trained on.

Let’s take a Halloween example. Say 1 in 100 humans is a zombie and your model should tell zombies from humans. If you train the model on 99 humans and 1 zombie, guess what – your model will, with very high likelihood, classify a zombie as a human, and woe shall befall us. You might as well have blindly guessed “human” every time: you’d be right 99 percent of the time and still never catch a single zombie.
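To put numbers on that (again, scikit-learn is assumed purely for illustration), here is a sketch of a “model” that simply predicts the majority class on a 99-to-1 dataset: it scores roughly 99 percent accuracy while catching exactly zero zombies.

```python
# Sketch of the zombie problem (assumes scikit-learn and NumPy).
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))                 # made-up features
y = (rng.random(10_000) < 0.01).astype(int)      # ~1% zombies (label 1)

# A "model" that always answers with the majority class: human.
always_human = DummyClassifier(strategy="most_frequent").fit(X, y)
predictions = always_human.predict(X)

print("accuracy:     ", accuracy_score(y, predictions))   # ~0.99 -- looks great
print("zombie recall:", recall_score(y, predictions))     # 0.0 -- catches none
```

High accuracy on skewed data tells you very little; what matters is whether the model ever finds the rare class it was built to find.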

My suspicion is that the linguistic model was exposed to skewed data, i.e. a majority of its training samples used “seats” in a noun context, which biased the model’s output. If your data is skewed, your model itself could be rendered useless.

Cognitive Platform Components

Which brings me back to my earlier point. Operating AI isn’t plug and chug. Should you do that, you’ll be chugging someone else’s drink – and beware, it could be spiked. AI supervision is about owning your data and having the right resources to constantly train your models on the right data. Identifying what’s right takes experience and expertise and is sometimes art rather than science (I’ll qualify that in another blog).

At Wysdom.AI, we believe in a cognitive platform approach – a triad of Wysdom’s cutting-edge cognitive products, surgically augmented DIY cognitive services, and the all-important AI supervision. A platform approach is vital for AI to truly bring value to your business processes and, in turn, have a positive impact on your customers.

Through our years in the industry, we’ve seen enterprises large and small struggle and succeed. Success is often the result of meticulous planning and adopting a platform approach to AI.

So before bolting out of the gate to deploy DIY cognitive services, always remember – failure to train is training to fail.
