AI Supervision is a severely underrepresented ingredient in operationalizing AI. It is akin to a recipe that delivers all the sizzle and spice but forgets to mention the salt.

Sheepdogs or mops?

Let’s say you run a pet store and your AI can recognize sheepdogs. You use this to target customers and market the cutest of the lot to them.

But wait – your AI now starts classifying mops as sheepdogs! How do you fix that? How many of your customers were mopped in the process? And what was at fault – the inherent bias carried by all Machine Learning? Or your carelessly gathered training data that induces bias into your models?
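
This is where routine supervision pays off: scoring a held-out set the model never trained on and reading the confusion matrix will surface the mop-as-sheepdog problem long before your customers do. A minimal sketch, assuming scikit-learn; the labels and predictions below are illustrative placeholders, not real pet-store data:

```python
from sklearn.metrics import confusion_matrix, classification_report

# Hypothetical held-out labels vs. model predictions (placeholders).
y_true = ["sheepdog", "sheepdog", "mop", "mop", "mop", "sheepdog"]
y_pred = ["sheepdog", "sheepdog", "sheepdog", "mop", "sheepdog", "sheepdog"]

labels = ["sheepdog", "mop"]
# The off-diagonal cell shows how many mops are being sold as sheepdogs.
print(confusion_matrix(y_true, y_pred, labels=labels))
print(classification_report(y_true, y_pred, labels=labels))
```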

Raising the kid named AI

Training an AI system is much like raising a child. Children learn from what you expose them to. They ingest the causes (inputs) and effects (outputs) of what you teach, and in some cases use their own intellect to infer things you don’t explicitly state – much like how neural networks inexplicably engineer features from the given data.

Children, and in fact even babies, can readily identify patterns such as colour and form (clustering). It is we who give colours a name; if you tell a child the colour red is actually called blue, they will readily accept it and repeat after you. On an ongoing basis, we correct a child’s behaviour and help them identify right from wrong (supervised learning).
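
To ground the analogy, a clustering algorithm happily groups colours without knowing their names, while a supervised learner repeats whatever names we attach, right or wrong. A minimal sketch, assuming scikit-learn and toy RGB values:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

# Toy RGB samples: two reds and two blues (values are illustrative).
X = np.array([[250, 10, 10], [240, 20, 30], [10, 10, 250], [30, 20, 240]])

# Unsupervised: the model finds two groups but has no idea what to call them.
print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X))

# Supervised: it faithfully repeats the names it was taught, even swapped ones.
wrong_labels = ["blue", "blue", "red", "red"]   # red deliberately called "blue"
clf = KNeighborsClassifier(n_neighbors=1).fit(X, wrong_labels)
print(clf.predict([[245, 15, 20]]))             # -> ['blue']
```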

As you sow, so shall you reap

“Garbage in, garbage out” holds very true in the world of Machine Learning, and so do prejudices. In fact, many of us will readily recall how Microsoft’s Tay turned abusive, thanks to the mentors who taught it so. Recent applications of AI to court sentencing and insurance approvals have shown social biases simply because AI learns from the data humans feed it, reflecting their own biases in turn.

Focussing on the business

We often get stuck on the coolness factor and needlessly debate neural net architectures without crystallizing the problem or opportunity at hand. Technology is here to serve and elevate organizations to unparalleled levels of automation and cognition. It is vital that we clearly define success metrics; esoteric metrics such as AUC (area under the curve), ROC (receiver operating characteristic), precision and F1 scores should translate into business metrics that reflect customer behaviour and sentiment.
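
The mechanical half of that translation is straightforward; the business half only the organization can supply. A rough sketch, assuming scikit-learn, with placeholder labels, scores and threshold:

```python
from sklearn.metrics import roc_auc_score, precision_score, f1_score

# Placeholder outputs from a binary "will this customer respond?" model.
y_true  = [1, 0, 1, 1, 0, 0, 1, 0]                   # what actually happened
y_score = [0.9, 0.2, 0.7, 0.4, 0.6, 0.1, 0.8, 0.3]   # model confidence
y_pred  = [1 if s >= 0.5 else 0 for s in y_score]    # decision threshold

print("AUC:      ", roc_auc_score(y_true, y_score))
print("Precision:", precision_score(y_true, y_pred))
print("F1:       ", f1_score(y_true, y_pred))

# The supervision step: restate precision as a business quantity,
# e.g. "of every 100 customers we contact, how many were the right ones?"
campaign_size = 100
print("Well-targeted customers per 100 contacted:",
      round(precision_score(y_true, y_pred) * campaign_size))
```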

Sudden changes (other than organic growth) in your Machine Learning metrics are often indicative of unforeseen changes to your underlying business processes or of new patterns emerging in the data. The root cause of such anomalies should be identified and rectified immediately.
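
The simplest version of that watchdog fits in a few lines: track a metric over time and flag any point that strays several standard deviations from its recent history. The window size, threshold and weekly F1 values below are illustrative assumptions:

```python
import numpy as np

def flag_sudden_changes(metric_history, window=8, z_threshold=3.0):
    """Flag points that deviate sharply from the trailing window's mean."""
    alerts = []
    for i in range(window, len(metric_history)):
        recent = np.array(metric_history[i - window:i])
        mu, sigma = recent.mean(), recent.std()
        if sigma > 0 and abs(metric_history[i] - mu) > z_threshold * sigma:
            alerts.append(i)
    return alerts

# Weekly F1 scores; the drop at the end mimics a broken upstream data feed.
weekly_f1 = [0.82, 0.81, 0.83, 0.82, 0.80, 0.82, 0.81, 0.83, 0.82, 0.55]
print(flag_sudden_changes(weekly_f1))  # -> [9]
```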

The “gold rush” for Managed Services

Machine Learning is remarkably similar to how humans learn. Choosing the right training model, eliminating skewness and biases in the data, retraining models and, most importantly, tying system performance to business metrics are vital functions for successful AI.
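
As one concrete example of taming skewed data, most toolkits let you re-weight the rare class during training. A minimal sketch, assuming scikit-learn and a synthetic, heavily imbalanced dataset:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

# Toy dataset in which only ~5% of examples belong to the class we care about.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)

# Unweighted model vs. one that re-weights the rare class during training.
plain = LogisticRegression(max_iter=1000).fit(X, y)
balanced = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)

# Recall on the rare class shows how much of it each model actually catches.
print("plain   :", recall_score(y, plain.predict(X)))
print("balanced:", recall_score(y, balanced.predict(X)))
```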

The slew of DIY cognitive services and toolsets available today, and their ease of use, means that organizations are adopting them rapidly. But as we all know, “well begun is half done”. A “human in the loop” is a must-have for the deployment of most AI applications.
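
In practice, the “human in the loop” often starts as a simple confidence gate: the model handles what it is sure about and escalates the rest for review. A minimal sketch; the threshold and routing labels are assumptions, not any particular product’s API:

```python
def route_prediction(label, confidence, threshold=0.80):
    """Auto-apply confident predictions; queue the rest for a human reviewer."""
    if confidence >= threshold:
        return {"action": "auto", "label": label}
    return {"action": "human_review", "label": label}

print(route_prediction("sheepdog", 0.95))  # handled automatically
print(route_prediction("sheepdog", 0.51))  # escalated to a person
```

Everything the gate escalates becomes freshly labelled data for the next retraining cycle, which is exactly the supervision loop this piece argues for.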

Of course, the illusion being sold is that AI magically does it all.

 

Contact Us

Our cognitive care solution has helped organizations deliver the best possible experience to their customers. Find out how Wysdom can help you.