How often have we all used the terms Artificial Intelligence (AI) and Machine Learning (ML) interchangeably? And is there anyone today who does not offer an “AI and ML and cloud and digital” solution that promises to solve every problem you have, and presciently evolve to meet every foreseeable need, even before it arises?
A quick glance at Google Trends shows more than a four-fold jump in searches for “machine learning” over the last three years. The phrase is trending at an all-time high!
First, let’s take a moment to delineate what exactly Machine Learning is. Machine Learning sits at the intersection of Computer Science and Statistics. In other words, you’re writing computer programs that operate on data, using dedicated languages such as R, or specialized statistical packages such as SciPy for general-purpose languages such as Python. These packages allow data scientists to explore seemingly mundane data, extract surprising conclusions, and even make predictions.
At the heart of such programs lie one or more statistical constructs called “models”, which are derived from observations over a set of “training” data. For example, when training an image recognition program, the input may be the pixels of a face and the output the emotion that face expresses. After several rounds of training, given a new face, the model can infer the emotion on that face with some degree of confidence.
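To make the train-then-infer loop above concrete, here is a minimal sketch in plain Python. It assumes a toy training set of made-up numeric features (stand-ins for values extracted from face images) with hypothetical emotion labels, and uses a simple nearest-neighbour vote, one of the simplest possible "models", to return a label along with a confidence score:

```python
from collections import Counter
import math

# Toy "training" set: each sample is a pair of made-up numeric features
# (stand-ins for values extracted from a face image) plus an emotion label.
train = [
    ((0.9, 0.1), "happy"), ((0.8, 0.2), "happy"), ((0.7, 0.3), "happy"),
    ((0.1, 0.9), "sad"),   ((0.2, 0.8), "sad"),   ((0.3, 0.7), "sad"),
]

def predict(features, k=3):
    """Infer a label for new features, with a simple confidence score."""
    # Rank training samples by Euclidean distance to the new point.
    nearest = sorted(train, key=lambda s: math.dist(s[0], features))[:k]
    votes = Counter(label for _, label in nearest)
    label, count = votes.most_common(1)[0]
    return label, count / k  # confidence = share of neighbours agreeing

label, confidence = predict((0.85, 0.15))
print(label, confidence)  # → happy 1.0
```

The same shape, train on labelled examples, then infer a label plus a confidence for unseen input, carries over to the far more sophisticated models discussed below.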
The quality of your input training data has a direct impact on the potency of your models, although some nifty math may help reduce noise in the data to a degree. The models may be built from various statistical techniques such as Logistic Regression, Bayesian classifiers, Neural Networks, or a combination of these. They try to best describe the input training data and, in turn, form the basis for predictions on new data.
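Of the techniques named above, Logistic Regression is compact enough to sketch from scratch. This illustrative example, assuming invented one-dimensional data with binary labels, fits a weight and bias by gradient descent so the model “best describes” the training data and can then score new points:

```python
import math

# Toy 1-D training data: feature x with a binary label y.
xs = [0.0, 0.5, 1.0, 1.5, 4.0, 4.5, 5.0, 5.5]
ys = [0,   0,   0,   0,   1,   1,   1,   1]

def sigmoid(z):
    """Squash a raw score into a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

# Fit weight w and bias b by stochastic gradient descent on the log-loss.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    for x, y in zip(xs, ys):
        err = sigmoid(w * x + b) - y  # prediction error for this sample
        w -= lr * err * x             # gradient step for the weight
        b -= lr * err                 # gradient step for the bias

# The fitted model now assigns a probability of class 1 to any new x.
p_high = sigmoid(w * 5.0 + b)  # well inside the class-1 region
p_low = sigmoid(w * 0.5 + b)   # well inside the class-0 region
```

In practice a data scientist would reach for a library implementation rather than hand-rolled gradient descent, but the principle, iteratively nudging parameters to fit the training data, is the same.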
So which models are best suited for the task at hand? Can new features be engineered or discovered from the training data to further improve the accuracy of future predictions? This is exactly where the science becomes art, since there are no set rules! One must apply observation, intuition, rigour, and experience to arrive at the most accurate outcome.
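As a small illustration of feature engineering, consider this hypothetical case: two classes of points that no straight line in the raw plane can separate, until a derived feature makes a single threshold sufficient. The data, feature, and threshold here are all invented for the sketch:

```python
# Toy data: class 1 points lie near the origin, class 0 points farther out.
points = [((0.1, 0.2), 1), ((-0.2, 0.1), 1), ((0.2, -0.1), 1),
          ((1.5, 1.4), 0), ((-1.6, 1.2), 0), ((1.3, -1.5), 0)]

def engineered(p):
    """Derived feature: squared distance from the origin."""
    x, y = p
    return x * x + y * y

# With the engineered feature, a single threshold separates the classes.
threshold = 1.0  # chosen by inspecting the training data
predictions = [1 if engineered(p) < threshold else 0 for p, _ in points]
```

Spotting that squared distance, rather than raw coordinates, is the feature that matters is exactly the kind of judgment call where experience and intuition come in.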
Each of these models abstracts a lot of high-school math, from probability to calculus. And yes, these models need constant fine-tuning: new input data may skew predictions because it represents patterns that were simply never observed before. Or the new data may be plain outliers, in which case the business may choose to ignore it. Unaccounted-for emerging patterns in the atmosphere, for example, skew weather predictions, sometimes by big margins!
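A tiny sketch of how a single outlier can skew a prediction, and how a business might choose to ignore it. The readings below are invented; the point is that a mean-based estimate gets dragged by one bad value while a median barely moves:

```python
import statistics

# Hypothetical daily temperature readings; the final value is a sensor glitch.
readings = [21.0, 22.5, 21.8, 22.1, 21.4, 95.0]

mean_estimate = statistics.mean(readings)      # pulled far upward by the outlier
median_estimate = statistics.median(readings)  # barely affected

# A simple business rule: discard readings far from the median before modelling.
cleaned = [r for r in readings if abs(r - median_estimate) < 10]
```

Real pipelines use far more principled outlier detection, but the trade-off, whether a surprising value is a new pattern to learn from or noise to discard, is the same one described above.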
Cognitive programs process large samples of data on an ongoing basis, constantly inferring patterns that human developers simply cannot recognize or code for by hand. That is exactly what makes these programs so unique: they grow smarter over time!
Three Peas in a Pod
So where does AI fit in? AI spans a much broader scope, including psychology, linguistics, philosophy, neuroscience, robotics and, not surprisingly, mathematics, to name a few. The goal of AI is to observe and mimic human behaviour and, of course, constantly learn from new data and patterns. In fact, ML is a specialized stream of AI, and so is Natural Language Processing (NLP). NLP enables machines to read, understand and respond in natural language, much the way humans do. NLP itself may employ several ML algorithms to understand the phrase at hand.
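To see how NLP can lean on an ML algorithm to understand a phrase, here is a minimal sketch using a classic technique, a bag-of-words Naive Bayes classifier with add-one smoothing, over an invented four-phrase corpus with hypothetical intent labels:

```python
from collections import Counter
import math

# Tiny hypothetical corpus: customer phrase -> intent label.
corpus = [
    ("what is my balance", "billing"),
    ("pay my bill today", "billing"),
    ("my internet is down", "support"),
    ("the connection keeps dropping", "support"),
]

# Count word frequencies per intent (a bag-of-words model).
word_counts = {}
intent_counts = Counter()
for phrase, intent in corpus:
    intent_counts[intent] += 1
    word_counts.setdefault(intent, Counter()).update(phrase.split())

vocab = {w for bag in word_counts.values() for w in bag}

def classify(phrase):
    """Naive Bayes with add-one smoothing: pick the most probable intent."""
    scores = {}
    for intent, bag in word_counts.items():
        total = sum(bag.values())
        # Log prior for the intent, plus log likelihood of each word.
        score = math.log(intent_counts[intent] / len(corpus))
        for word in phrase.split():
            score += math.log((bag[word] + 1) / (total + len(vocab)))
        scores[intent] = score
    return max(scores, key=scores.get)
```

A production NLP stack would add tokenization, normalization, and far richer models, but even this toy classifier maps an unseen phrase such as "pay my balance" to a plausible intent.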
As the lines between humans and machines blur, human-to-machine interaction powered by NLP is an area of active research and ongoing improvement. Of course, it is rife with challenges, since language has theoretically infinite permutations and is always evolving. Albeit slowly (relative to our need for instant gratification), machines are definitely catching up!
Building AI systems takes more than just data scientists. At a very minimum, AI applications today must meet the standards of modern digital applications: web-scale engineering, 24×7 availability, elastic scaling, and seamless software updates.
At Wysdom.AI, our small team of highly talented developers, data scientists, linguists and DevOps engineers unites to deliver a truly turnkey AI platform for cognitive care. With expertise at its center and user experience at its fore, we let users interact in spoken language over the channels of their choice, understand what they really want, and then serve them the best possible solution, be it over self-service, social media or chatbots.
As our team now segues into deploying Deep Neural Networks, we at Wysdom.AI can proudly stake our claim as a cognitive platform that serves a frictionless experience to our users and allows them to interact with systems the same way they would interact with other humans.
Let’s usher in the era of “Designed Intelligence”!