As we hear of breakthrough innovation in AI, from machine reading comprehension to fake celebrity faces generated by GANs (Generative Adversarial Networks), it is easy to forget that we are still in the very early days of AI adoption, especially in the enterprise space.

And as with every breakthrough in innovation, AI has its fair share of teething problems, one of which is the lack of collaboration. Most data scientists still begin building models from scratch, and data curation and wrangling consume a whopping 80 percent of their time.

It is normal for neural networks (deep learning models) to take days, or sometimes even weeks, to complete a single training run. Simply optimizing a model may entail tuning its hyperparameters and rerunning the entire training process. This is like baking bread: you bake the first loaf (model training), sample a few bites (model testing), tweak the ingredients (hyperparameter tuning), then shove new dough back into the oven, and repeat over and over until you get the perfect loaf.
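To make that loop concrete, here is a minimal Python sketch of the bake-taste-tweak cycle; train_model and evaluate are hypothetical stand-ins for a real (hours-long) training run and a validation pass:

```python
import random

def train_model(learning_rate):
    """Stand-in for an hours- or days-long training run."""
    return {"learning_rate": learning_rate}

def evaluate(model):
    """Stand-in for scoring the model on a held-out test set."""
    return random.random()  # pretend validation accuracy

best_score, best_lr = float("-inf"), None
for lr in [1e-2, 1e-3, 1e-4]:     # tweak one "ingredient": the learning rate
    model = train_model(lr)       # bake a loaf (model training)
    score = evaluate(model)       # sample a few bites (model testing)
    if score > best_score:        # keep the recipe for the best loaf so far
        best_score, best_lr = score, lr

print(f"best learning rate: {best_lr} (score {best_score:.3f})")
```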

It is expensive and time consuming, to say nothing of the sizeable compute power needed to accomplish the task.

The Model Menace

While the industry is converging on Python as the de facto programming language for machine learning, programmers are spoiled for choice when it comes to libraries. However, "more is less," as Barry Schwartz postulated in his book The Paradox of Choice.

The vast majority of programming frameworks are open source, and some of the early frameworks are either losing steam or will simply be usurped. Your framework of choice may soon cease to exist. A safe bet in this regard is to pick a framework backed by a well-known brand name (no endorsements here).

The prized jewel, the model, is nothing more than a collection of harmless-looking files. However, thanks to the abundance of choice mentioned above, these models are NOT interoperable between programming frameworks. Should you choose to build in TensorFlow, the model is saved as checkpoint files ('meta', 'index' and 'data' files), and only TensorFlow applications can load them. Should you choose Caffe2 tomorrow, the previously built TensorFlow models are rendered useless.
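To see what this lock-in looks like in practice, here is a minimal sketch using the TensorFlow 1.x checkpoint API; the files it writes are in a TensorFlow-only format that no other framework can load:

```python
import tensorflow as tf

# A trivial stand-in "model": a single named variable
weights = tf.Variable([1.0, 2.0], name="weights")
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Writes model.ckpt.meta, model.ckpt.index and
    # model.ckpt.data-00000-of-00001 -- readable only by TensorFlow
    saver.save(sess, "./model.ckpt")
```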

The same can be said of the myriad cognitive services provided by the AI giants. You cannot port your models between Google DialogFlow, Microsoft LUIS, and IBM Watson. Should you build your AI application around DialogFlow, for example, the value of your AI is trapped inside DialogFlow; porting to Microsoft LUIS entails exporting your training data and retraining on LUIS from scratch.

One could compare this situation to the early days of computer programming, before shared libraries. In the absence of libraries, every trivial task, from reading a file to printing output, was incredibly painful and time consuming. Libraries have significantly accelerated the speed at which new applications are rolled out.

“Alone we can do so little; together we can do so much” – Helen Keller

As enterprises seek to double down on AI as a key pillar of their transformation journeys, collaboration and reuse will be key to avoiding reinvention of the wheel and to porting AI assets between applications and organizational silos.

These AI assets span everything: from training data that has been cleansed and normalized, to models that have been genericized so they can be rapidly localized to a given industry, enterprise, and use case. In effect, once the barriers surrounding AI reusability are addressed, AI will become the new vehicle of collaboration between organizations small and large, without data ever having to change hands.

For a practical example, consider a facial recognition model that can readily spot human faces but cannot pick out faces in helmets. Rather than training a model from scratch to recognize faces in helmets, the existing facial recognition model can be repurposed and localized, a technique known as transfer learning, to recognize faces in helmets at a fraction of the compute resources and time.
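To illustrate, here is a minimal transfer-learning sketch in Keras. It assumes an ImageNet-pretrained backbone as a stand-in for the original face model, and the commented-out training data (helmet_images, labels) is hypothetical:

```python
from tensorflow import keras

# Pretrained backbone (assumption: an ImageNet-trained network stands in
# for the original face-recognition model)
base = keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, pooling="avg")
base.trainable = False  # freeze the expensively learned feature layers

# Attach a small new head; only this part gets trained on helmet images
model = keras.Sequential([
    base,
    keras.layers.Dense(1, activation="sigmoid"),  # face-in-helmet vs. not
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(helmet_images, labels, epochs=5)  # hypothetical dataset
```

Because only the small new head is trained, this typically converges in minutes or hours rather than days.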

Sunlight on the horizon

Here's the great news: the industry is slowly but surely formalizing avenues that enable such reuse and facilitate collaboration. Needless to say, terms like collaboration and reuse raise the elephant in the room: security.

What if someone out there is able to reverse engineer the training data, simply by observing the model's outputs over repeated queries? And should the model itself be compromised, is it possible to infer the training data from the model's weights? The training data, after all, may represent actual customer data that the model observed.

Part 2 of this series sheds light on the emerging movements around AI collaboration, along with the security aspects that go hand in hand with them. Stay tuned.

Contact Us

Our cognitive care solution has helped organizations deliver the best experience to their customers. Find out how Wysdom can help you.