AI Talk: Models, models and models

May 22, 2020 / By V. “Juggy” Jagannathan, PhD

This week, I am focusing on AI models. A colleague pointed me to an article that suggests AI models are faring poorly. Then I saw three more articles with the same theme. So, here you go—hope you find this instructive!

AI models go haywire

Who could have predicted the run on toilet paper? How about golf pushcarts? Answer: no one. Could an AI system trained on past buying behavior have predicted the shortages? Obviously not. This is a problem well understood in the AI community: a model is only as good as the data it is trained on. If reality deviates from what the model saw during training, all bets are off as to what it will predict, and our current reality is very different from what it was a few months ago. The AI models that many companies rely on to keep their shelves stocked have gone completely haywire and become practically useless. Models need continuous monitoring and adjustment in the best of times; in the midst of the pandemic, buying behavior is thoroughly skewed. Companies that have been relying on automated reordering systems and other predictive algorithms have been rudely awakened to this reality. They all need a data science team to babysit their models and tweak them, or replace them wholesale.
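To make the monitoring point concrete, here is a minimal sketch of one way a data science team might watch for this kind of shift: compare recent demand for an item against the demand the model was trained on using a two-sample statistical test, and flag the model for review when the two diverge. The feature, numbers and threshold below are invented for illustration, and the sketch assumes NumPy and SciPy are available.

```python
# Minimal drift-monitoring sketch: compare recent demand against the demand
# distribution the model was trained on, and flag the model when they diverge.
import numpy as np
from scipy.stats import ks_2samp

def demand_drifted(training_demand, recent_demand, alpha=0.01):
    """Return True if recent demand looks unlike the training data.

    Uses a two-sample Kolmogorov-Smirnov test; a small p-value means the two
    samples are unlikely to come from the same distribution.
    """
    statistic, p_value = ks_2samp(training_demand, recent_demand)
    return p_value < alpha

# Hypothetical numbers: demand was stable during training, then spiked.
training_demand = np.random.normal(loc=100, scale=10, size=365)  # pre-pandemic weeks
recent_demand = np.random.normal(loc=400, scale=120, size=30)    # the run on toilet paper

if demand_drifted(training_demand, recent_demand):
    print("Demand distribution has shifted: retrain the model or fall back to manual ordering.")
```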

Is training AI models like teaching kids?

This recent MIT Technology Review article highlighted an effort by Carnegie Mellon University (CMU) researchers. Their machine learning system attempts to mimic how kids learn. So, what does that mean? Essentially, when teaching kids how to recognize objects, you don’t attempt to teach them the thousands of detailed categories. You start with broad general terms, such as dogs, cats, fruits, etc., and then later dive into the details of classification. The CMU researchers used the same approach, breaking the space of objects into hierarchical clusters and using training data to systematically learn to recognize objects at one level before transitioning to a deeper level. Their results appear promising.
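The article does not spell out the CMU training procedure, but the coarse-to-fine idea can be sketched in a few lines: train a shared feature extractor on broad labels first, then reuse it to learn the fine-grained labels. The tiny label hierarchy, network and data below are hypothetical stand-ins, and the sketch assumes PyTorch.

```python
# Coarse-to-fine training sketch (not the CMU system itself): learn broad
# categories first, then reuse the shared backbone for fine-grained labels.
import torch
import torch.nn as nn

COARSE_OF = {"beagle": "dog", "poodle": "dog", "tabby": "cat", "siamese": "cat"}
COARSE_CLASSES = sorted(set(COARSE_OF.values()))  # ["cat", "dog"]
FINE_CLASSES = sorted(COARSE_OF)                  # ["beagle", "poodle", "siamese", "tabby"]

backbone = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 128), nn.ReLU())
coarse_head = nn.Linear(128, len(COARSE_CLASSES))
fine_head = nn.Linear(128, len(FINE_CLASSES))
loss_fn = nn.CrossEntropyLoss()

def train_stage(head, labels, images, epochs, lr=1e-3):
    """Train the shared backbone plus one classification head."""
    params = list(backbone.parameters()) + list(head.parameters())
    optimizer = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(head(backbone(images)), labels)
        loss.backward()
        optimizer.step()

# Toy random data standing in for real images and labels.
images = torch.randn(64, 3, 32, 32)
fine_labels = torch.randint(len(FINE_CLASSES), (64,))
coarse_labels = torch.tensor(
    [COARSE_CLASSES.index(COARSE_OF[FINE_CLASSES[i]]) for i in fine_labels.tolist()]
)

# Stage 1: broad categories ("dog" vs. "cat"). Stage 2: the finer breeds.
train_stage(coarse_head, coarse_labels, images, epochs=5)
train_stage(fine_head, fine_labels, images, epochs=5)
```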

Adversarial testing

I read a recent science blog that had an interesting take on AI models. A few years ago, multiple AI systems, including one from Microsoft and another from Alibaba, scored highly on Stanford's reading comprehension benchmark. However, when researchers added a few nonsensical sentences to the passages, humans ignored the red herrings while the machines got flummoxed, underscoring the behavior highlighted in the first section of this blog: if reality differs from the data used to train a model, performance suffers.

Professor Potts of the Stanford Linguistics department has interesting views on how to fix this. He believes we are too easy on AI systems when testing them: most evaluations simply measure how well a model does on data drawn from a distribution similar to its training set. Potts suggests incorporating adversarial testing into work on reading comprehension: test the system on data that is deliberately misleading, ungrammatical or nonsensical. His overall message is that AI systems still have a long way to go before they can generalize, be self-aware (know what they know and don't know) and ask for more information when there is not enough to predict or decide. Those, of course, are the themes of developing Artificial General Intelligence, and we are quite a way from achieving that! Perhaps we can make incremental progress by making our models less brittle and more robust.
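As a concrete illustration of this kind of adversarial check (not Potts' own protocol), the sketch below asks the same question over a clean passage and over the passage with a misleading sentence appended, then compares the answers. It assumes the Hugging Face transformers library and its default question-answering pipeline; the passage, question and distractor are invented.

```python
# Adversarial-testing sketch for reading comprehension: compare a model's
# answer on a clean passage with its answer after a red-herring sentence
# is appended. The text below is made up for illustration.
from transformers import pipeline

qa = pipeline("question-answering")  # downloads a default SQuAD-tuned model

passage = (
    "The clinic ordered 500 masks in January and restocked weekly. "
    "By March, weekly orders had grown to 4,000 masks."
)
distractor = "In 1987, a fictional clinic in a novel ordered 9,999 masks."
question = "How many masks did the clinic order weekly by March?"

clean = qa(question=question, context=passage)
adversarial = qa(question=question, context=passage + " " + distractor)

print("clean answer:      ", clean["answer"], f"(score {clean['score']:.2f})")
print("adversarial answer:", adversarial["answer"], f"(score {adversarial['score']:.2f})")
# A robust model keeps the same answer; a brittle one gets pulled toward 9,999.
```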

Acknowledgement

My colleague, Dan Walker, pointed me to the MIT Technology Review article on AI models.

I am always looking for feedback, and if you would like me to cover a story, please let me know. “See something, say something!” Leave me a comment below or ask a question on my blogger profile page.

V. “Juggy” Jagannathan, PhD, is Director of Research for 3M M*Modal and is an AI Evangelist with four decades of experience in AI and Computer Science research.