AI Talk: Augmenting decisions and the impact of “adversarial” data

May 17, 2019 / By V. “Juggy” Jagannathan, PhD

This week’s AI Talk…

AI in Healthcare – Augment not replace

This article in Forbes echoes a now-familiar refrain: AI use cases in health care are all about augmenting human decision making. We heard a similar argument from Eric Topol in his recent book, Deep Medicine. The author of the Forbes article, the CEO of Ambra Health, a company that markets image diagnostic tools, makes the same case. AI in health care is seeing a significant rise in investments—take a look at the trendline in this CBInsights report. Almost every aspect of health care is impacted by these investments! But the overall messaging is clear: AI’s role is to augment, not replace.

Unremarkable AI

This is work being done in my backyard—CMU! Researchers are striving to make the role of AI in clinical decision making unobtrusive. “The idea is that AI should be unremarkable in the sense that you don’t have to think about it and it doesn’t get in the way,” Zimmerman said. “Electricity is completely unremarkable until you don’t have it.” Indeed. These days, many of us use a navigation app to get from point A to point B. How many are “aware” that they are using it? GPS navigation has become so pervasive that one does not perceive it as intelligent assistance. The goal for AI in clinical decision making is the same. The key is to insert AI assistance at just the right spot in the overall workflow, where it can shed light on key issues and augment the knowledge needed to make decisions.

Adversarial audio

I must admit, I had not heard the term “adversarial audio”; however, I do remember seeing this New York Times article a year ago about how you can trick your favorite electronic personal assistant with subliminal messages inaudible to the human ear! But have no fear. An antidote for this type of attack on your personal assistant is now available. Researchers at the University of Illinois at Urbana-Champaign have figured out a way to detect subliminal messages embedded in audio and flag them. Presumably Amazon and Google already have some countermeasures for such attacks!

Adversarial examples

Purely by coincidence, I ran across this news item on the MIT website about how MIT researchers have developed a new technique for evaluating the robustness of machine learning algorithms. Say you have a Convolutional Neural Network (CNN) that classifies images as cats, dogs or other animals. How do you know whether this neural network will classify a new cat image correctly? The researchers developed a way to perturb input images slightly—the perturbed versions are called “adversarial examples.” If the classification remains correct under such perturbations, the CNN is robust; but if they find an example that triggers a misclassification, they know the network needs more training!
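
To make the idea concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM), one well-known way to generate adversarial examples. To be clear, this is not the MIT researchers’ evaluation technique itself, just the general flavor of perturbing an input to probe a classifier; the PyTorch setup, the `model` classifier, and the `fgsm_example` helper are all assumptions for illustration.

```python
# Minimal FGSM sketch: nudge each pixel slightly in the direction that
# increases the classifier's loss, then see if the prediction flips.
# Assumes a pretrained PyTorch classifier `model`, an input batch
# `image` with pixel values in [0, 1], and integer class labels `label`.
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.01):
    """Return a copy of `image` perturbed by +/- epsilon per pixel."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step along the sign of the gradient, then keep pixels in range.
    adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)
    return adversarial.detach()

# If model(adversarial).argmax(dim=1) differs from `label`, we have
# found an adversarial example -- a sign the network needs more training.
```

The striking part is how small epsilon can be: a perturbation imperceptible to a human can be enough to flip the classification, which is exactly why robustness evaluation matters.
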

Acknowledgement:

My colleague, Anna Abovyan, pointed me to the blog post on the CMU Human-Computer Interaction Institute site regarding “unremarkable AI.”

I am always looking for feedback and if you would like me to cover a story, please let me know. “See something, say something!” Leave me a comment below or ask a question on my blogger profile page.

V. “Juggy” Jagannathan, PhD, is Vice President of Research for M*Modal, with four decades of experience in AI and Computer Science research.