AI Talk: Rebooting AI, fake documentation, predictive text

Oct. 11, 2019 / By V. “Juggy” Jagannathan, PhD

This week’s AI Talk…

Rebooting AI: Building Artificial Intelligence We Can Trust – Gary Marcus & Ernest Davis

The emperor has no clothes: at least, that is the allegorical message of this book. Its core idea is that deep learning techniques will not lead us to the higher goal of Artificial General Intelligence (AGI), the ability of machines to display human-level intelligence in the performance of any task. On this, I have to agree with the authors. The argument is that deep learning networks, with their emphasis on lots and lots of training data, work nothing like human brains, which can learn a variety of concepts from very little data. Models trained by deep learning solutions are brittle and cannot respond to stimuli not seen in the training data. As examples, the authors cite some of the spectacular accidents suffered by self-driving cars when confronted with unexpected situations. Deep learning has a lot of commercially useful applications, and speech recognition is a spectacular success, but it is just one of the techniques in the AI toolbox. Rule-based systems, semantic knowledge networks and traditional AI solutions are just as useful in some contexts. The type of learning that humans do, however, has yet to be mimicked by computer systems, and until it is, dreams of AGI are just that: dreams. Current AI systems really have no common sense and no real understanding of what they have learned! AI research teams need to get off the deep learning bandwagon and strike out in a fundamentally new direction to determine how learning really happens. AI research needs a reboot.

EHR Documentation and Reality

A recent study reported in JAMA reviewed 180 patient encounters in the emergency room and found that concordance between what actually transpired between doctor and patient (as captured in an audio/visual recording of the encounter) and the EHR record was pathetic. Only 38.5 percent of review-of-systems entries and 53.2 percent of physical exam findings were corroborated by the audio/visual recording. The findings suggest aggressive use of templates and/or reliance on copy and paste by physicians working under duress. The number of physicians studied is small, though, so the results can't be generalized from a cohort of this size. Still, it does suggest that the scribe solutions now popular among physicians could be one way to combat the kind of problems highlighted in this study.

Predictive Text

In this week’s edition of The New Yorker, author John Seabrook explores the rise of predictive text. Predicting what text comes next is a product of advances in language modeling, one area of deep learning. Using language models, the Smart Compose feature in Gmail finishes words and sentences in users’ emails. In the article, Seabrook explores the language generation abilities of GPT-2, a language model released by OpenAI. The prose this model generates is superficially fluent and quite impressive, but, as the author notes, it often veers into the nonsensical, which underscores how much machines still lack common sense. Nevertheless, the generated text represents a significant advance over the previous state of the art. You can experiment with this technology here.
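If you want to try this yourself beyond the web demo, here is a minimal sketch of GPT-2 next-word prediction, assuming the open-source Hugging Face transformers library (pip install transformers torch) and the publicly released small GPT-2 model; the prompt is my own, purely for illustration.

```python
# A minimal sketch: load the publicly released GPT-2 model and let it
# continue a prompt, assuming the Hugging Face transformers library.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Encode an illustrative prompt; the model predicts what comes next,
# one token at a time, based on everything it has seen so far.
prompt = "The doctor reviewed the patient's chart and"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sampling (rather than greedy decoding) produces the fluent but
# sometimes meandering prose the article describes.
output = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Run it a few times and you see exactly the behavior Seabrook describes: each continuation is grammatical and locally plausible, yet the passage soon drifts away from any coherent meaning.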

Acknowledgement

The EHR study research report was pointed out to me by my friend, Juergen Fritsch.

I am always looking for feedback, and if you would like me to cover a story, please let me know. “See something, say something!” Leave me a comment below or ask a question on my blogger profile page.

V. “Juggy” Jagannathan, PhD, is Director of Research for 3M M*Modal and an AI evangelist with four decades of experience in AI and computer science research.