AI Talk: Future of health care and explainable AI

August 2, 2021 / By V. “Juggy” Jagannathan, PhD

Future of health care

I listened with fascination to Dr. Gordon Moore’s interview with Nicholas Webb, CEO of LeaderLogic, last week on the 3M Inside Angle podcast. Webb’s views on health care so resonated with my own views (expressed in my last blog) that I followed Dr. Moore’s advice at the end of the podcast: I read Webb’s book, “The Healthcare Mandate,” and watched the documentary released last month, “The Healthcare Cure.” I will try to boil down the essence of Webb’s views in a few short paragraphs, but I have the same advice for all of you that my colleague gave in his podcast—read the book and watch the movie!

The documentary, jointly developed with primary care physician Dr. Ray Power, is all about the empathetic connection between patients and physicians. It harks back to the time when physicians made house calls and had a personal, caring relationship with their patients. The current state of affairs, in which 50 percent of physicians suffer from burnout, is bemoaned, and the need to reestablish a connection between patients and caregivers is emphasized. The health care cure will only be realized when there is a “big shift.” What does that mean exactly? It refers to a refocus on preventing disease rather than treating preventable conditions. Eighty percent of health care costs, Webb states, are spent on treating preventable diseases—like obesity.

Webb proposes the use of technology to monitor each person’s health around the clock. He uses the term constituents instead of patients, because you are a patient only if you are sick. He defines a new computational platform—The Constituent Healthcare Operating System—which is essentially a population health dashboard accessible to the primary care physician you have entrusted with keeping you healthy, not just treating you when you are sick. This platform is fed with a continuous data stream from wearable and remote monitoring devices, summarized by AI technology to highlight any abnormalities.

Another consistent theme of Webb’s work is an innovation mindset. As an inventor with 40 patents himself, Webb underscores the importance of continuous innovation to improve every aspect of our lives. Hospitals, clinics and the whole health care ecosphere should be focused on innovating to ensure better health. It is also gratifying to note that he mentions the innovation culture that exists at 3M. The closing sentiment of the documentary is quite fitting, alluding to the relationship between physicians and patients: “Can we live in a health care environment that allows us to enjoy the partnership in wellness rather than transactional in sicknesses?”

“Beware explanations from AI in health care”

A newly published paper in Science Magazine addresses explainability in AI for health care. The thesis of the paper, written by authors from the University of Toronto and the law schools at Penn State and Harvard, is that explainable AI in health care is neither reliable nor necessary. What? On its face, the claim “nor necessary” may seem ridiculous, but let’s examine the authors’ reasoning.

First, they draw a distinction between explainable AI and interpretable AI. In interpretable AI, the algorithm is a transparent white box that bases its decision on the identifiable features of the input. So, one can directly determine what features led to the final conclusion. The authors have no issues with this process—other than to note that such methods can be intrinsically less accurate than deep learning models.
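To make the distinction concrete, here is a minimal sketch of what “interpretable” means in this sense: a white-box linear risk score whose prediction is just a sum of per-feature contributions, so the reason for any decision can be read off directly. The features and weights below are purely illustrative, not from any real clinical model or from the paper.

```python
# Illustrative white-box model: a linear risk score. Every point of the
# final score is directly attributable to a named input feature.
WEIGHTS = {"age_over_65": 2.0, "smoker": 1.5, "bmi_over_30": 1.0}

def risk_score(features):
    """Return the total score and the per-feature contributions."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

total, why = risk_score({"age_over_65": 1, "smoker": 1, "bmi_over_30": 0})
# total is 3.5, and `why` shows exactly which features produced it.
```

Nothing extra is needed to explain this model: the explanation is the model. That transparency is what deep learning models give up in exchange for (often) better accuracy.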

In the case of explainable AI, the prediction or decision is made by a deep learning model trained on large amounts of data. This model is essentially a black box. Much explainability research focuses on how to explain the results such black box models produce. The explanation itself is generated by a separate process, typically trained to map the model’s actual prediction back to plausible input variables that could have led to it.

The authors argue that this separate process for post hoc explanations is fraught with issues—and they are right. For one, the accuracy of this process is intrinsically inferior to the deep learning model; otherwise we might as well use it for prediction. Not only can this separate process produce inaccurate reasoning, it may also be sensitive to small changes in the input, yielding quite different explanations for essentially the same prediction. So, the very process designed to engender trust from physicians using the system fails in its fundamental mission.
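The instability point can be shown with a toy example. Here the “black box” is just max(x1, x2), and the explanation method is a crude finite-difference saliency that attributes the prediction to whichever input the output locally depends on. Both are illustrative stand-ins, not any method from the paper, but they show how a tiny input shift can flip the explanation entirely.

```python
def black_box(x1, x2):
    # Stand-in for an opaque model; its output is max of the two inputs.
    return max(x1, x2)

def explain(x1, x2, eps=1e-4):
    """Attribute the prediction to the input with the larger local effect."""
    d1 = black_box(x1 + eps, x2) - black_box(x1, x2)
    d2 = black_box(x1, x2 + eps) - black_box(x1, x2)
    return "x1" if d1 > d2 else "x2"

# Two nearly identical inputs, predictions of 1.0 in both cases...
print(explain(1.0, 0.9))  # -> "x1"
print(explain(0.9, 1.0))  # -> "x2": a small input change flips the explanation
```

Real post hoc explainers such as LIME and SHAP are far more sophisticated, but the underlying concern the authors raise is the same: the explanation is a locally fitted approximation, and nearby inputs can receive very different rationales.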

So, what is the solution? The authors argue that we should focus on building the best model possible and evaluating it thoroughly. They draw a comparison to drug trials. A drug proposed to treat a particular condition goes through randomized controlled trials. At the end of a rigorous evaluation process, the clinical community reviews the results and, if they are favorable, prescribes the drug for that condition. A physician does not ask for a specific explanation of how a particular drug works; the physician simply trusts the evaluation process. That same level of trust should apply to any method used to predict or diagnose conditions. The authors suggest that this should work for the vast majority of use cases involving black box AI algorithms. However, they also suggest that in cases where decisions are tied to fairness and equity (e.g., who should be eligible for dialysis machines), a less accurate but transparent interpretable model can be used.

The viewpoints expressed are certainly valid and raise important considerations on how we should develop, evaluate and maintain deep learning models.

Acknowledgement

The podcast by my colleague and friend Dr. Gordon Moore inspired the first story. The article on explainable AI was sent to me by my colleague, Clark Cameron.

I am always looking for feedback and if you would like me to cover a story, please let me know! Leave me a comment below or ask a question on my blogger profile page.

V. “Juggy” Jagannathan, PhD, is Director of Research for 3M M*Modal and is an AI Evangelist with four decades of experience in AI and Computer Science research.