From 3M Health Information Systems
AI talk: Balancing knowledge and neural models
In this week’s blog, I explore the rationale behind using artificial intelligence (AI) that combines clinical knowledge and deep neural models in deploying health care solutions.
Balancing knowledge and neural models
Neural networks have taken the world by storm in the past decade. Mega transformer models are all the rage now and it seems like there is a new advancement being announced every day. But neural nets are not some magic pixie dust that you can sprinkle on all applications to make them better overnight, particularly in health care. They need to be carefully planned and deployed using a mix of knowledge-based approaches and deep learning neural models – all deployed in a way that focuses on the user’s experience to ensure trust and deliver results congruent to current thought processes and workflows. That is precisely what our team is committed to with 3M HIS solutions.
Let’s start with the use of neural nets for, say, coding a hospital encounter. An encounter typically has many associated documents: admit note, progress notes, consult notes, radiology reports, lab results, etc. A deep neural net trained to evaluate encounters can spit out accurate ICD-10 codes. We present such an approach in our published research, utilizing a deep convolutional neural net. These codes can be given to a coder for review, but a coder expects a lot more than simply the results. They want to know why a particular code was selected.
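To make the idea concrete, here is a minimal sketch of the kind of multi-label classifier such a system rests on: a toy convolutional text model in NumPy, with randomly initialized weights standing in for a trained network. The vocabulary, the two ICD-10 codes and all dimensions are hypothetical, chosen only for illustration; a production model would be trained on large volumes of coded encounters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy vocabulary and label set (illustrative only).
VOCAB = {"acute": 0, "chronic": 1, "heart": 2, "failure": 3, "pneumonia": 4}
LABELS = ["I50.9", "J18.9"]  # two illustrative ICD-10 codes

EMB_DIM, KERNEL, N_FILTERS = 8, 2, 4

# Random parameters stand in for weights learned from coded encounters.
embeddings = rng.normal(size=(len(VOCAB), EMB_DIM))
conv_w = rng.normal(size=(N_FILTERS, KERNEL * EMB_DIM))
out_w = rng.normal(size=(len(LABELS), N_FILTERS))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict(tokens):
    """Forward pass: embed -> 1-D convolution -> max-pool -> per-label sigmoid.

    Multi-label output: each code gets its own independent probability,
    since one encounter can legitimately carry many codes.
    """
    ids = [VOCAB[t] for t in tokens if t in VOCAB]
    x = embeddings[ids]                              # (seq_len, EMB_DIM)
    windows = np.stack([x[i:i + KERNEL].ravel()
                        for i in range(len(ids) - KERNEL + 1)])
    feats = np.maximum(windows @ conv_w.T, 0).max(axis=0)  # ReLU + max-pool
    return dict(zip(LABELS, sigmoid(out_w @ feats)))

probs = predict("acute heart failure".split())
```

The per-label sigmoid (rather than a softmax over all codes) is what makes this multi-label: codes are not mutually exclusive within an encounter.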
In short, to garner the trust of the coder, the codes need to be explainable. Deep neural nets typically behave as a black box, learning to recognize the context for the selection by training on lots and lots of data. There are a variety of ways one can determine how a deep learning model comes up with a solution – but this requires meticulous assemblage of the evidence. One can combine the results of deep learning with knowledge-based approaches to identify appropriate evidence in the encounter documentation.
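One simple way to picture that pairing is a knowledge-driven evidence lookup running alongside the model: for each predicted code, scan the encounter documents for the phrases a knowledge base associates with that code. The evidence map below is a hypothetical toy; real systems draw on curated clinical knowledge, not a hand-typed dictionary.

```python
# Hypothetical map from codes to supporting trigger phrases (illustrative
# only; a real knowledge base is far richer and clinically curated).
EVIDENCE_MAP = {
    "I50.9": ["heart failure", "reduced ejection fraction"],
    "J18.9": ["pneumonia", "infiltrate"],
}

def find_evidence(code, documents):
    """Return (document index, phrase) hits that support a predicted code."""
    hits = []
    for i, text in enumerate(documents):
        lowered = text.lower()
        for phrase in EVIDENCE_MAP.get(code, []):
            if phrase in lowered:
                hits.append((i, phrase))
    return hits

docs = ["Progress note: patient with acute heart failure.",
        "Chest X-ray: no infiltrate seen."]
find_evidence("I50.9", docs)  # -> [(0, "heart failure")]
```

The point is the division of labor: the neural net proposes the code, and the knowledge layer assembles the evidence a coder can actually inspect.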
Now, instead of coding a full inpatient encounter, assigning ICD-10 and CPT® codes to ambulatory documentation is a bit different. There are a lot of very similar ambulatory encounters and radiology reports. A deep learning model that produces codes along with a confidence metric allows one to send a certain portion of documents direct-to-bill; the rest are sent to coders for review. Again, trust in the direct-to-bill system must be earned through exhaustive evaluation of the codes the model assigns with a very high degree of confidence. One cannot rest on one’s laurels once such a model is deployed. Models must be continually evaluated and recalibrated. This is a key part of being a trusted technology partner to a customer.
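The routing decision itself can be pictured as a simple threshold rule. The cutoff and the codes below are assumptions for illustration; in practice the threshold is set from, and revisited against, evaluation data.

```python
# Hypothetical routing rule: an encounter goes direct-to-bill only when
# every predicted code clears a confidence cutoff. The 0.95 value is an
# assumed placeholder; real cutoffs come from measured accuracy data.
DIRECT_TO_BILL_THRESHOLD = 0.95

def route(predictions):
    """predictions: list of (code, confidence) pairs for one encounter."""
    if predictions and all(conf >= DIRECT_TO_BILL_THRESHOLD
                           for _, conf in predictions):
        return "direct-to-bill"
    return "coder-review"

route([("71045", 0.99), ("R91.8", 0.97)])  # all confident -> direct-to-bill
route([("71046", 0.99), ("J18.9", 0.62)])  # one weak code -> coder-review
```

Recalibration then amounts to periodically re-measuring accuracy on reviewed encounters and moving the threshold if the model's confidence has drifted.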
There is yet another reason to employ knowledge-based approaches for deploying coding solutions: The Centers for Medicare & Medicaid Services (CMS) releases code updates every quarter. During the pandemic, all sorts of new codes were rapidly put into play. There was little to no data, or time, to train a new model to recognize these new codes. When one looks at deep learning solutions and practical deployments, the keyword is pragmatic.
Clinical documentation integrity (CDI)
CDI solutions are focused on ensuring that clinical documentation is accurate and detailed. Health systems rely on these solutions to ensure that clinical co-morbidities are accurately captured in the patient’s documentation. One can design and train a deep learning model that identifies clinical entities and medications in clinical documentation – for example, the approach discussed here. However, a CDI solution relies not only on raw clinical data but also on a conceptual model of a specific domain/problem area. Our army of clinical specialists meticulously develops such models.
These models take a holistic look at the entire set of documentation for the patient, identifying pieces that fit into the overall model and presenting the evidence garnered to the clinical documentation specialist (CDS). How else can one suggest to the physician how to improve the documentation? A physician sees a simple nudge, which might show up as: “Please specify acuity (acute, chronic, acute-on-chronic) and type (systolic, diastolic, combined systolic-diastolic).” Behind these nudges, say for heart failure, our clinical specialists develop a detailed model capturing every nuance of the condition.
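A drastically simplified sketch of the rule behind such a nudge: check whether a heart failure mention is accompanied by an acuity and a type, and ask for whichever is missing. This toy keyword check is my own illustration, not 3M's actual heart failure model, which captures far more nuance.

```python
# Toy documentation-specificity check (assumed rule, for illustration only):
# nudge the physician when a heart failure mention lacks acuity or type.
ACUITY = ("acute", "chronic", "acute-on-chronic")
HF_TYPE = ("systolic", "diastolic", "combined systolic-diastolic")

def heart_failure_nudge(note):
    text = note.lower()
    if "heart failure" not in text:
        return None  # condition not documented; nothing to ask about
    missing = []
    if not any(a in text for a in ACUITY):
        missing.append("acuity (acute, chronic, acute-on-chronic)")
    if not any(t in text for t in HF_TYPE):
        missing.append("type (systolic, diastolic, combined systolic-diastolic)")
    return "Please specify " + " and ".join(missing) + "." if missing else None

heart_failure_nudge("Patient admitted with heart failure.")
heart_failure_nudge("Acute diastolic heart failure noted.")  # -> None, fully specified
```

Even this trivial version shows why the conceptual model matters: the nudge only makes clinical sense because someone encoded what a complete heart failure statement looks like.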
CDI solutions and many other applications in health care require that detailed explanation and analysis be provided to a decision maker. A knowledge-based model is an essential driver that makes computer-assisted coding (CAC) and CDI solutions intrinsically explainable and guides the CDS, coders and physicians toward the right documentation and decisions.
In the excitement surrounding the advances in deep learning technology, one tends to forget that practical solutions in health care require meticulous planning. And we have a litigious system that is quite unforgiving of egregious errors in care – as it should be. There are compliance issues when it comes to both CDI and coding. Improper nudges or upcoding will garner the wrath of regulators. Explainable AI is still in its infancy. Solutions need to be carefully crafted to fit the workflow of the users.
Deep learning technology, indeed, has a bright future. But knowledge-based solutions are needed for the foreseeable future to provide proper guard rails on the use of deep learning solutions and inject common sense reasoning. There is more to AI than meets the eye.
I am always looking for feedback and if you would like me to cover a story, please let me know! Leave me a comment below or ask a question on my blogger profile page.
“Juggy” Jagannathan, PhD, is Director of Research for 3M M*Modal and is an AI Evangelist with four decades of experience in AI and Computer Science research.