AI Talk: SCOTUS ruling and algorithms in clinical medicine

July 19, 2021 / By V. “Juggy” Jagannathan, PhD

SCOTUS ruling

The Supreme Court of the United States (SCOTUS) recently dealt a blow to the American Hospital Association (AHA). What was the bone of contention? The AHA contended that a procedure performed in the hospital must be reimbursed at a higher rate than if the same procedure was performed in, say, an ambulatory, outpatient or primary care clinic.

The AHA’s argument centered on patient safety. The Centers for Medicare & Medicaid Services (CMS) site-neutrality rule essentially states that reimbursement is based on what is being done, not on where it is done. SCOTUS upheld that position by declining to hear the AHA’s challenge.

Hopefully this ruling accelerates the march toward the adoption of value-based care. Value-based care underpins population health management. The idea is that if providers focus on the long view of patient wellbeing rather than the short-term bottom line, care will be dictated by what is best for the patient’s long-term outcome. The equation changes from incentivizing “sick care” (where providers get paid when you get sick) to “health care” (where providers profit when you stay healthy). That kind of transformation will simultaneously improve outcomes and reduce costs.

Though the SCOTUS decision currently applies only to the Medicare population, private insurers are likely to follow suit. This decision may have additional ramifications: if a procedure can be done in an ambulatory clinic or primary care office, why not in the patient’s home, provided safety can be assured? Already, more and more care is shifting from high-cost acute care settings to lower-cost outpatient settings.

For this change to happen and accelerate, AI tech adoption must also accelerate, and there is plenty of new and innovative technology being invented. Just take a look at this blog by Peter Diamandis. The responsibility for staying healthy, of course, belongs to all of us; he urges each of us to be the “CEO of our own health.” With a doctor’s review and recommendation just a telehealth visit away, and a whole host of technology that can monitor practically anything in your body, the scene is set for your physician to care for you remotely and ensure you never have to visit that expensive hospital room again.

Hospitals will have to reinvent themselves, just as every organization is reinventing itself after the pandemic. Hybrid work (from home or the office) has become the norm. Perhaps hybrid care (treatment provided at the hospital and/or at home) will become the norm as well?

It will be a brave new world powered by technology. The question now is whether this change will happen in the next decade. I am betting on it.

Algorithms in clinical medicine

If there is anything the pandemic has taught us, it is that everything can change: the virus mutates, treatments evolve, and so on. A static, machine-learned model is simply not going to keep up with these changes. Models need to be part of an ecosystem of learning, continually evolving. This lesson and many others can be gleaned from the extensive evaluation of one particular machine learning model conducted by researchers at the University of Michigan Medical School.

A dozen researchers did a retrospective study of the efficacy of a sepsis prediction model released in 2017 by a large EHR vendor to all its customers. Sepsis is an important condition to catch early, as the prognosis can be grave if it is not recognized and treated in a timely fashion.

University of Michigan researchers turned on the sepsis alerts for a period of about one year, from December 2018 to October 2019. The model was run every 15 minutes during this time frame, and the researchers analyzed both the alerts that were raised and those that were not. The experiment was run in silent mode, with no actual alarms surfaced to physicians; this was purely a comprehensive, external model evaluation exercise. EHR data from 38,455 hospitalizations involving 27,697 patients were analyzed.

The results were underwhelming. The standard metric for prediction models is the area under the curve (AUC). In this experiment, the score for a 24-hour prediction horizon was 0.63. To provide some context, if that number were 0.5, the prediction would only be as good as a coin flip. There were many false positives, which in a real-world deployment would have created alert fatigue and loss of faith in the system.
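To make that AUC comparison concrete, here is a minimal sketch using scikit-learn on made-up labels and scores (not the study’s data or the vendor’s model). It shows how a coin-flip scorer lands near 0.5, while a scorer that is only weakly related to the outcome lands in the low 0.6s:

```python
# Minimal sketch: what AUC ~0.5 vs. AUC ~0.63 looks like.
# Synthetic labels and scores only -- NOT the study's data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical labels: 1 = developed sepsis, 0 = did not
# (prevalence exaggerated for illustration)
y_true = rng.integers(0, 2, size=1000)

# A coin-flip "model": random scores, AUC hovers around 0.5
random_scores = rng.random(1000)

# A weakly informative "model": scores nudged slightly upward for
# true cases, which puts the AUC roughly in the low 0.6s
weak_scores = 0.14 * y_true + rng.random(1000)

print("coin-flip AUC: ", round(roc_auc_score(y_true, random_scores), 2))
print("weak-model AUC:", round(roc_auc_score(y_true, weak_scores), 2))
```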

To be fair to the EHR vendor, they maintained that the model needs to be fine-tuned to the needs of a specific institution. By adjusting a threshold value, a trade-off can be made between false positives (which lead to alert fatigue) and false negatives (missed sepsis cases); casting a wide net to avoid missing real cases inevitably produces more false alarms. That argument misses the point made by the researchers: this model, at its current level of performance, is simply not useful. Normal practice patterns were better at catching sepsis.
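To illustrate the trade-off the vendor is pointing to, here is a minimal sketch, again with synthetic scores rather than the vendor’s model or tuning procedure: raising the alert threshold cuts down false alarms but misses more real cases, and lowering it does the reverse.

```python
# Minimal sketch of the alert-threshold trade-off.
# Synthetic scores and labels -- NOT the vendor's model.
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=1000)        # hypothetical sepsis labels
scores = 0.14 * y_true + rng.random(1000)     # hypothetical risk scores

for threshold in (0.3, 0.5, 0.7):
    y_alert = (scores >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_alert).ravel()
    sensitivity = tp / (tp + fn)              # fraction of real cases caught
    print(f"threshold={threshold:.1f}  caught {sensitivity:.0%} of cases, "
          f"{fp} false alarms, {fn} missed cases")
```

The point of the sketch: no threshold makes a weak score both sensitive and quiet. With a 0.63 AUC, catching most real cases forces a flood of false alarms, which is exactly the alert-fatigue concern the researchers raised.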

The researchers also raised a larger question about the deployment of such models. The model is proprietary, and no peer-reviewed external evaluation was ever done. The FDA regulates algorithms that analyze radiology images under a special category: Software as a Medical Device (SaMD). Why not use this mechanism to regulate such technology deployments? There is a strict evaluation process, as well as a process for ensuring the model remains current as disease and treatment paradigms evolve. This is the primary lesson the authors want us to take away from this evaluation. You can listen to their views in this podcast.

On the heels of reading about the University of Michigan work on evaluating the sepsis model, I saw this article in Clinical Research News about an AI algorithm recommending treatment plans for cancer patients. I hope that team is keeping an eye on the potential pitfalls of using AI, in both the short term and the long term.

Acknowledgement

My colleague, Clark Cameron, sends out a daily dose of health care-related news, which was the inspiration for my first story. My second story was motivated by our ML Guild discussion led by Dr. Jimmy.

I am always looking for feedback and if you would like me to cover a story, please let me know! Leave me a comment below or ask a question on my blogger profile page.

V. “Juggy” Jagannathan, PhD, is Director of Research for 3M M*Modal and is an AI Evangelist with four decades of experience in AI and Computer Science research.