AI is in the news: Are patients and providers ready?

April 28, 2023 / By Tom Oniki

Artificial intelligence (AI) in health care is making headlines. Last year, the World Economic Forum published “AI can deliver better healthcare for all. Here’s how.” This year, Forbes published “AI And The Disruption Of Healthcare” and “19 Ways AI May Soon Revolutionize The Healthcare Industry.” And last month, The Wall Street Journal reported “Generative AI Makes Headway in Healthcare.” These and many other reports detail the promise and progress of AI in health care. Prestigious universities are offering courses on AI in health care. And companies are investing heavily in the technology.

While clearly there are challenges ahead with regard to accuracy, ethics and equity (see the National Academy of Medicine’s special report, “Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril,” among other publications), AI in health care has the potential to significantly benefit — even revolutionize — diverse areas such as risk stratification, documentation improvement, coding, analytics and prediction, surgery, medication dispensing, revenue cycle management, diagnosis, and management pathway development and monitoring. Its impact can already be felt. 

Against this backdrop, I read with interest a Pew Research Center report released last month, entitled “60% of Americans Would Be Uncomfortable With Providers Relying on AI in Their Own Health Care.” The report details the results of polling 11,004 U.S. adults in December 2022. Beyond the titular finding, the poll surfaced several other opinions about AI in health care.

The results also show an interesting difference between those who had some familiarity with AI and those who didn’t: 72 percent of respondents who had heard “nothing at all” about AI would not want AI to help decide the amount of pain medication they would get after a surgery, while only 51 percent of those who had heard “a lot” or “a little” about AI said the same.

That pattern was repeated when respondents were asked whether they would want a surgical robot with AI to be used in their surgery: 74 percent of those who had heard “nothing at all” about AI said they would not want robots used in their surgery, compared with only 49 percent of those who had heard “a lot” or “a little.”

These findings took me back many years to my medical informatics dissertation project. I had designed a decision support system intended to help ICU nurses comply with a hospital-developed care pathway for managing respiratory failure patients. It would detect nursing documentation that indicated non-compliance with the pathway and generate reminders which the charge nurse would review with the staff each shift. I tested the system in two adjacent ICUs in the hospital. In each ICU, to provide a control arm for my experiment, the system generated reminders for only half of the patients in the unit. 
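For readers curious about the mechanics, here is a minimal sketch of the kind of rule-based compliance check such a system performs: scan the nursing documentation for what the pathway expects, and generate reminders for anything missing or deviating. The rule, the charting schema and all field names below are hypothetical stand-ins for illustration, not the actual dissertation system.

```python
# A minimal sketch of rule-based pathway-compliance checking, the general
# technique described above. All rules, schemas and field names here are
# hypothetical illustrations, not the actual dissertation system.

from dataclasses import dataclass


@dataclass
class ChartEntry:
    """One documented nursing action for one patient (hypothetical schema)."""
    patient_id: str
    action: str   # e.g., "head_of_bed"
    value: str    # e.g., "elevated_30"


@dataclass
class PathwayRule:
    """One expectation drawn from the care pathway (illustrative only)."""
    action: str
    expected_value: str
    reminder_text: str


def generate_reminders(entries: list[ChartEntry],
                       rules: list[PathwayRule]) -> dict[str, list[str]]:
    """Return per-patient reminders for the charge nurse to review each shift."""
    reminders: dict[str, list[str]] = {}
    for pid in {e.patient_id for e in entries}:
        charted = {e.action: e.value for e in entries if e.patient_id == pid}
        for rule in rules:
            # Remind if the expected action was never documented, or was
            # documented with a value that deviates from the pathway.
            if charted.get(rule.action) != rule.expected_value:
                reminders.setdefault(pid, []).append(rule.reminder_text)
    return reminders


if __name__ == "__main__":
    rules = [PathwayRule("head_of_bed", "elevated_30",
                         "Pathway: keep head of bed elevated to 30 degrees.")]
    entries = [ChartEntry("pt-001", "head_of_bed", "flat")]
    print(generate_reminders(entries, rules))
```

A control arm like the one in my study amounts to running checks like these for every patient in the unit but surfacing the resulting reminders for only half of them.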

In one of the ICUs, nurses responded very favorably to the reminders. They would say, “This is really helping me remember what I need to do for the pathway.” And, “Why are you only doing it in half the unit? When are you going to do it on the whole unit?”  

But in the other ICU — where the patients were sicker and the nurses were more experienced — the nurses’ responses were very different. “This is annoying and isn’t helping at all,” they would say. And, “I don’t agree with the pathway anyway.” And even, “I may just chart that I did it, so it stops reminding me!” 

That was fascinating to me. It taught me valuable lessons: We can’t force technology onto an unwilling audience. The same technical solution might be loved by one audience and hated by another. To facilitate successful adoption, we have to understand those audiences and the differences between them.

There were clinical, technical and sociological forces at play here. We could have gotten more buy-in on the pathway in the first place. We could have provided more education — both on the pathway and on the decision support system. And we could have included a broad representation of cross-disciplinary roles in the design of the system and its implementation. 

Returning to AI in health care: we, the proponents of the technology, must always keep the audience in mind. We must ensure that AI isn’t acting alone but in tandem with physicians, lessening their administrative burden and providing the information they need to make better-informed medical decisions.

The late medical informatics pioneer Homer Warner used to say that informatics is 90 percent people. What can we do to make people’s jobs and lives better? What can we do to improve education, understanding, buy-in and acceptance? How can we involve our audience in the design of our products? Without those considerations, much of AI’s great potential could go unfulfilled.

Tom Oniki, PhD, is a director of medical informatics for 3M Health Information Systems.