Artificial intelligence and clinical data: Finding patterns now to shape our future

March 19, 2018 / By Steve Cantwell

My oldest son is on the autism spectrum, which brings him both challenges and unique skills, such as perfect pitch and an encyclopedic memory. One of his gifts is an ability to see patterns that most of us miss. At least I do.

When my son is not doing homework, I often find him with a calculator and his red notebook. He experiments with graphs that create visual representations of words or names. This began with a simple formula he learned at school, but then he started to experiment with colors, rhythms, musical notes, random numbers and equations tapped out on the calculator. Here are a few sample pages from his notebook.

I often wonder what is going on in his mind to help him see patterns I can’t. My son’s gifts remind me of artificial intelligence (AI) because the mental process behind these skills is a black box to me. There’s more than a little of the data scientist in him, which should serve him well in these data-saturated times.

Artificial intelligence, machine learning, and AI algorithms are all over the media now—AI applied to everything from autonomous vehicles to deciphering surveillance data to tracking terrorist threats. Make no mistake: AI is not science fiction. It’s already here.

Working with intelligent, self-learning technology is an inevitable part of our future. The number of data-generating digital devices, and the processing power behind them, will only continue to grow exponentially. This flood of digital data, and the use of AI algorithms to decipher and manipulate it, creates both opportunity and threat. According to figures reported by Facebook and Twitter, more than 125 million Americans on Facebook and over 675,000 people on Twitter unknowingly engaged with Russian trolls. In covering this story, The Atlantic's reporters point to “the vast amounts of data that the digital-advertising industry collects about Americans” as a key factor in the success of the Russian campaign.

We hear stern warnings—from luminaries like the late Stephen Hawking, Elon Musk and Bill Gates—who say AI will certainly transform us, but could even destroy us, if we are not vigilant now in shaping its future. In an interview for Newsweek, Stephen Hawking said: “Computers can, in theory, emulate human intelligence, and exceed it. Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know.” Hawking also told Wired magazine, “The genie is out of the bottle. We need to move forward on artificial intelligence development, but we also need to be mindful of its very real dangers.”

Others are asking hard questions about exactly what emerging technologies like AI do. Can these technologies create and reproduce inequalities and bias? Do we understand the potential blind spots? In a recent interview, tech author and economist Adam Greenfield notes that “the sophistication of these systems is rapidly approaching a point at which we cannot force them to offer up their secrets.”  Thus, the problem of the “black-box” algorithm. 

So where is health care in all this? Center stage. Health care is one area where we’re likely to see AI applied on a broad scale sooner rather than later. The volume of clinical data far exceeds what human providers and payers could act on without technology. And we’re not talking about Instagram pictures of favorite meals or family vacations but clinical data with life-and-death importance.

AHA Hospital Statistics show an estimated five billion healthcare claims are adjudicated in the U.S. every year, accounting for $3.0 trillion in annual healthcare payments. That isn’t just big dollars; it’s also a massive amount of clinical and financial data embedded in coded claims. Even a single ICD-10 code, such as P0716, which specifies a newborn weighing between 1,500 and 1,749 grams, tells us a lot about the patient’s risks and expected costs of care. Such low birth weight is likely accompanied by respiratory distress and the need for supplemental oxygen, hemorrhage that can cause brain damage, and higher risk of heart failure and digestive disorders—not to mention increased long-term risks for diabetes and high blood pressure.
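To illustrate how much structure a single claim code carries, here is a minimal sketch of decoding the low-birth-weight family of ICD-10-CM codes that P0716 belongs to. The weight ranges follow the published P07.1x definitions; the lookup table itself is illustrative, not an official or complete code set.

```python
# Illustrative sketch only: a tiny subset of ICD-10-CM P07.1x codes,
# not a real or complete code table.
LOW_BIRTH_WEIGHT_CODES = {
    "P0714": "Newborn, low birth weight, 1000-1249 grams",
    "P0715": "Newborn, low birth weight, 1250-1499 grams",
    "P0716": "Newborn, low birth weight, 1500-1749 grams",
    "P0717": "Newborn, low birth weight, 1750-1999 grams",
    "P0718": "Newborn, low birth weight, 2000-2499 grams",
}

def describe_code(icd10_code: str) -> str:
    """Return a human-readable description for a known P07.1x code.

    Claims typically carry codes without the decimal point ("P0716"),
    so the dotted form is normalized before lookup.
    """
    normalized = icd10_code.replace(".", "").upper()
    return LOW_BIRTH_WEIGHT_CODES.get(normalized, "Unknown code")

print(describe_code("P0716"))  # Newborn, low birth weight, 1500-1749 grams
```

Multiply that kind of embedded meaning across five billion claims a year and the scale of the pattern-finding opportunity becomes clear.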

The possibilities for AI and computer automation in health care appear limitless. Take the case of sepsis. Every year more than 250,000 Americans die of this common, extreme reaction of the body to infection, even though it can be treated effectively with antibiotics. The trick is to catch the warning signs in time. Sepsis can’t be detected by a single blood test. You can’t see it under a microscope. To respond before it’s too late, you need to watch for patterns of symptoms: high white-blood-cell counts, fast breathing, high or low temperature, low blood pressure. Innovative hospitals like Harborview Medical Center in Seattle use computer systems to monitor early signs and automatically alert nurses to increased risk of sepsis.
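That “watch for patterns of symptoms” logic can be sketched as a simple rules-based screen. The thresholds below are the classic SIRS (systemic inflammatory response syndrome) textbook values, not Harborview’s actual system, and a real alerting tool would use richer models and trend data rather than a single snapshot of vitals.

```python
from dataclasses import dataclass

@dataclass
class Vitals:
    temp_c: float        # body temperature, Celsius
    resp_rate: int       # breaths per minute
    heart_rate: int      # beats per minute
    wbc_k_per_ul: float  # white blood cells, thousands per microliter

def sepsis_warning(v: Vitals) -> bool:
    """Flag a patient for nurse review when two or more
    SIRS-style criteria are met. Illustrative only."""
    criteria = [
        v.temp_c > 38.0 or v.temp_c < 36.0,              # high or low temperature
        v.resp_rate > 20,                                # fast breathing
        v.heart_rate > 90,                               # elevated heart rate
        v.wbc_k_per_ul > 12.0 or v.wbc_k_per_ul < 4.0,   # abnormal WBC count
    ]
    return sum(criteria) >= 2

# A patient with fever and rapid breathing trips the screen:
print(sepsis_warning(Vitals(temp_c=38.6, resp_rate=24,
                            heart_rate=88, wbc_k_per_ul=9.0)))  # True
```

Even a crude screen like this shows why computers are suited to the job: they can re-check every patient’s vitals continuously, something no nursing staff could do by hand.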

This is a compelling example of a partnership between technology and human beings. It’s far from perfect; there are many false alerts. But it is helping hospitals like Harborview detect sepsis early and save lives.

AI has great potential to speed up health screening and to identify risk factors for heart attack, stroke, cancer and diabetes. For example, AI can analyze millions of retinal images from diabetic patients to detect early risk of blindness. Individual providers could never analyze that volume of data.

When AI is involved in such sensitive areas as cancer diagnosis, the concerns about black-box AI become even more urgent. Without transparency, gaps or biases in the underlying data sets can mislead. For the output to be useful to clinicians, they need transparency to understand the context: the “why.” Then they can take preventive action to help patients.

A truly self-learning AI would need reliable data on both inputs and outcomes. But the truth is, in health care we don’t have the clinical outcomes data we need. We’re still “practicing” medicine. We have sketchy data, at best, on good outcomes. People come to the clinic or hospital ER sick. They get treated and most often don’t come back. We don’t hear about how fast they recovered or how they reacted to medications—unless something goes seriously wrong, such as a drug allergy or infection. People don’t call back the doctor and say, “I’m feeling better. Let me tell you how it went.”

Futurist Anab Jain talks about the importance of imagining our future and warns us to ask hard questions now about how new technologies will impact us. She cites the example of medical genomics. What are the implications of algorithms that predict future health risks based on genetics? Who might claim ownership of our genetic information and what would they do with it? Could insurance premiums increase decades in advance based on a person’s genetic risk for a chronic health condition? See Anab Jain’s 2017 TED Talk: “Why we need to imagine different futures.”

My son imagines the future in his red notebook. He creates and unravels patterns of colors as musical notation, nuanced rating systems for new movies and pop music albums, alternate alphabet symbols to make secret codes and even complete phonetic systems that account for all the vowel and consonant combinations. At the root of his play is an intense curiosity and ability to see patterns in what appears, and sometimes is, random data. Many of his experiments don’t work, but he stays at it.

Like the real data scientists at work today, my son is convinced there’s a way to engage the boundless data the world throws at us to find coherent patterns that reveal secrets.

Steve Cantwell is a senior marketing communications specialist at 3M Health Information Systems.