AI Talk: Predicting opioid abuse, rain and brain computer interface

Oct. 15, 2021 / By V. “Juggy” Jagannathan, PhD

In this week’s blog, I examine what happens when an algorithm’s prediction is wrong. I also look into how deep learning is shaping weather forecasting. The last story is on a futuristic technology, brain computer interface (BCI), which is no longer in the future.

Predicting opioid abuse

This interesting story in Wired relates to the use of AI algorithms in predicting opioid abuse. One such algorithm was built by a company called Appriss, which got its start notifying crime victims’ families when an incarcerated convict was about to be paroled or released. Starting in 2014, the company branched out to predicting abuse of controlled substances by monitoring and tracking purchases of prescription opioids. Fast forward half a dozen years, and the company’s product, dubbed NarxCare, is used both by law enforcement and by a large number of physicians to determine whether a patient is at risk of abusing opioids.

Now, it is a well-known fact that any algorithm produces both false positives (patients flagged as opioid risks when they are not) and false negatives (patients at risk who are missed). But in this case, the NarxCare predictions were all treated as if they were true. The Wired article walks through a specific false positive: the case of a 32-year-old woman suffering acute pain whose doctor refused to prescribe her an opioid. The reason for the false positive? The woman had a very sick dog that was treated with opiates. Those opiate prescriptions showed up in her name, and she was refused care she genuinely needed. In a clinical domain, as we have discussed in earlier blogs, it is imperative to evaluate any automated prediction that impacts care, and the rationale for its outcome should be explained in detail. In this particular case, if the doctor had realized the prediction was based on the opioids the pet had taken, they could have immediately corrected the problem. A cautionary tale.
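The false positive/false negative tradeoff the article turns on is easy to make concrete. Below is a minimal sketch with entirely made-up risk scores and labels (nothing here reflects how NarxCare actually scores patients); it just shows how moving a decision threshold trades one kind of error for the other:

```python
# Illustrative only: hypothetical risk scores, not real NarxCare data.
def confusion(scores, labels, threshold):
    """Count TP/FP/FN/TN for a given risk-score threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    return tp, fp, fn, tn

# Hypothetical patients: score = model's risk estimate, label = true abuse risk.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   0,   1,   0,   0]

for t in (0.25, 0.5, 0.75):
    tp, fp, fn, tn = confusion(scores, labels, t)
    print(f"threshold={t}: false positives={fp}, false negatives={fn}")
```

Raising the threshold removes the false positives but misses more genuinely at-risk patients; there is no setting with zero errors of both kinds, which is exactly why a flagged score needs human review rather than blind trust.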

Predicting rain

The DeepMind team is at it again, this time using their deep learning prowess to address a familiar problem: predicting rain. They used a Generative Adversarial Network (GAN) to generate future radar images that can accurately forecast rain over the next 90 minutes. The GAN approach has also been used, notoriously, to generate realistic fake video. The model, dubbed DGMR (Deep Generative Model for predicting Rain), actually does better than conventional approaches built on years of study and hand-crafted simulations. Those simulations attempt to model how weather works and how various factors influence it. The DeepMind solution short-circuits all that analysis: simply feed it lots of radar data and let it predict. There is no question this is a practical application. The open question is: once you have such a model, does it actually help us understand weather? What factors influence the outcome?
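To make the adversarial idea concrete, here is a toy GAN in plain numpy. It has nothing like DGMR's scale (DGMR is a large conditional model over radar video); this sketch only shows the generator-versus-discriminator training loop, with a 1-D Gaussian standing in for "real" observations:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: g(z) = a*z + b.  Discriminator: D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr, batch = 0.05, 64

for step in range(3000):
    real = rng.normal(3.0, 1.0, batch)   # "observations" from N(3, 1)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b                     # generator samples

    # Discriminator step: minimize -[log D(real) + log(1 - D(fake))].
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean((d_real - 1) * real + d_fake * fake)
    grad_c = np.mean((d_real - 1) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step (non-saturating): minimize -log D(fake).
    d_fake = sigmoid(w * fake + c)
    grad_out = -(1 - d_fake) * w         # dL/d(fake)
    a -= lr * np.mean(grad_out * z)
    b -= lr * np.mean(grad_out)

print(f"learned generator mean ~ {np.mean(a * rng.normal(0, 1, 10000) + b):.2f}")
```

Note the generator never sees the real data directly; it only learns from the discriminator's feedback until its samples become statistically indistinguishable from observations. That is the same mechanism, scaled up enormously, that lets DGMR produce radar frames that look like real ones.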

Brain computer interface (BCI)

There is a lot happening on the technology front that connects our brains to computers. This excellent article surveys the flurry of activity happening in this space. Much of the research started out as Defense Advanced Research Projects Agency (DARPA) efforts to provide relief to wounded veterans. The researchers wanted to know how veterans who lost limbs in war could control their prosthetics with their minds. Numerous labs, including the Intelligent Systems Center at Johns Hopkins, have been working on various approaches.

Typically, electroencephalography (EEG) signals recorded from electrodes placed on the scalp are used for this purpose. More recently, electrocorticography (ECoG), which uses small electrode arrays implanted on the surface of the brain (an invasive procedure), is being used to collect signals instead. These signals are then analyzed using machine learning approaches to interpret what they actually mean. What started out as a way to help wounded veterans now has broader applications. For example, the military is exploring ideas such as flying drones just by thinking about it! There is significant commercial interest in this technology as well. This VentureBeat article says this area has seen close to $300 million in investment in just the first eight months of 2021. Neuralink (Elon Musk’s BCI company), Facebook and others are investing heavily in this area. Core to the technology working is machine learning: recognizing that a pattern of brain wave activity corresponds to a specific action. And a growing corpus of publicly available datasets is helping advance research activity. Take a look at this GitHub repository with links to EEG datasets that can be used for BCI research.
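The core loop of recognizing a brain-wave pattern and mapping it to an action can be sketched in a few lines. The signals, the zero-crossing feature and the action names below are all toy stand-ins (real BCI pipelines work on noisy multichannel EEG/ECoG with far richer features and models), but the shape of the pipeline is the same: featurize windows of signal, fit a per-action template, classify new windows.

```python
import math, random

random.seed(42)
FS = 128  # hypothetical sampling rate, samples per second

def window(freq_hz, n=FS):
    """One second of a noisy sine wave at freq_hz, standing in for a brain signal."""
    return [math.sin(2 * math.pi * freq_hz * t / FS) + random.gauss(0, 0.2)
            for t in range(n)]

def zero_crossings(x):
    """Crude frequency proxy: how often the signal changes sign."""
    return sum(1 for i in range(1, len(x)) if x[i - 1] * x[i] < 0)

# "Training": average the feature over example windows of each imagined action.
centroids = {
    "move_left": sum(zero_crossings(window(10)) for _ in range(20)) / 20,
    "move_right": sum(zero_crossings(window(22)) for _ in range(20)) / 20,
}

def classify(x):
    """Nearest-centroid classification of a new window."""
    f = zero_crossings(x)
    return min(centroids, key=lambda k: abs(centroids[k] - f))

print(classify(window(10)))  # expected: "move_left" (the 10 Hz pattern)
```

Swap the toy feature for frequency band powers over real electrode channels and the nearest-centroid rule for a deep network, and this is recognizably the "brain wave pattern to action" mapping the article describes.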

All of this attention also brings with it some plausible negative consequences. Because BCI is, by definition, a computer interface, it is also hackable. Does that mean a hacker could scan our thoughts and divine what we are thinking? That would be the ultimate privacy breach, and it sounds scary. But the same technology could allow us to act using only our minds. Does this tech portend telepathy? I guess we will know, probably in less than a decade.

Acknowledgement

My colleague Dan Walker forwarded the Wired story about faulty algorithms. My long-time friend Chandy pointed me to the BCI article.

I am always looking for feedback and if you would like me to cover a story, please let me know! Leave me a comment below or ask a question on my blogger profile page.

V. “Juggy” Jagannathan, PhD, is Director of Research for 3M M*Modal and is an AI Evangelist with four decades of experience in AI and Computer Science research.