AI Talk: Tele-critical care, brain and C2M

June 12, 2020 / By V. “Juggy” Jagannathan, PhD

No specific theme this week on AI Talk. Just a collection of interesting stories!

New Tele-Critical-Care model proposed

I came across a new proposal published in the journal Telemedicine and e-Health. The authors propose a new telehealth model with an acronym that is quite a mouthful: NETCCN, the National Emergency Tele-Critical Care Network. The goals are to support COVID-19 patients, establish a physician-led network of caregivers, and predict and prevent future outbreaks. The authors, hailing from the Society of Critical Care Medicine and the Telemedicine and Advanced Technology Research Center, reviewed the current literature on technology and solutions to arrive at their proposal. Its core is to use telehealth digital technology to support not only telemedicine but also tele-critical care, an approach to managing acute conditions at home with the help of technology. To enable this kind of care, the proposal advocates a tiered telementoring platform. What is that? It is a pyramid structure in which an acute care interventionist guides a non-critical care physician, who in turn guides nurses without critical care training, who in turn assist medical staff and caregivers at home. With this scheme, a single critical care-trained virtual physician can look after 60 patients, with the patients themselves grouped into virtual wards. The proposal is an interesting one in that it reimagines what hospital and acute care can be in a world of digital technology, AI and telehealth!

Brain and deep neural network

Some fascinating new research has been reported by Caltech researchers; I came across a discussion of this work in a science blog. For the past 15 years, the researchers have been studying how object recognition is accomplished in our brains. For this study, they showed thousands of pictures of objects to non-human primates. Using deep brain stimulation and functional magnetic resonance imaging (fMRI), they studied the responses to these stimuli and saw which brain cells each one activated. In the process, the researchers observed that certain features of the objects activated certain cells in the inferotemporal (IT) cortex. For instance, round objects and faces triggered one area, while spiky objects like spiders turned on another. Then the researchers did something quite interesting: they trained a deep neural network to recognize the same images. They noticed that principal component analysis (a mathematical technique for identifying the features that contribute most to an object's makeup) of the network's object representations revealed a similar organization. Their biological work had identified areas that recognize faces and spiky objects. What about a different characteristic, like stubby objects? They found that the deep neural network, in order to recognize such stubby objects, turned on specific areas of the network. Would the brain do the same? Indeed, they found regions in the brain that fired only when stubby objects were shown, and not when faces or spiders were. These researchers have established that deep neural network object recognition works in a way that is similar to the brain, perhaps answering the question of why image recognition using deep neural networks has progressed so well in recent years.
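To make the principal component analysis step a little more concrete, here is a minimal sketch that runs PCA over the penultimate-layer features of an off-the-shelf image network. The specific model (a torchvision ResNet-50) and the random stand-in images are my own assumptions purely for illustration; they are not the researchers' actual network or stimuli.

```python
# Minimal sketch: run PCA on a deep network's object representations to look for
# the dominant "axes" of variation (e.g., spiky vs. stubby). The model choice
# (ResNet-50) and the random images are illustrative assumptions only.
import torch
import torchvision.models as models
from sklearn.decomposition import PCA

# Pretrained image-recognition network; drop the final classification layer
# so we get the penultimate-layer feature vector for each image.
resnet = models.resnet50(pretrained=True)
feature_extractor = torch.nn.Sequential(*list(resnet.children())[:-1])
feature_extractor.eval()

# Stand-in for "thousands of pictures of objects": a batch of random images.
images = torch.rand(200, 3, 224, 224)

with torch.no_grad():
    features = feature_extractor(images).squeeze(-1).squeeze(-1)  # shape (200, 2048)

# PCA finds the feature directions that explain most of the variation across
# objects; the first few components act like coarse "object axes".
pca = PCA(n_components=10)
projected = pca.fit_transform(features.numpy())
print(pca.explained_variance_ratio_)  # how much variation each axis captures
print(projected.shape)                # each object summarized by 10 coordinates
```

The first few principal components found this way are the kind of coarse object axes that the study relates to distinct regions of the IT cortex.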

Consumer to Manufacturer

We have heard about business-to-business (B2B) models, which cover transactions between businesses, and business-to-consumer (B2C), which largely refers to eCommerce applications that cater directly to consumers. But for the first time I came across the term Consumer to Manufacturer (C2M) in a recent article in MIT Technology Review. What does C2M actually mean? The idea is that consumer preferences directly dictate what gets manufactured and sold: manufacturers build products to meet demonstrated consumer demand. How did this come about? Chinese manufacturers, facing the double whammy of a trade war with the U.S. and the outbreak of COVID-19, decided to focus their energy on cultivating their domestic markets. How does it work? AI algorithms predict consumer preferences, and those predictions are used to tailor the manufacturing process to consumer needs. Interesting!
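As a toy illustration of that last step, here is a sketch in which hypothetical consumer signals (search volume, cart additions, positive review rate) are used to predict demand for product variants, and the predictions then set the manufacturing priority. All of the data, features and the simple linear model are invented for illustration; real C2M pipelines are of course far more elaborate.

```python
# Toy illustration of the C2M idea: predict demand for product variants from
# consumer signals, then rank variants for production. Data and features are
# made up for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical consumer signals per product variant:
# [search volume, cart additions, positive review rate]
signals = rng.random((50, 3))
past_demand = signals @ np.array([500.0, 800.0, 300.0]) + rng.normal(0, 20, 50)

# Fit a simple demand model on historical variants.
model = LinearRegression().fit(signals, past_demand)

# Score new candidate variants and prioritize manufacturing for the top ones.
candidates = rng.random((5, 3))
predicted_demand = model.predict(candidates)
build_order = np.argsort(predicted_demand)[::-1]
print("Manufacture in this order:", build_order)
print("Predicted demand:", predicted_demand[build_order])
```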

I am always looking for feedback, and if you would like me to cover a story, please let me know. "See something, say something!" Leave me a comment below or ask a question on my blogger profile page.

V. “Juggy” Jagannathan, PhD, is Director of Research for 3M M*Modal and is an AI Evangelist with four decades of experience in AI and Computer Science research.