AI talk: Blood test, psychedelics and safer AI

Aug. 12, 2022 / By V. “Juggy” Jagannathan, PhD

This week, I cover three stories on artificial intelligence (AI): two relate to health care applications and one relates to AI governance.

A drop of blood on a fingertip

“To see life in a drop of blood” 

A few weeks ago, I saw a blog post by a NIST researcher with the title above. The researcher, Ming Zheng, is focused on detecting diseases using blood tests. Recall how the company Theranos and its founder, Elizabeth Holmes, went up in flames attempting something similar? This made me wonder what Zheng was trying to achieve. Zheng and his colleagues have been researching carbon nanotubes for more than two decades. Carbon nanotubes can bind to DNA molecules and be turned into sensors, and what each sensor responds to is determined by the DNA molecule used to bind the nanotube. AI techniques can then interpret what this combination actually senses when immersed in blood. Zheng calls this a “molecular perceptron” – a play on the perceptron, an early AI model that is a precursor to modern neural networks. He likens this type of sensing to the way wine tasters perceive nuances in wine using their taste buds.
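For readers who have not encountered the perceptron, here is a minimal sketch of the classic algorithm in Python. The sensor channels, sample counts and labels below are entirely hypothetical stand-ins for the kind of nanotube readings Zheng describes – this is an illustration of the perceptron idea, not his actual method or data.

```python
import numpy as np

# A minimal sketch of the classic perceptron (Rosenblatt, 1958), the
# early AI model that Zheng's "molecular perceptron" plays on. The
# inputs stand in for hypothetical readings from an array of
# DNA-wrapped nanotube sensors; all data here is invented.

rng = np.random.default_rng(0)

# Hypothetical: 100 blood samples, 8 sensor channels, binary label.
X = rng.normal(size=(100, 8))
y = (X @ rng.normal(size=8) > 0).astype(int)  # synthetic separable labels

w = np.zeros(8)   # one weight per sensor channel
b = 0.0           # bias term

for _ in range(20):                       # a few passes over the data
    for xi, target in zip(X, y):
        pred = int(xi @ w + b > 0)        # threshold activation
        update = target - pred            # 0 if correct, +/-1 if not
        w += update * xi                  # classic perceptron update rule
        b += update

accuracy = np.mean((X @ w + b > 0).astype(int) == y)
print(f"training accuracy: {accuracy:.2f}")
```

The appeal of the analogy is that, like this toy model, the sensor array produces a weighted combination of many simple signals that together discriminate between classes – here, hypothetically, diseased versus healthy samples.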

What does he hope to do with such a molecular sensor? He wants to detect ovarian cancer. His collaborator, Mijin Kim, used AI and nanotube sensors to detect ovarian cancer and was recognized for this effort in MIT Technology Review’s Innovators Under 35 list. Perhaps this time, efforts to create novel blood tests, particularly for early detection of disease, will lead to actual progress, and the Theranos debacle will become a distant, unpleasant memory.

Psychedelics

This week, an MIT Technology Review article had “psychedelics” in its heading, which immediately caught my attention. It was not actually about people taking psychedelics, but about a research study showing that virtual reality (VR) can have effects comparable to these potent medications. The researchers, Glowacki et al., hailing from Bristol, UK, developed a special VR framework called “Isness-distributed” (Isness-D). The framework is designed for groups of four: participants from anywhere in the world don VR headsets and share the experience together. The title of their research paper is self-explanatory: “Group VR experiences can produce ego attenuation and connectedness comparable to psychedelics.” I have never seen a scientific paper discuss ego attenuation – a topic usually found in spiritual literature.

There is a short video clip in the MIT Technology Review article that provides a glimpse into what the participants felt. It depicts diffuse white blobs of light – each representing a participant – slowly merging and becoming one. I guess one can call that psychedelic. But the interesting research result is that the therapeutic benefits of these drugs were achieved without any medication. More and more research is showing that VR can have significant therapeutic benefits. There is an entire book on this subject by Brennan Spiegel: “VRx: How Virtual Therapeutics Will Revolutionize Medicine.” And it seems the preferred way to teach the new generation of physicians anatomy and dissection is now VR. There are many applications for this technology in health care and elsewhere, and we are in the beginning stages of a revolution.

Safer AI 

I saw an article in the World Economic Forum newsletter this week about how to bring about safer AI solutions. Their proposed approach turns out to be fairly simple to articulate: develop certification programs for AI systems. AI solutions are being deployed at a frenetic pace across the board, yet there is well-documented evidence of bias in these systems, and often no way to determine whether a program causes harm or what recourse those impacted by its actions should have. Why not have an independent organization with relevant expertise certify solutions, applying context specific to each application to determine what responsible deployment means? Sounds quite reasonable.

These are called “soft law” mechanisms, and they have been successfully deployed in many other industries and applications. For instance, dolphin-safe certification for tuna ensures compliance with the goal of not killing dolphins while fishing for tuna. Similarly, financial institutions can have lending programs certified as fair and responsible, and companies using hiring-screening software can have it certified to avoid bias and discrimination. This is not meant to replace the regulatory approaches being pursued in countries around the world, but rather to augment the overall strategy of ensuring the deployment of safe, responsible AI solutions. Sounds like a good approach, but it does require setting up all these certification bodies, and it’s not clear when that is likely to happen.

I am always looking for feedback and if you would like me to cover a story, please let me know! Leave me a comment below or ask a question on my blogger profile page.  

V. “Juggy” Jagannathan, PhD, is an AI evangelist with four decades of experience in AI and computer science research.