AI Talk: Tele-surgery, NIH and yoga

Aug. 21, 2020 / By V. “Juggy” Jagannathan, PhD

Tele-Surgery

I came across an article in Wired this week that showcased the impact of COVID-19 on the field of tele-surgery. The article describes how, to address COVID-19 concerns, University Hospitals Coventry & Warwickshire in Coventry, UK, resorted to tele-surgery to treat bowel cancer patients. They used the da Vinci system from Intuitive Surgical. It so happened this hospital had the wherewithal to acquire an additional system, at a hefty two-million-pound price tag, to augment the one they already had. The technology allows physicians to perform surgery remotely with assistance from robots. Using these highly precise instruments actually helps patients: incisions are smaller and patients heal faster. Robotic surgery is a growing field, and perhaps the pandemic is going to boost this industry further. Currently, roughly two percent of surgeries are done using robots. It is an evolving field and companies are innovating rapidly; incorporating augmented reality to guide physicians and providing haptic feedback are in the works. Of course, the cost has to come down for access to take off, but social distancing has come to surgery!

NIH COVID-19 Response

The National Institutes of Health (NIH) is now funding efforts to accelerate the adoption of AI in dealing with COVID-19. This month, NIH launched the Medical Imaging and Data Resource Center (MIDRC), a collaborative network of researchers, with the aim of advancing the use of AI in early detection and personalization of COVID-19 treatment. The center aims to create a repository of chest images and other data that can then be used to train deep learning models to detect COVID-19. With $160 million in funding, this seven-year program has high ambitions. In the very short term, the goal of this endeavor is to help deal with the pandemic; in the long term, the focus is on the effective and ethical use of AI in the treatment of diseases.
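
The announcement doesn't go into modeling details, but to make the idea concrete, here is a minimal sketch of the kind of deep learning model such an image repository could feed: fine-tuning an ImageNet-pretrained CNN to flag COVID-19 in chest X-rays. This is purely illustrative, not MIDRC's actual pipeline; the folder layout and labels are hypothetical.

```python
# Illustrative sketch (not MIDRC's actual pipeline): fine-tune a
# pretrained CNN to classify chest X-rays as COVID-19 vs. normal.
# The dataset directory and labels below are hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing; X-rays are grayscale, so the
# single channel is replicated to three.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: chest_xrays/{covid,normal}/*.png
train_set = datasets.ImageFolder("chest_xrays", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

# Start from an ImageNet-pretrained ResNet and replace the classifier
# head with a two-class output (COVID-19 vs. not).
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Transfer learning like this is a common starting point when labeled medical images are scarce, which is exactly the gap a shared repository would help close.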

Body Pose Tracking

I saw this article on the Google AI blog. It’s about a new advance in video processing that accurately tracks body movements. To help understand body motion, key reference points are tracked. The current standard for tracking, the COCO (Common Objects in Context) topology, covers 17 keypoints such as the nose, right eye and left knee. The latest Google system, dubbed BlazePose, tracks 33 points on the body, adding landmarks like the outer right eye, left pinky finger and left heel. With image recognition tuned to detect all of these points relative to a person’s center of mass, the system can track movements more precisely. Uses of this technology are largely related to fitness, and the authors even claim it can analyze yoga poses. Maybe I should take a video of my poses and see what it comes up with!
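
BlazePose is available through Google’s open-source MediaPipe framework, so it is easy to experiment with. Below is a minimal sketch, assuming a local image file (the name yoga.jpg is hypothetical), that runs the pose model on a single frame and prints a few of the landmarks that go beyond the 17-point COCO set.

```python
# Minimal sketch: run MediaPipe's BlazePose model on one image and
# print a few of the landmarks beyond the 17-point COCO topology.
# "yoga.jpg" is a hypothetical local file.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

image = cv2.imread("yoga.jpg")
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # MediaPipe expects RGB

with mp_pose.Pose(static_image_mode=True) as pose:
    results = pose.process(rgb)

if results.pose_landmarks:
    for name in ("RIGHT_EYE_OUTER", "LEFT_PINKY", "LEFT_HEEL"):
        lm = results.pose_landmarks.landmark[mp_pose.PoseLandmark[name]]
        # x and y are normalized to [0, 1] by image width and height
        print(f"{name}: ({lm.x:.3f}, {lm.y:.3f})")
```

Each landmark comes back with normalized x and y coordinates (plus depth and a visibility score), which is the raw material a fitness or yoga application would compare against a reference pose.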

I am always looking for feedback, and if you would like me to cover a story, please let me know. “See something, say something!” Leave me a comment below or ask a question on my blogger profile page.

V. “Juggy” Jagannathan, PhD, is Director of Research for 3M M*Modal and is an AI Evangelist with four decades of experience in AI and Computer Science research.