From 3M Health Information Systems
AI talk: Labor risk, AI risk, robot psychologist
In this week’s blog, I tackled three stories covering a range of topics: an artificial intelligence (AI) algorithm that predicts labor risk in pregnancy, how to manage risk in AI systems and the promising role of a robot psychologist.
Mayo’s labor risk score (LRS)
I saw this news item in my American Medical Informatics Association (AMIA) daily download this week. It reports on a new study published by Mayo. It turns out that the current standards of care for managing pregnancy, advocated by the World Health Organization (WHO), are based on a study done in 1955. That is quite unbelievable.
The new study from Mayo is a systematic analysis of 66,586 deliveries, used to develop a prediction model that computes the probability of an unfavorable labor outcome (the labor risk score (LRS)). They define unfavorable labor outcomes to include events such as unsuccessful vaginal delivery (requiring caesarean delivery (CD)), postpartum hemorrhage and admission to the neonatal intensive care unit (NICU). They studied more than 700 variables in this statistical analysis, and the single most important predictor turns out to be “parity.” I had no idea what that meant until I googled it: it is the number of prior pregnancies that lasted more than 24 weeks. Other top predictors include the number of prior C-sections, age and BMI.
This study is an important step toward determining individualized pregnancy risk factors, allowing clinicians to tailor interventions and improve the odds of a successful labor and delivery (and, hopefully, prompting an update to the outmoded standard of care).
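To make the idea concrete, a risk score of this kind is typically a weighted combination of predictors passed through a sigmoid to produce a probability. The sketch below is purely illustrative: the weights and the functional form are my assumptions, not numbers from the Mayo study.

```python
import math

def labor_risk_score(parity, prior_c_sections, age, bmi):
    """Return an illustrative probability of an unfavorable labor outcome.

    Hypothetical logistic-regression-style model using the top predictors
    named in the study. All coefficients are made up for illustration.
    """
    z = (-1.5
         - 0.6 * parity             # more prior term pregnancies -> lower risk
         + 0.9 * prior_c_sections   # prior C-sections -> higher risk
         + 0.03 * (age - 30)        # advancing maternal age -> higher risk
         + 0.04 * (bmi - 25))       # higher BMI -> higher risk
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid maps z into (0, 1)

# Example: a first pregnancy (parity 0), no prior C-sections
risk = labor_risk_score(parity=0, prior_c_sections=0, age=32, bmi=27)
```

A real model of this kind would be fit to the 66,586-delivery dataset and validated before any clinical use; the point here is only the shape of the computation.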
AI risk management framework
The National Institute of Standards and Technology (NIST) has just published a second draft of its “AI Risk Management Framework” and is seeking comments; a workshop is planned for next month. Though managing risk is essential for any type of software, AI-based systems bring some unique considerations into play. Foremost among them are ensuring safety and fairness, managing bias, and being transparent and explainable. The framework covers every aspect of designing, developing, deploying and maintaining AI systems. Risk is defined in terms of harm to people, organizations and society (system/ecosystem).
One of the challenges of risk management in AI systems is developing methods for measuring risk. Risk tolerance is another consideration: NASA-like (near-zero) tolerance is infeasible for most organizations, but each organization still needs to articulate what is acceptable and what is not. The NIST report is a good wakeup call for organizations developing and deploying AI solutions. It is a resource that provides cues on how to map (the context of the application and its risks), measure (methods to assess risks) and manage (prioritize and address risks).
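The map/measure/manage cycle can be sketched as a minimal risk register. This is my own illustrative structure, not anything defined by the NIST framework; the field names, the 1-5 scales and the likelihood-times-severity metric are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    description: str     # map: what harm, and to whom (people/org/ecosystem)
    context: str         # map: where in the system the risk arises
    likelihood: int = 0  # measure: assessed 1 (rare) to 5 (frequent)
    severity: int = 0    # measure: assessed 1 (minor) to 5 (critical)

    def score(self):
        # A simple illustrative risk metric.
        return self.likelihood * self.severity

def manage(risks, tolerance):
    """Manage: return risks exceeding the tolerance, highest score first."""
    flagged = [r for r in risks if r.score() > tolerance]
    return sorted(flagged, key=lambda r: r.score(), reverse=True)

# Example: two mapped risks, one organizational tolerance threshold
risks = [
    AIRisk("biased triage recommendations", "clinical deployment",
           likelihood=3, severity=4),
    AIRisk("mislabeled dashboard metric", "reporting UI",
           likelihood=2, severity=1),
]
flagged = manage(risks, tolerance=4)
```

Articulating a numeric `tolerance` up front is the part the paragraph above emphasizes: without it, there is no principled way to decide which risks demand action.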
Robot psychologist
This study, published by a University of Cambridge team of roboticists, computer scientists and psychiatrists, tested the efficacy of a robot psychologist. They built a socially assistive robot (SAR) using a small, cute, programmable Nao robot. The patients? An experimental cohort of 28 children, 8-13 years old. The task? To administer the Short Mood and Feelings Questionnaire (SMFQ) and the Revised Child Anxiety and Depression Scale (RCADS).
So, what were the findings of this empirical study? Essentially, they found that children are more likely to open up and share their feelings with SARs than through traditional interviewing techniques. It is a small study with a small cohort, but still interesting as a potential avenue to explore. Several studies have explored the role of chatbots in mental health assessment in adults, but this is one of the first to focus on children.
In the news
Turing Award recipient Professor Geoffrey Hinton, based in Canada, was honored with a Royal Medal from the Royal Society for his pioneering work in deep learning.
I am always looking for feedback and if you would like me to cover a story, please let me know! Leave me a comment below or ask a question on my blogger profile page.
“Juggy” Jagannathan, PhD, is director of research for 3M M*Modal and is an AI evangelist with four decades of experience in AI and computer science research.