AI Talk: Digital medicine and language models

Dec. 11, 2020 / By V. “Juggy” Jagannathan, PhD

This week’s AI Talk…

Telemedicine, digital health care transformation

Digital transformation in health care

Digital transformation began in earnest a few decades ago in the financial sector. Every sector of the economy, from travel and retail to manufacturing, has since embraced it, but health care has been extremely slow to adopt change. This is the case for many reasons. First, there is the regulatory landscape, then there is patient privacy to consider, not to mention the EHR fiefdoms. Add to this the fact that hospitals and health care facilities compete fiercely to protect their markets and the patient populations they serve. No wonder health care has evolved into antiquated silos. It can seem like a morass of inefficiencies. For example, facilities still use fax, a technology that became practically obsolete a decade or more ago. Scanning paper charts is still popular, the epitome of inefficiency.

It took a pandemic to shake things loose. The digital transformation of health care is now firmly rooted and accelerating. This week, The Economist published a nice summary of what is happening in health care on this front. One metric conveys the scale of the transformation: CB Insights, a market research firm, notes that $8.4 billion in funding flowed into privately held digital health startups in the third quarter of this year alone. And the combined valuation of as-yet-unlisted digital health companies worth a billion dollars or more (the unicorns) stands at a staggering $110 billion. Telemedicine, wearable technology, and patients’ and doctors’ growing acceptance of technology are all factors playing into this COVID-19 maelstrom. Big tech has gone all in on health care! It’s about time. Digital efficiencies are sorely needed to improve care and reduce costs in this trillion-dollar industry.

Language model costs

One big tech story this week was the firing of Timnit Gebru, co-lead of Google’s Ethical AI team. I am not going to comment on the controversy per se; instead, I want to focus on the research paper at the center of this event, co-authored by half a dozen Google researchers and university professors. MIT Technology Review has summarized the paper, which deals with the costs of creating large language models. The paper itself is still undergoing peer review and has not yet been published.

Language models are the underlying technology behind many current natural language understanding efforts. These models power the predictive typing you see when you do a Google search or type on your smartphone (a toy sketch of next-word prediction follows the list below). Recent massively trained language models like GPT-3 have shown impressive performance on a wide range of tasks, as covered earlier in this blog series. Now, what are the costs of creating these models that concern the authors? Here’s the gist:

  • Environmental costs – A transformer trained with neural architecture search has a carbon footprint roughly 315 times that of a round-trip flight between New York and San Francisco. In short, the electricity consumption is phenomenal.
  • Financial costs – Recent large models cost millions of dollars to train. This prohibitively high price tag means only large companies can create them, and the fruits of such models are not equitably distributed.
  • Data and bias costs – These models are trained on massive amounts of data, which can render them inscrutable. They also absorb all the biases present in that data. Additionally, they are not smart enough to capture the evolving nature of language.
  • Illusion of understanding – These models provide an illusion of meaning while clearly not understanding anything they say or generate.

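To make the predictive-typing idea concrete, here is a minimal, illustrative sketch of next-word prediction using a simple bigram model in Python. The corpus and the names in it are invented for this example; modern models like GPT-3 replace these simple word-pair counts with massive neural networks trained on enormous corpora, which is precisely where the costs listed above come from.

    # A toy bigram "language model" for next-word prediction. Purely
    # illustrative: real language models are neural networks trained on
    # vast corpora, but the task -- predict the next word -- is the same.
    from collections import Counter, defaultdict

    corpus = (
        "digital transformation in health care is accelerating and "
        "digital transformation in finance happened decades ago"
    ).split()

    # Count how often each word follows each preceding word.
    bigram_counts = defaultdict(Counter)
    for prev_word, next_word in zip(corpus, corpus[1:]):
        bigram_counts[prev_word][next_word] += 1

    def predict_next(word):
        """Return the word most often seen after `word`, or None if unseen."""
        followers = bigram_counts.get(word)
        if not followers:
            return None
        return followers.most_common(1)[0][0]

    print(predict_next("digital"))          # -> transformation
    print(predict_next("transformation"))   # -> in
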
These are all legitimate drawbacks of creating large language models. The authors’ goal is to recommend caution: to encourage researchers to sit back, take stock and approach the problem with a fresh perspective. That is something all researchers should pay attention to.

Acknowledgement

My friend and colleague, Philippe Truche, pointed me to The Economist article.

I am always looking for feedback and if you would like me to cover a story, please let me know. “See something, say something!” Leave me a comment below or ask a question on my blogger profile page.

V. “Juggy” Jagannathan, PhD, is Director of Research for 3M M*Modal and is an AI Evangelist with four decades of experience in AI and Computer Science research.