AI Talk: COVID-19 modeling, climate change, future of AI

April 10, 2020 / By V. “Juggy” Jagannathan, PhD

This week’s AI Talk…

Modeling the virus course

Dr. Anthony Fauci, one of the main faces behind the evolving guidelines to fight the spread of the coronavirus, has cited models predicting anywhere from 100,000 to several million deaths, depending on how we respond to the pandemic. He has also noted that the models behind these numbers are quite complicated and rest on a range of assumptions. This got me wondering: How exactly are these models built and validated? This article in Wired is a decent exposition of how the models are constructed and who is behind such endeavors.

These models divide a population into compartments and estimate the number of people who are susceptible, exposed, infected and removed. Modelers then make assumptions about the rate at which infection spreads (dubbed the “reproductive number”), the average time it takes for infection to be transmitted and a myriad of other factors. Yes, the models can get complicated quickly. They also rely on data, and that data is continuously evolving.

Models are a necessary adjunct to policy making in these trying times; the recent extension of stay-at-home recommendations through the month of April in the U.S. is certainly based on their dire predictions. One institute leading the charge on modeling in the U.S. is the Institute for Health Metrics and Evaluation (IHME), part of the University of Washington School of Medicine. Its models helped Seattle-area hospitals prepare, and it just released a nationwide prediction of the pandemic’s impact, broken down by state. The forecast captures the current mandates in each state and predicts whether each state’s health system is likely to be overrun. Certainly worth a look.
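For the curious, the susceptible-exposed-infected-removed (SEIR) bookkeeping described above can be sketched in a few lines of code. This is a toy simulation with simple daily update steps; the parameter values in the example are illustrative assumptions for demonstration only, not COVID-19 estimates, and real models layer far more structure (age groups, interventions, uncertainty) on top of this skeleton.

```python
def seir(population, initial_infected, r0, incubation_days, infectious_days, days):
    """Simulate a basic SEIR compartmental model with daily Euler steps."""
    beta = r0 / infectious_days    # transmission events per infectious person per day
    sigma = 1.0 / incubation_days  # rate at which exposed people become infectious
    gamma = 1.0 / infectious_days  # rate at which infectious people are removed
    s = float(population - initial_infected)
    e, i, r = 0.0, float(initial_infected), 0.0
    history = [(s, e, i, r)]
    for _ in range(days):
        new_exposed = beta * s * i / population  # S -> E
        new_infectious = sigma * e               # E -> I
        new_removed = gamma * i                  # I -> R
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_removed
        r += new_removed
        history.append((s, e, i, r))
    return history

# Illustrative run: 1M people, 10 initial cases, R0 of 2.5,
# 5-day incubation, 7-day infectious period, simulated for 180 days.
curve = seir(1_000_000, 10, 2.5, 5, 7, 180)
peak_infected = max(i for _, _, i, _ in curve)
```

Even this toy version shows why the reproductive number matters so much: rerunning it with a lower `r0` (as social distancing aims to achieve) flattens the peak of the infected curve dramatically.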

Coronavirus and Climate Change

This weekend I saw images of a wonderfully clear Los Angeles skyline on social media. The reason? Zero smog. One byproduct of the shutdown is that millions of people are not out driving and polluting the atmosphere. Does this bode well for the future of climate change? Not so fast, argues Meehan Crist in a well-written, nuanced article in the New York Times. The impact of the global shutdown has indeed been good for the environment. From Italy to China, the air and water have become much cleaner, and global carbon emissions have fallen. In China, the significant drop is due not only to people sheltering in place but also to the virtual standstill of the manufacturing sector. However, this pandemic-induced cleanup of the environment is likely to be short-lived, Crist argues. For one thing, oil is cheap now, and when the effects of the pandemic wane there will be a lot of pressure to use this resource, shifting focus away from renewables. There will surely be systemic changes in how people move and work, with telework becoming more prevalent, but that will not be enough. Policy makers have an opportunity to rethink how to move forward.

Classical or Deep Learning

MIT Technology Review featured a debate on the future course of AI: specifically, whether the future will incorporate more classical approaches to AI or depend purely on deep learning solutions. The debaters: Gary Marcus, professor emeritus at NYU and a classical AI proponent, and Danny Lange, Vice President at Unity and a deep learning enthusiast. The arguments for and against were largely predictable. Professor Marcus argued for keeping classical AI in the mix alongside deep learning, while also exploring newer architectures that go beyond all current solutions. Mr. Lange, on the other hand, believes it is just a matter of getting enough training data, and that new and emerging neural architectures will continue to learn and evolve rapidly. There is some truth to both arguments, but I lean a bit more toward Professor Marcus’ side. Take the example of Alexa skills. There are over 100,000 skills that touch every aspect of your life. A vast majority of these skills use classical AI for their reasoning and deep learning-based speech recognition: basically, a hybrid solution. Another example comes from a lecture at CMU (attended thanks to Zoom) by Peter Clark, who heads the Aristo project at the Allen Institute for AI. Aristo has made impressive progress in developing AI solutions that can pass 8th grade science tests. Clark and his team use the BERT transformer architecture to achieve these results. Clark acknowledged that sometimes they have no idea how their machine learning solution arrives at the correct answer, which also means they cannot say why some answers are wrong. Clark’s team is now developing solutions that can do multi-step reasoning, provide explanations and model common sense knowledge. In essence, they are exploring hybrid solutions for reasoning and solving complex problems.

I am always looking for feedback, and if you would like me to cover a story, please let me know. “See something, say something!” Leave me a comment below or ask a question on my blogger profile page.

V. “Juggy” Jagannathan, PhD, is Director of Research for 3M M*Modal and is an AI Evangelist with four decades of experience in AI and Computer Science research.

During a pandemic, healthcare information is gathered, studied, and published rapidly by scientists, epidemiologists and public health experts without the usual processes of review. Our understanding is rapidly evolving and what we understand today will change over time. Definitive studies will be published long after the fact. 3M Inside Angle bloggers share our thoughts and expertise based on currently available information.