AI talk: AI and scientific risk

March 22, 2024 / By V. “Juggy” Jagannathan, PhD

I recently read a fascinating editorial in Nature titled, “Why scientists trust AI too much – and what to do about it.” The editorial is based on a research perspective penned by Lisa Messeri, an anthropologist at Yale, and Molly Crockett, a cognitive scientist at Princeton. What risks have they identified with using AI? Let’s dive in.


AI and illusions of understanding in scientific research

The editorial highlights a perspective article by Messeri and Crockett that examines more than 100 peer-reviewed papers on researcher AI use over the past five years. Their warning is clear: “The proliferation of AI tools in science risks introducing a phase of scientific inquiry in which we produce more but understand less.” They identify four dominant themes for how scientists use AI tools:

  • AI as oracle: A vehicle to comb literature and provide concise understandable summaries
  • AI as surrogate: A means to extract data from experiments
  • AI as quant: A tool to analyze vast quantities of data and provide meaningful insights
  • AI as arbiter: An assistant pressed into the role of a peer reviewer to evaluate the merit of findings

Proliferation of the above can create a vicious cycle: as the literature grows exponentially, so does the need for oracle, surrogate, quant and arbiter, potentially resulting in producing more while understanding less. But that’s not all. Messeri and Crockett describe how this can lead to epistemic risks – a broad class of risks arising from holding incorrect beliefs. Such beliefs can create the illusion that one knows more than one actually does, is more objective than one actually is, and understands more than one actually does.

Epistemic risks can lead to scientific monocultures (a term I hadn’t heard of prior to reading this article). Messeri and Crockett describe scientific monocultures through a metaphor. In agriculture, monoculture means growing only one crop species at a time. This makes the process efficient and crop yields go up. Over time, however, the crop becomes more susceptible to pests and disease. A similar affliction can befall science: when researchers converge on the same AI tools, the questions they ask and the methods they use narrow in the same way, leaving the field vulnerable to shared blind spots.

Calling all scientists

Messeri and Crockett are issuing scientists a call to action: be careful with how you use AI, and be aware of the risks in adopting AI technology. To paraphrase guidance I recently heard, AI use isn’t an equal collaboration between a human and technology. We need to transition from the mindset of “human-in-the-loop” to one in which the human retains control of the technology – “human-in-the-top.” I call that good advice.

The exponential ‘ensh**tification’ of science

Coincidentally, I saw a blog post by Gary Marcus, an outspoken critic of large language models (LLMs). Marcus presents a series of examples showing how a lot of author-submitted research contains obvious LLM-generated content. It appears that using an LLM as a surrogate to help write papers is already in full swing – another clear example of a human not remaining in control.

Have feedback or a blog topic of interest? Leave a comment below or ask a question on Juggy’s blogger profile page. 

“Juggy” Jagannathan, PhD, is an AI evangelist with four decades of experience in AI and computer science research.