From 3M Health Information Systems
AI talk: “The Coming Wave,” Innovator of the Year
A timely book by the founder of DeepMind, Mustafa Suleyman, is the main topic of discussion in this blog. The other topic is the pioneering work done by University of Wisconsin-Madison professor Sharon Li, which garnered her the Innovator of the Year award from MIT Technology Review.
Book by: Mustafa Suleyman with Michael Bhaskar
Mustafa Suleyman co-founded DeepMind in 2010; Google acquired the company in 2014. DeepMind has produced impressive breakthroughs in core artificial intelligence (AI) technology. Its AlphaGo program, trained using reinforcement learning, defeated the world champion in Go. Its AlphaFold program solved a 50-year-old open problem: determining how a protein folds, i.e., what exactly the 3-D structure of a protein is. DeepMind publicly released the predicted structures of more than 200 million proteins last year – a release that has already had a profound impact on research and is poised to revolutionize synthetic biology. Proteins are the building blocks of all living things.
Suleyman’s book, “The Coming Wave,” is an attempt to outline the explosive impact of generative AI, synthetic biology and quantum computing. In one respect, his message is similar to one we saw a few years ago in Peter Diamandis’ book, “The Future Is Faster Than You Think”: the convergence of different technologies can accelerate the emergence of novel solutions to all kinds of problems, from climate change to innovative prescription drugs. That is the positive side. When discussing the negative impacts, Suleyman does not pull any punches. He portrays a series of dystopian scenarios on a variety of fronts. He refers to addressing these negative outcomes as the “containment” problem: How does one put bounds on the technology in such a way that we enjoy the positive benefits while avoiding the negative impacts?
So, what are the extreme negative impacts the coming wave of technology can potentially unleash? Here are just a few highlighted in the book:
- Deep fake content in the form of speech and video can destabilize nation states
- Cyberattacks and security threats to infrastructure bolstered by AI power can be crippling
- Drone warfare with autonomous agents will be accessible to rogue states and bad actors alike
- Advances in synthetic biology make it possible to create drug-resistant pandemic strains
- Automation of all kinds of jobs can lead to massive labor market disruptions
Suleyman foresees two opposite responses to this deluge of factors: an authoritarian surveillance state at one extreme, and islands of decentralized control – communities that grow their own food and live in a cocoon – at the other. But the major argument of the book is that we must act now to chart a better course, because the dystopian future he envisions could be closer than we think.
So, what is the proposed remedy for containment? He outlines a 10-point plan in the book, but here are a few of the main points:
- Safety – Identify and address technology safety issues, from generative AI to gene editing.
- Audits – Set up responsible organizations to audit the release of any technology and its applications.
- Businesses: profit + purpose – Regulators should figure out a way to incentivize purpose-driven solutions.
- Culture: respectfully embracing failure – Just as in the airline industry, publicize failures in technology to learn quickly how to counteract missteps.
I am, in general, optimistic about the positive uses of technology, but the concerns Suleyman raises in his book resonate to some degree and give me pause. The rapid acceleration of generative AI tech has caught many unawares. So has the release of the structures of 200 million proteins. We are poised at a period of big transformations driven by technology. We do need to pay attention to these warning bells.
Sharon Li – Innovator of the Year – MIT Technology Review
Sharon Li, professor at the University of Wisconsin-Madison, was named Innovator of the Year by MIT Technology Review – and I was curious to see what she had done. It turns out she led a research team that found a way to identify out-of-distribution (OOD) samples. What is OOD, and why do we care about it? Large language models and large multi-modal models are trained on trillions of data points. But if you ask one about a question or concept it has not seen before, that input is considered OOD. If the model has not seen such an entity before, its response is likely to be a hallucination or just an educated guess. If there is a reliable way to determine that a question is OOD, then the model can behave differently. It now has a way of determining what it does not know! At least it can express a bit of humility in answering the question!
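To make the idea concrete, here is a minimal sketch of one popular family of OOD detectors: energy-based scoring over a classifier's logits (the approach Li's group has explored in its research). The function names and the threshold value are illustrative assumptions, not taken from her papers; the point is simply that a single scalar score lets the model flag inputs it likely has not seen in training.

```python
import numpy as np

def energy_score(logits, temperature=1.0):
    """Negative free energy of a classifier's raw logits.

    Higher scores indicate the input looks in-distribution;
    lower scores suggest the input may be OOD.
    Computed as T * logsumexp(logits / T), numerically stabilized.
    """
    z = np.asarray(logits, dtype=float) / temperature
    m = z.max()
    return temperature * (m + np.log(np.exp(z - m).sum()))

def is_ood(logits, threshold):
    # Flag as out-of-distribution when the energy score
    # falls below a threshold tuned on held-out in-distribution data.
    # The threshold here is a hypothetical placeholder.
    return energy_score(logits) < threshold

# A confident prediction yields a high energy score;
# flat, uncertain logits yield a low one.
confident = [10.0, 0.1, 0.2]
uncertain = [0.3, 0.2, 0.1]
print(energy_score(confident) > energy_score(uncertain))
```

In practice the threshold would be calibrated on validation data so that, say, 95 percent of in-distribution inputs pass; anything flagged can then trigger a fallback such as abstaining or expressing uncertainty.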
I am always looking for feedback and if you would like me to cover a story, please let me know! Leave me a comment below or ask a question on my blogger profile page.
“Juggy” Jagannathan, PhD, is an AI evangelist with four decades of experience in AI and computer science research.