AI talk: The Turing Trap

Nov. 18, 2022 / By V. “Juggy” Jagannathan, PhD

This week, I want to review an interesting and instructive conversation I heard about the Turing Trap. 


Turing Trap 

The Brookings Institution recently hosted a discussion between Erik Brynjolfsson, director of the Stanford Digital Economy Lab, and moderator Anton Korinek of Brookings. The title of the talk: The Turing Trap: A conversation with Erik Brynjolfsson on the promise and peril of human-like AI. 

The central premise of the discussion is that current artificial intelligence (AI) efforts have the wrong end goal. What goal is that? To imitate humans – à la “The Imitation Game” – a benchmark that has become known as the Turing test. If a machine behaves in a manner indistinguishable from a human, it passes the Turing test.  

Why is this such a bad idea? It fundamentally focuses on eliminating human labor, as opposed to augmenting it. There are certain tasks which humans are not really suited for that machines can perform spectacularly well, and there are tasks that humans can do effortlessly that are quite hard for computers. The argument Erik advances is: why not focus AI’s creative energies on making humans more productive? This not only improves human productivity, but has the added benefit of not displacing workers and leaving them jobless. The tax code, he argues, also has the wrong incentives: it favors capital investment over labor. I remember Bill Gates arguing a while ago that we need to tax robots in a manner similar to human workers, and I agree.  

For a concrete instance of augmenting versus replacing, Erik gives the example of a call center operator assisted by an AI agent, making the operator more effective and efficient in handling calls. This results in a much better customer experience. Almost every one of us has experienced dealing with some inane automated system answering our calls. I really like the emphasis on augmenting – it also happens to be the driving force behind all our AI work at 3M HIS, where our focus is on helping physicians do their tasks more efficiently. 

Stanford established a new Center for Research on Foundation Models (CRFM) last year, as part of its Institute for Human-Centered Artificial Intelligence. Bringing together a multi-disciplinary team of hundreds of researchers, the group put out a 200-plus-page research report: On the Opportunities and Risks of Foundation Models. Erik wrote a section on economics, looking into topics such as productivity and wage inequality.  

The reason for CRFM’s existence is to answer the following question: Can foundation models be democratized and built in a way that avoids the perils while enhancing the utility? What are foundation models? The core idea is a massively large model, trained on a vast amount of data, that can then be leveraged for a wide variety of downstream tasks. Models such as GPT-3, LaMDA and DALL·E fall into this category. The discussion provided a window into this research. 
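The “one model, many tasks” idea can be sketched in a few lines of toy Python. This is purely illustrative – the class and method names below are hypothetical stand-ins, not a real model API – but it captures the pattern: a single pretrained base is lightly adapted (in practice, via fine-tuning or prompting) to serve several different tasks.

```python
class FoundationModel:
    """Toy stand-in for a large pretrained model (hypothetical, not a real API)."""

    def __init__(self, name):
        self.name = name  # imagine billions of pretrained parameters here

    def adapt(self, task):
        # In practice: fine-tuning or prompting; here we just tag the output
        # to show one base model serving many downstream tasks.
        return lambda text: f"[{self.name}/{task}] processed: {text}"


base = FoundationModel("gpt-3-like")

# The same base model is specialized for several distinct tasks.
summarize = base.adapt("summarization")
translate = base.adapt("translation")
classify = base.adapt("sentiment")

print(summarize("A long article..."))
print(translate("Bonjour"))
print(classify("I love this product"))
```

The point of the sketch is the shape of the workflow, not the internals: the expensive pretraining happens once, and each new task only pays the (comparatively small) cost of adaptation.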

Another interesting topic of the conversation centered on measuring the value behind digital tools. Erik calls it GDP-B, Gross Domestic Product for Benefits, i.e., the value created by frequently free (and hence unaccounted for in GDP) digital tools such as search, Wikipedia, Facebook and WhatsApp. His team is systematically measuring the value individuals place on such tools by offering people money to stop using them. The goal is to update the GDP model, invented almost a century ago, for the current century. 

All in all, a fascinating discussion laced with moral and ethical considerations on how to build future AI models and avoid the Turing Trap. 

I am always looking for feedback and if you would like me to cover a story, please let me know! Leave me a comment below or ask a question on my blogger profile page. 

“Juggy” Jagannathan, PhD, is an AI evangelist with four decades of experience in AI and computer science research.