AI Talk: Biased algorithms, geography of AI

September 24, 2021 / By V. “Juggy” Jagannathan, PhD

This week’s blog revisits a recurring theme: biased algorithms. The other story revolves around the Brookings Institution’s findings on the geographic distribution of AI in the U.S.

Biased algorithms

This month, Health Affairs carried a blog declaring that AI algorithms are failing communities of color. In recent years we have seen an increase in the detection of bias in algorithms (note the emphasis on detection). So, what are some egregious examples? Algorithms that determine who gets kidney transplants, how much money a person is entitled to for brain injuries suffered on the football field, and which patients receive what care through risk prediction. In all these cases, studies have confirmed the existence of racial bias, with African Americans receiving unfair and unequal treatment.

What is the remedy? How should this problem be addressed? We have explored the role data plays in the development of algorithms in past blogs. Garbage in, garbage out. We need to ensure the data collected accurately represents the problem it is meant to address. Frequently, models are trained on a single dataset that differs completely from the environment where the model is actually deployed.
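One simple way to catch this kind of problem is to look at a model's error rate per subgroup rather than in aggregate. Below is a minimal sketch of such an audit; the group labels, predictions and ground-truth values are all hypothetical toy data, not from any of the studies discussed above.

```python
from collections import defaultdict

def error_rate_by_group(records):
    """Compute the misclassification rate for each subgroup.

    Each record is a (group, predicted, actual) tuple.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy data: a model that looks fine overall can still fail one group badly.
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0),
]
rates = error_rate_by_group(records)
print(rates)  # group A: 0.0, group B: 0.5 — a 25% aggregate error hides this gap
```

The aggregate error here is only 25 percent, which sounds tolerable, yet every mistake the model makes falls on group B. Disaggregated evaluation like this is the "detection" half of the story; fixing the underlying data is the harder half.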

The Health Affairs blog points to a few recommendations by various groups to address bias. An obvious one is to include subject matter experts (SMEs) in the application design process. Recently, the American Medical Association (AMA) announced a strategy to embed racial justice and health equity concepts into every innovation. Another recommendation is to beef up federal regulations to ensure algorithms do not perpetuate or exacerbate inequities. The FDA has a start on this front with its Software as a Medical Device Action Plan, but much more needs to be done.

When algorithms perform poorly, the fault lies not with the algorithm per se but with its creators: humans. Hence, it is up to us to ensure that these algorithms are created carefully, evaluated properly and maintained regularly.

Geography of AI

The Brookings Institution, a policy think tank based in Washington D.C., just released an interesting report charting the research and adoption of AI across the U.S. The authors, Mark Muro and Sifan Liu, painstakingly analyze various parameters related to AI research and AI adoption, broken down by geographic location. While predictions of AI’s future impact swing between utopia and dystopia, it is unquestionable that AI has the power to change our future. Core to this discussion is whether AI is being used to augment human functions (task augmentation) or to replace humans (task automation). 3M HIS’ use of AI is quite extensive in clinical documentation, and it falls squarely into the category of task augmentation (improving physician efficiency and creating time to care).

The report measures AI research by tracking federal AI funding, publications in major venues and patent applications. Commercialization measures are driven by the number of AI-related startups and AI job postings. Currently, about five percent of the $40 billion federal research and development budget goes toward AI-related research. A similar share, five percent, of startups are AI-related. AI job postings account for less than one percent of all IT postings.

What about where these activities are happening? The geographic distribution of research and adoption is unsurprising, but disturbing. According to the Brookings Institution report, the Bay Area’s Silicon Valley is a super center, accounting for one fourth of all AI-related activities, while 261 metro areas record virtually no significant AI activity. Quite an uneven distribution. Metro areas with prominent universities benefit from federal research and development. Many other communities could take advantage of emerging AI technology, but they need to act.

The report is a wake-up call to state and local governments to take a hard look at what they can do to improve their lot in the coming tsunami of AI technology. The authors urge that the Bay Area take the lead on ensuring that AI is developed with a strong ethical base—along the lines of the story above. They also urge all research centers funded by federal research and development to collaborate with industries to reinforce local economies and promote useful adoption of AI regionally.

The main message of the report is that state and local entities need to do a total reevaluation of their strategy with respect to AI. As an example, the authors cite the report Brookings did for Louisville, Ky. I lived in Louisville a long time ago; the city is home to several major industrial players, from GE and UPS to health care companies like Humana. Though such regions may not be able to conduct fundamental research, they are nonetheless positioned to promote data science and the adoption of AI through these local industries.

What about all the places where no AI activity has currently been detected? The authors of the Brookings Institution report simply acknowledge it is going to be hard for them in the future. That message is a bit too pessimistic in my opinion. Every city needs to assess its current capacity to support an AI workforce and plan for the impact AI can potentially have in its backyard.

I am always looking for feedback and if you would like me to cover a story, please let me know! Leave me a comment below or ask a question on my blogger profile page.

V. “Juggy” Jagannathan, PhD, is Director of Research for 3M M*Modal and is an AI Evangelist with four decades of experience in AI and Computer Science research.