AI Talk: Regulating algorithms, LO-shot and Consumer Electronics Show (CES) 2021

Jan. 15, 2021 / By V. “Juggy” Jagannathan, PhD


Regulating algorithms

We need no further proof that AI has made inroads into our lives than the fresh regulations being contemplated by New York City. The city has proposed regulating the algorithms that govern hiring. I was aware that programs exist to assist in screening job applicants, but I was surprised to see how pervasive the practice has become! I found a recent buyer’s guide for organizations that compared 13 different job applicant screening solutions. The regulators are right to be worried.

The city’s proposal would require companies to disclose to job candidates how their applications will be screened. Vendors of screening software would have to undergo annual audits to ensure their tools do not discriminate. Bias in AI algorithms has been a rallying cry among researchers for some time now, and this proposed legislation is indeed a step in the right direction. However, the devil is in the details: there is no standard mechanism for conducting such audits, and there is currently no organization that can certify such software. These issues must be addressed, hopefully soon. These are some of the growing pains of the age of AI.

Less than one-shot (LO-shot)

SingularityHub, a site devoted to exploring tech advances, featured an article about a new area of research that is garnering attention: how to learn concepts from minimal data. Typical machine learning solutions require thousands upon thousands of what are referred to as labeled data points. If you want to teach a machine what a cat is, you show it thousands of pictures of cats.

It has long fascinated researchers that this approach is not how humans learn. Typically, just a few samples are enough for humans to learn a concept and generalize from it. GPT-3, which we explored in an earlier blog, made advances in this direction with its few-shot learning capability. Now, can we go further and learn a concept with no direct examples of it at all or, as it is referred to here, with less than one shot? A child shown a picture of a rhino and a horse can be told that a unicorn is something in between and successfully identify one.

MIT Technology Review showcases research efforts at the University of Waterloo exploring this question. You can access their research paper here. The researchers are attempting to learn N classes from fewer training samples than there are classes to classify! Their trick? Tagging each data sample with a soft label. What is a soft label? Essentially, it is a label that can represent multiple classes at once. If you have a picture of the digit three, you can soft label it as 60 percent a three, 30 percent an eight and 10 percent a zero. So, you are in effect learning something about three different categories from just one sample. The research presented is preliminary and mostly mathematical, but it represents a promising avenue of exploration.
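To make the idea concrete, here is a minimal sketch in Python (my own illustration, not code from the Waterloo paper): a soft label is simply a probability distribution over classes, and a model's prediction can be scored against it with the usual cross-entropy loss. The class names and percentages are the ones from the digit example above.

```python
import numpy as np

# Classes used in the example above: the digits 3, 8 and 0.
classes = ["3", "8", "0"]

# Hard (one-hot) label: "this picture is definitely a 3".
hard_label = np.array([1.0, 0.0, 0.0])

# Soft label: the same picture carries information about three classes,
# 60 percent "3", 30 percent "8", 10 percent "0".
soft_label = np.array([0.60, 0.30, 0.10])

def cross_entropy(target, predicted, eps=1e-12):
    """Cross-entropy between a target distribution and a model's
    predicted class probabilities (the loss minimized during training)."""
    return -np.sum(target * np.log(predicted + eps))

# Suppose a classifier outputs these probabilities for the picture.
prediction = np.array([0.55, 0.35, 0.10])

print("loss against the hard label:", round(cross_entropy(hard_label, prediction), 3))
print("loss against the soft label:", round(cross_entropy(soft_label, prediction), 3))
```

Training against the soft target pushes the model toward the whole distribution rather than a single class, which is how one sample can carry information about several categories at once.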

Consumer Electronics Show 2021

The Consumer Electronics Show, an annual extravaganza in Las Vegas, attracted 175,000 people last year. This year, the show went virtual for the first time ever. It showcases the latest in electronic gadgetry and is a reliable bellwether for future tech. Wired published a list of notable gadgets and devices from the show.

From laptops to cameras, there was plenty to see, but the device that caught my attention from an AI standpoint was the CareClever Cutii, a mobile companion robot for seniors. It rolls around on wheels and handles video calls: talking with family, friends and even doctors! It can also be summoned if the senior experiences a fall and can help call an emergency contact. One other device that intrigued me from an AI perspective was Wired's pick in the Best in Health category. I'll leave it to you all to check out what this device is (hint: it will give you advice on what to eat)!

I am always looking for feedback and if you would like me to cover a story, please let me know. “See something, say something!” Leave me a comment below or ask a question on my blogger profile page.

V. “Juggy” Jagannathan, PhD, is Director of Research for 3M M*Modal and is an AI Evangelist with four decades of experience in AI and Computer Science research.