AI Talk: Emotional AI, Fake news, stereotypes and vacations

Sept. 6, 2019 / By V. “Juggy” Jagannathan, PhD

Emotional AI

Fortune featured an interesting video interview this week with Rana el Kaliouby, CEO of Affectiva, a spin-off from the MIT Media Lab. What does the company do? They study human expressions. During the interview, the interviewer, Jeremy Kahn of Fortune, makes various faces and the corresponding emoji shows up on the screen: he smiles and a smiley emoji appears; he frowns and another one shows up. Now, what is the big deal about such a capability? It turns out it has a lot of applications. For example, it can help with negotiations, says Professor Curhan of MIT. Ms. el Kaliouby mentions a slew of other applications in the video interview: communicating the effects of bullying, identifying depression, communicating with caregivers, etc. Because such technology can be quite invasive, privacy concerns have surfaced. To address these concerns, Affectiva uses an opt-in policy for its users, and the company claims it has turned down money from companies in the security industry. The interview is worth watching.

Fake-news generator

In February, OpenAI, a San Francisco-based AI research lab, released only a fraction of the large language model it had trained. Why just a fraction? They were worried about the model's potential to be used to generate fake news. Fast forward six months: they have now released a version roughly half the size of the full model, which has 1.5 billion trainable parameters. Along with the model, they released a report that argues for the staged release of such models and explores the social impacts of their decision. Essentially, the argument is that caution should be exercised when releasing a model, in order to observe its impact and take appropriate action. Caution is certainly warranted; however, it is unclear how that helps with this particular type of model. As one commentator, Richard Socher of Salesforce, quipped: “You don’t need AI to create fake news! People can easily do it.”
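
To give a sense of what “releasing a model” means in practice, here is a minimal sketch of generating text with a publicly released GPT-2 checkpoint. This example is not from the article; it assumes the Hugging Face transformers library, where "gpt2-large" is the roughly 774-million-parameter checkpoint, about half the size of the full 1.5-billion-parameter model.

```python
# Minimal sketch (not from the article): generating text with a
# released GPT-2 checkpoint via the Hugging Face transformers library.
# "gpt2-large" is the ~774M-parameter model, roughly half the size of
# the full 1.5B-parameter model discussed above.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large")
model = GPT2LMHeadModel.from_pretrained("gpt2-large")

prompt = "Scientists announced today that"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation; sampling (rather than greedy decoding) is what
# makes the generated "news" read fluently and vary from run to run.
output = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,
    top_k=50,
    top_p=0.95,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

A few lines like these are enough to produce fluent, news-like continuations from any short prompt, which is precisely the capability that prompted OpenAI's staged-release caution.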