AI Talk: Neural database, deepfake clones and Robotaxi service

Sept. 17, 2021 / By V. “Juggy” Jagannathan, PhD


Neural database

Last week I saw a blog post from Facebook describing its efforts to design a new type of database called a neural database. A typical database holds data that conforms to a fixed schema. So, if you want to know who is enrolled in what class, that data will come from one or more tables, each with a fixed format. For instance, one table will list all the students in the university, another will list all the classes offered and a third will record which student is taking which class.

Facebook researchers are postulating a database of facts that are expressed as natural language sentences, where each sentence can encode one or more facts. Questions can then be phrased in natural language and the results are in natural language as well. But, unlike regular databases where the results are 100 percent accurate all the time, here the goals are more modest. In a database of 1,000 facts, the query results are accurate roughly 90 percent of the time.

Now, a few details on how they do this. To create the database of text facts, Facebook researchers start with Wikidata, which encodes data as triplets, such as (X, employed by, Y), where “employed by” is one of 25 types of relationships. They then convert this relationship data into English text using a variety of templates, and they create queries that can work against the dataset. They sample only a small subset of Wikidata – Wikidata itself has around 95 million facts.
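As a rough illustration of the template idea (the relation names and templates below are my own stand-ins, not Facebook's actual ones), converting triplets to English sentences can be as simple as string filling:

```python
# Hypothetical sketch: turning Wikidata-style (subject, relation, object)
# triplets into English facts using simple templates. The templates and
# relation names here are illustrative assumptions.

TEMPLATES = {
    "employed_by": "{subject} is employed by {object}.",
    "spouse_of": "{subject} is the spouse of {object}.",
    "born_in": "{subject} was born in {object}.",
}

def verbalize(triplet):
    """Render one (subject, relation, object) triplet as a sentence."""
    subject, relation, obj = triplet
    return TEMPLATES[relation].format(subject=subject, object=obj)

facts = [verbalize(t) for t in [
    ("Ada", "employed_by", "Acme Corp"),
    ("Ada", "spouse_of", "Ben"),
]]
# facts -> ["Ada is employed by Acme Corp.", "Ada is the spouse of Ben."]
```

In practice a single relation would get several template variants so the generated sentences are not all identically phrased.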

A three-stage process is used for training and testing the neural database. In the first stage, a Transformer architecture generates sets of facts supporting the query. In the next stage, a neural project-select-join operation works on each fact set to derive an answer. These answers are then combined, using conventional techniques, to produce the final result.

Answers can be a count, a Boolean, a max or min over an aggregation, or simply a collection of facts. Given a simple collection of English sentence facts, some sample queries are “whose spouse is a doctor?” (which requires joining multiple sentences) and “list everyone born before 1980” (a select and collect operation).
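To make the select/join/aggregate intuition concrete, here is a minimal hand-coded stand-in. In the real system, neural models extract structure from the sentences; for a toy fact set, simple regular expressions can play that role (the names, patterns and fact sentences are illustrative, not from the paper):

```python
import re

facts = [
    "Ada is the spouse of Ben.",
    "Ben is a doctor.",
    "Ada was born in 1975.",
    "Cara was born in 1990.",
]

# "Select": extract (person, year) pairs from the birth facts.
born = {}
for f in facts:
    m = re.match(r"(\w+) was born in (\d+)\.", f)
    if m:
        born[m.group(1)] = int(m.group(2))

# Extract spouse pairs and the set of doctors.
spouses = {}
doctors = set()
for f in facts:
    m = re.match(r"(\w+) is the spouse of (\w+)\.", f)
    if m:
        spouses[m.group(1)] = m.group(2)
    if re.match(r"\w+ is a doctor\.", f):
        doctors.add(f.split()[0])

# "Join": "whose spouse is a doctor?" combines spouse facts with doctor facts.
spouse_is_doctor = [p for p, s in spouses.items() if s in doctors]
# "Select" with a condition: "list everyone born before 1980".
born_before_1980 = [p for p, y in born.items() if y < 1980]
```

Here `spouse_is_doctor` and `born_before_1980` each come out as `["Ada"]`. The point of the neural version is that no such hand-written patterns are needed: the models learn to line up facts that are phrased in many different ways.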

Where can one use such a database? The researchers note that most information resides outside of traditional database technology, and they postulate that such a database could one day be used to design smart chatbots and more. In the interim, they will have to scale the approach to billions of facts, provide an explanation module to garner trust and achieve significantly higher accuracy.

Deepfake clones

Last week MIT Technology Review carried an article on how deepfake technology is going mainstream. A marketing startup, Hour One, now recruits faces to be part of any marketing campaign. Literally just faces! The company has already signed up around 100 faces. What does signing up mean for the person with the face? They allow the use of their face in deepfake videos. What do they get in return? Every time their face gets used in a marketing or commercial video they earn a micropayment. Like book royalties – but for your face. Hour One promises to use the images only in legitimate, above-board use cases. Their deepfakes have already stood in for a receptionist and pitched language courses. The use of this technology, however, is raising concerns that it could eliminate a cadre of acting jobs.

Robotaxi service

Last week I saw a report in the Wall Street Journal indicating that Waymo, a sister company of Google, has started offering its Waymo Robotaxi service in San Francisco. It turns out that what Waymo is doing is essentially rolling out the next phase of its testing program. Waymo has been testing a fleet of self-driving cars in San Francisco for the past six months, using employees as test passengers after a successful rollout in Phoenix. Its Trusted Tester program is now open to the public, and you can see a promotional video on the company's blog post. Autonomous driving has had its ups and downs, and it appears the technology is getting to a point where it can navigate major city traffic. As to when this tech will be able to navigate the wintry Northeast? Well, that will take a while for sure.

I am always looking for feedback and if you would like me to cover a story, please let me know! Leave me a comment below or ask a question on my blogger profile page.

V. “Juggy” Jagannathan, PhD, is director of research for 3M M*Modal and is an AI Evangelist with four decades of experience in AI and Computer Science research.