AI talk: Watermarking, privacy and AI hype?

Feb. 10, 2023 / By V. “Juggy” Jagannathan, PhD

This week I am highlighting three stories related to problems posed by the current state of artificial intelligence (AI). 

Watermarking LLM content 

MIT Technology Review has published an article on the idea of watermarking the content generated by large language models (LLMs). This is an excellent and timely idea, assuming it can be implemented. LLMs like ChatGPT have been producing text so fluent that plagiarism-detection companies like Turnitin are scrambling to catch up. It turns out that researchers at the University of Maryland have invented an approach to watermark the text generated by such models. They demonstrate it by generating text with Meta’s Open Pretrained Transformer (OPT). Their core idea tinkers with the next-token generation process without affecting the fluency of the generated text. How is this done? 

Start with the token (word) just generated. Use that token to compute a hash that seeds a random number generator. The random numbers are used to deterministically split the vocabulary (all the words the LLM can possibly generate) into a whitelist (W) and a blacklist (B). Let the LLM pick the next word, but always from W. Repeat the process with that token, and so on. Detecting the watermark is then a matter of checking what fraction of the tokens fall in the whitelist implied by their predecessor: watermarked text lands in W nearly every time, while text written by humans won’t conform to the process and can be reliably distinguished. Quite an ingenious idea. 
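To make this recipe concrete, here is a minimal toy sketch in Python (my own illustration, not the Maryland researchers’ actual code): a ten-word vocabulary stands in for the model’s token list, and a random pick from the whitelist stands in for the LLM’s next-token choice.

```python
import hashlib
import random

# Toy vocabulary standing in for an LLM's real token list.
VOCAB = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran", "fast", "home"]

def split_vocab(prev_token, whitelist_fraction=0.5):
    """Hash the previous token, seed an RNG with it, and deterministically
    split the vocabulary into a whitelist and a blacklist."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2 ** 32)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * whitelist_fraction)
    return set(shuffled[:cut]), set(shuffled[cut:])  # (whitelist, blacklist)

def generate_watermarked(pick_next_token, first_token, length=30):
    """Generate text, restricting every next-token choice to the whitelist."""
    tokens = [first_token]
    for _ in range(length):
        whitelist, _blacklist = split_vocab(tokens[-1])
        tokens.append(pick_next_token(tokens, whitelist))
    return tokens

def detect_watermark(tokens, threshold=0.9):
    """Score = fraction of tokens that fall in the whitelist implied by their
    predecessor. Watermarked text scores near 1.0; human-written text scores
    near the whitelist fraction (about 0.5 here)."""
    hits = sum(tok in split_vocab(prev)[0] for prev, tok in zip(tokens, tokens[1:]))
    score = hits / max(len(tokens) - 1, 1)
    return score, score >= threshold

# Stand-in for a real LLM: just pick a random whitelisted word.
toy_llm = lambda context, whitelist: random.choice(sorted(whitelist))

watermarked = generate_watermarked(toy_llm, "the")
print(detect_watermark(watermarked))                  # high score -> flagged
print(detect_watermark(random.choices(VOCAB, k=30)))  # ~0.5 -> not flagged
```

Running this, the watermarked text scores close to 1.0 while the random (stand-in “human”) text hovers around the whitelist fraction of 0.5, which is exactly the gap the detector exploits. 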

The problem with this approach, however, is that the watermark-embedding algorithm must be built into the model producing the output: ChatGPT, for instance, or any other deployed LLM, would have to incorporate it. The approach and its ramifications are yet to be fully understood, and tweaks to the base algorithm will undoubtedly be needed. Clearly, the need to determine whether a piece of text came from a language model or a human is only going to get more acute. 

Privacy and diffusion models 

Creating unbelievable images from a text prompt has been all the rage this past year. Give one of these models an arbitrary prompt, say a “rabbit flying around in a spaceship with a tuxedo,” and it literally creates such an image. In the past, if someone wrote such a sentence, we simply had to visualize the image in our heads. No more. Simply provide the text to DALL-E 2, Imagen or Stable Diffusion and we can gawk at the generated image in wonderment.

But recent research reported by Google and DeepMind shows a dark side to this capability: these models can spit out images of real people that were used during training. So, according to the paper, if you type in “Ann Graham Lotz,” who happens to be an American evangelist, the model returns images of this person. I tried this in DALL-E 2 and found it to be true. But when I tried “Barack Obama,” DALL-E 2 stated that its content policy does not allow it to generate the image. 

Now, we can, of course, Google these names and get lots of hits. The research basically underscores the fact that these models memorize and regurgitate training data to some degree. In that sense, they are not privacy preserving. Well, the paper may prove to be fodder for the group of artists suing companies that profit from diffusion models, alleging copyright infringement. 

The earlier models created with a different technique, generative adversarial networks (GANs), did not have this weakness. The authors of the research paper, “Extracting Training Data from Diffusion Models,” suggest that better methods must be used to train these models. The question for all of us regarding these diffusion models that generate photorealistic images: Is privacy just an illusion when we can already Google anything?
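To give a feel for how such memorization gets surfaced, here is a rough, hypothetical probe (my own sketch, not the paper’s actual pipeline): sample many images for the same caption and flag near-duplicate outputs, which suggest the model is reproducing a training image rather than composing a fresh one. The generate_image callable and the similarity threshold below are placeholders.

```python
import numpy as np

def extraction_probe(generate_image, prompt, n_samples=16, sim_threshold=0.95):
    """Crude memorization check: generate many images for one caption and
    look for near-duplicate outputs.

    generate_image(prompt) -> np.ndarray is a placeholder for a call to a
    real text-to-image model; the cosine-similarity threshold is an
    illustrative guess, not a value from the paper.
    """
    flat = []
    for _ in range(n_samples):
        img = generate_image(prompt).astype(np.float32).ravel()
        flat.append(img / (np.linalg.norm(img) + 1e-8))  # normalize for cosine similarity

    suspicious_pairs = [
        (i, j)
        for i in range(n_samples)
        for j in range(i + 1, n_samples)
        if float(flat[i] @ flat[j]) >= sim_threshold
    ]
    # Many near-identical pairs for a caption like a person's name hints
    # that the model memorized a specific training image.
    return suspicious_pairs
```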

AI hype?

I saw a blog post last week that tickled my curiosity, from Dr. Gary Marcus, titled “Happy Groundhog Day, The AI Edition.” Dr. Marcus is a serial entrepreneur, a best-selling author of multiple books and a vocal critic of AI capabilities. In his Groundhog Day post, he compares the AI hype of the late 1980s with the hype happening now. 

Thirty years ago, AI hype was fueled by expert systems, which provided practical solutions to a range of problems but were brittle. Companies poured money into having humans write rules, and it didn’t work. There were too many edge cases, and the effort failed, ushering in the period referred to as the “AI winter,” when funding for AI projects disappeared. I lived through this period, and I too shifted my focus from expert systems to other things. Now, the situation is eerily similar in some respects: companies are hiring human help to fix brittle neural systems.

For sure, today’s AI systems are much more capable, but on one metric they are not much different: neither the systems of the past nor those of the present have any notion of common sense. ChatGPT may produce fluent output, but it makes spectacularly silly mistakes. Dr. Marcus argues that the main issue is the inability of these systems to handle core human capabilities like abstraction and composing solutions from parts. These capabilities are what make us so good at math, an area where current systems have underwhelming performance.

We are NOT headed for another AI winter, though. In fact, Google is now in the process of releasing its answer to ChatGPT, which it is calling Bard. The promise of current-day systems, even with their inherent limitations, is BIG. Whatever systems are deployed, however, they need to be carefully vetted, with appropriate guardrails and humans in the loop!  

I am always looking for feedback, and if you would like me to cover a story, please let me know! Leave me a comment below or ask a question on my blogger profile page. 

“Juggy” Jagannathan, PhD, is an AI evangelist with four decades of experience in AI and computer science research.