DeepMind is the second name that comes to mind in any conversation about OpenAI, or even AI at large. The organisation has been at the forefront of AI advancements and innovation in recent years. But now, the company seems to be turning into Google's alter ego.
Recently, Igor Babuschkin, an AI researcher at DeepMind, joined Elon Musk to start a new venture to build a ChatGPT rival. At the same time, Google is facing a brain drain, with prominent AI researchers like Hyung Won Chung, Jason Wei, Shane Gu, and other great minds reportedly joining OpenAI.
It looks like Google is going through a management shake-up. Behnam Neyshabur, co-lead of Google's Blueshift team, announced that the entire team would now be working with DeepMind to further the capabilities of LLMs developed by the parent company, Alphabet. The projects will be led by Oriol Vinyals, the brain behind AlphaCode, AlphaFold, and other cutting-edge projects at DeepMind and Google Research.
Alphabet, in its fourth-quarter earnings call for 2022, announced its decision to move DeepMind out of the 'Other Bets' segment and into the parent entity itself, reflecting increased collaboration with Google Services, Google Cloud, and Other Bets.
Ahead of these segment reporting changes, DeepMind had announced in January this year its plan to release a ChatGPT rival, Sparrow, which the company touts can do things that ChatGPT cannot. DeepMind has been working on LLM chatbots for quite some time, as revealed in its "Information-Seeking Dialogue Agent" paper last September. The accompanying blog post acknowledged that chatbots driven by LLMs can express inaccurate information, use discriminatory language, and encourage unsafe behaviour.
But in the field of generative AI, DeepMind has been remarkably quiet since announcing Sparrow, while ChatGPT and even Google's Bard are proving to be true testaments to LLMs' capabilities. This raises the question of whether DeepMind is still sticking to its original research philosophy or moving away from its focus on AGI.
Ever since Microsoft and OpenAI's stunning technology demonstrations, a troubled Google seems to be trying hard to impress the crowd, and it looks like it has now dragged DeepMind into its pity party. A cynical Meta also joined the show last week with the release of LLaMA, clearly signalling its interest in LLMs and probably chatbots as well. The LLaMA paper shows that Meta's smaller, 13-billion-parameter model outperforms GPT-3, while DeepMind's Chinchilla-70B, the model Sparrow is built on, is surpassed by LLaMA-65B.
Will DeepMind Cry Along?
DeepMind takes a utilitarian approach to AI. It focuses on building technologies for specific use cases, like AlphaCode, a rival to OpenAI's Codex. Its AlphaFold model tackled one of the biggest challenges in biology, the protein-folding problem, predicting the structures of nearly all known proteins, which has driven a lot of drug discovery and development. More than an AI-focused firm, DeepMind is a science team. It believes in solving problems with AI rather than being part of the AI hype cycle.
Over the years, DeepMind has worked on cutting-edge AI projects that push the boundaries of what's possible. In other words, it doesn't need to build a chatbot because "we're too busy creating the future of AI". That's the kind of attitude that makes DeepMind so fascinating.
Taking up Responsibility
However, that does not mean DeepMind is not working on generative AI projects. DeepMind's Flamingo, a model that can answer users' questions, was released in June, built on the company's own LLM, Chinchilla. Moreover, in February, the DeepMind team released a paper, "Collaborating with language models for embodied reasoning", highlighting how a planner, an actor, and a reporter can help LLMs with reasoning and in-context learning. These recent advancements point towards an interest in something of a general model.
Geoffrey Irving, the lead author of the Sparrow research paper and a safety researcher, has been very concerned about the implications of these "conversational" models. DeepMind maintains that Sparrow is just a research model not ready to be deployed, since "it contains a lot of biases and flaws". The company has been setting benchmarks for responsible AI since its inception.
This strategy seems very similar to Meta's, though the reasons may differ. Meta believes in researching behind closed doors because of its past. It was one of the first to release a chatbot to the public, BlenderBot-3, which started spewing garbage and racist nonsense. Similarly, Galactica had to be shut down when it started producing hallucinated results. After shutting everything down, the company took a backseat and watched others publicly release their models.
AI evangelist Yann LeCun, too, has cited ethical reasons for Meta not releasing ChatGPT-like chatbots and for taking down hallucinating models like Galactica. Demis Hassabis, founder of DeepMind, has said this kind of powerful AI technology poses a lot of potential danger and could cause "significant damage to humanity".
What DeepMind is doing now is what a lot of other companies would agree is the right way to go about it. Instead of releasing models publicly and taking on the risk, DeepMind and Meta played a game of "let's see who fails first". For now, though, Google seems to have tugged DeepMind along to cry with it, but for how long?
[Update: 15:30 | February 28, 2023: The story has been updated to reflect the recent developments in DeepMind]