Interesting Innovations From DeepMind In 2021

A look at a few of the notable innovations from DeepMind in 2021

This year, technology entered areas we had never seen before, from AI being used to find exoplanets to robots learning to “reproduce”. Innovation leader DeepMind, too, put its best foot forward this year and released some mind-blowing systems with the potential to create a big impact, spanning a diverse range of areas such as robotics, games and language models.

Let us take a look at a few of the notable innovations from DeepMind in 2021.

Biology: Enformer

DeepMind and Alphabet’s Calico teamed up to bring out “Enformer”, a transformer-based model that predicts gene expression from DNA sequences with greater accuracy than previous models. Enformer interprets variants in the non-coding genome and predicts their effects on gene expression, for both natural genetic variants and synthetic ones. DeepMind said that it framed the machine learning problem as predicting thousands of epigenetic and transcriptional datasets in a multitask setting across long DNA sequences. Training on most of the human and mouse genomes and testing on held-out sequences, DeepMind found an improved correlation between predictions and measured data relative to previous state-of-the-art models without self-attention, according to the paper titled “Effective gene expression prediction from sequence by integrating long-range interactions”.
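The two core ideas described above — encoding a DNA sequence numerically and letting self-attention integrate long-range context before predicting many datasets ("tracks") at once — can be illustrated with a toy sketch. This is a hypothetical, minimal illustration, not the real Enformer architecture; all weights are random and the sequence, dimensions and track count are invented for the example.

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA string as a (length, 4) one-hot matrix."""
    idx = [BASES.index(b) for b in seq]
    out = np.zeros((len(seq), 4))
    out[np.arange(len(seq)), idx] = 1.0
    return out

def self_attention(x, d=8, rng=np.random.default_rng(0)):
    """A single randomly-initialised self-attention layer: every
    position mixes information from every other position, which is
    what lets such models capture long-range interactions."""
    wq, wk, wv = (rng.normal(size=(x.shape[1], d)) for _ in range(3))
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ v

seq = "ACGTACGTGGCCTTAA"          # toy 16-base sequence
h = self_attention(one_hot(seq))
n_tracks = 3                       # stand-in for thousands of datasets
w_out = np.random.default_rng(1).normal(size=(h.shape[1], n_tracks))
tracks = h @ w_out                 # per-position multitask predictions
print(tracks.shape)                # (16, 3)
```

The multitask framing shows up in the final projection: one shared representation feeds predictions for every track simultaneously.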


Games: Player of Games (PoG)

DeepMind created a system called Player of Games (PoG), whose structure and mechanism it described in a research paper. PoG performs well at both perfect-information games, such as chess and Go, and imperfect-information games, such as poker and Scotland Yard, using a single algorithm with minimal domain-specific knowledge. Its search capabilities carry across these fundamentally different game types, and it is guaranteed to find an approximate Nash equilibrium by resolving subgames to remain consistent during online play, the paper released by DeepMind says.
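PoG itself combines search, learned value functions and game-theoretic subgame resolving; a full sketch is beyond a news blurb. But the notion of converging to an approximate Nash equilibrium can be illustrated with a much simpler, standard algorithm — regret matching — on a toy imperfect-information-style matrix game (rock-paper-scissors). This is an illustrative stand-in, not PoG's actual method.

```python
import random

ACTIONS = 3  # rock, paper, scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # row player's payoff

def strategy(regrets):
    """Play each action in proportion to its positive regret."""
    pos = [max(r, 0.0) for r in regrets]
    s = sum(pos)
    return [p / s for p in pos] if s > 0 else [1 / ACTIONS] * ACTIONS

def train(iters=20000, seed=0):
    rng = random.Random(seed)
    regrets = [[0.0] * ACTIONS, [0.0] * ACTIONS]
    strat_sums = [[0.0] * ACTIONS, [0.0] * ACTIONS]
    for _ in range(iters):
        strats = [strategy(regrets[p]) for p in range(2)]
        for p in range(2):
            for a in range(ACTIONS):
                strat_sums[p][a] += strats[p][a]
        a0 = rng.choices(range(ACTIONS), strats[0])[0]
        a1 = rng.choices(range(ACTIONS), strats[1])[0]
        for a in range(ACTIONS):
            # regret = payoff of deviating to a, minus realised payoff
            regrets[0][a] += PAYOFF[a][a1] - PAYOFF[a0][a1]
            regrets[1][a] += -PAYOFF[a0][a] - (-PAYOFF[a0][a1])
    # the *average* strategy converges to an approximate equilibrium
    return [[s / sum(row) for s in row] for row in strat_sums]

avg = train()
print(avg)  # both players' averages approach (1/3, 1/3, 1/3)
```

In rock-paper-scissors the unique Nash equilibrium is the uniform mixture, so both averaged strategies drift toward one-third probability per action as iterations accumulate.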


Large Language Model: Gopher

Gopher is a 280 billion parameter transformer language model from DeepMind that almost halves the accuracy gap between GPT-3 and human expert performance, and exceeds forecaster expectations.

Gopher outperforms the current state of the art on 100 tasks (81% of all tasks considered). The baselines include large language models such as GPT-3 (175 billion parameters), Jurassic-1 (178 billion parameters) and Megatron-Turing NLG (530 billion parameters). Gopher showed the most uniform improvement across the reading comprehension, humanities, ethics, STEM and medicine categories, along with a general improvement in fact-checking. Improvements were smaller on reasoning-heavy tasks, and larger and more consistent on knowledge-intensive tests.
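What "halving the accuracy gap" means is easy to make concrete. The numbers below are hypothetical, chosen purely for illustration (the Gopher paper reports task-specific figures):

```python
# Hypothetical benchmark scores, for illustration only.
human_expert = 0.90   # assumed human expert accuracy
gpt3 = 0.44           # assumed GPT-3 accuracy

gap = human_expert - gpt3        # 0.46 accuracy points
halved = gpt3 + gap / 2          # halving the gap -> 0.67
print(round(halved, 2))          # 0.67
```

So a model that halves the gap on this hypothetical benchmark would need to score around 67%, still well short of the expert ceiling.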


Robotics: RGB-Stacking

The innovation leader released a new robotics benchmark, RGB-Stacking, for improving robots’ ability to stack objects. It said that the variety of objects employed in the research and the massive number of empirical evaluations undertaken to support the findings distinguish it from previous work.

The findings by DeepMind show that a mix of simulation and real-world data can be used to learn complicated multi-object manipulation, providing a solid foundation for the open problem of generalising to novel objects. DeepMind also open-sourced a version of its simulated environment, along with the designs for building its real-robot RGB-stacking environment, the RGB-object models and information for 3D printing them.


Meta-RL: Alchemy

DeepMind and University College London released Alchemy, a principled benchmark for meta-reinforcement learning (meta-RL) research. The environment is a 3D, first-person-perspective video game implemented in the Unity game engine, presenting tasks sampled from a task distribution with a deep underlying structure. The researchers said the benchmark was created to test agents’ ability to reason and plan via latent-state inference and via useful exploration and experimentation.
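The meta-RL setup described above — each episode drawing a task from a distribution with a hidden latent state that the agent must uncover by experimenting — can be sketched with a deliberately tiny stand-in. This is a toy analogue, not Alchemy itself: here the latent state is simply which arm of a three-armed bandit pays off, and the hand-coded "agent" probes each arm once before exploiting what it inferred.

```python
import random

def sample_task(rng):
    """Draw a task from the distribution: the latent state is
    which arm is rewarding this episode."""
    return rng.randrange(3)

def run_episode(rng, steps=30):
    best = sample_task(rng)        # hidden from the agent
    estimates = [0.0] * 3
    counts = [0] * 3
    total = 0.0
    for t in range(steps):
        # experiment on each arm once, then exploit the inferred best
        arm = t if t < 3 else max(range(3), key=lambda a: estimates[a])
        reward = 1.0 if arm == best else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / steps

rng = random.Random(0)
avg = sum(run_episode(rng) for _ in range(200)) / 200
print(avg)  # 28/30 ~ 0.933: brief exploration, then reliable exploitation
```

The benchmark's point is exactly this trade-off at scale: an agent that cannot infer the episode's latent structure from its own experiments cannot score well, no matter how good its low-level control is.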


Biology: Open-Sourcing AlphaFold 2.0

DeepMind announced that it is making the AlphaFold 2.0 source code public. This AI-based algorithm predicts the shape of proteins, a major challenge in healthcare and the life sciences. With this decision, DeepMind hopes to offer easy access and better research opportunities to the scientific community in areas such as drug discovery.

Sreejani Bhattacharyya
I am a technology journalist at AIM. What gets me excited is deep-diving into new-age technologies and analysing how they impact us for the greater good.
