
Interesting Innovations From DeepMind In 2021

A look at a few of the notable innovations from DeepMind in 2021


This year, technology entered areas we had never seen before – from AI being used to find exoplanets to robots learning to “reproduce”. Innovation leader DeepMind also put its best foot forward, releasing some mind-blowing systems with the potential to create a big impact across a diverse range of areas, including robotics, games and language models.

Let us take a look at a few of the notable innovations from DeepMind in 2021.

Biology: Enformer

DeepMind and Alphabet’s Calico teamed up to bring out “Enformer”, a transformer-based model that predicts gene expression from DNA sequences with greater accuracy. Enformer models variants in the non-coding genome and predicts their effects on gene expression, for both natural genetic variants and synthetic ones. DeepMind said it framed the machine learning problem as predicting thousands of epigenetic and transcriptional datasets in a multitask setting across long DNA sequences. When trained on most of the human and mouse genomes and tested on held-out sequences, Enformer’s predictions correlated better with measured data than those of previous state-of-the-art models without self-attention, according to the paper “Effective gene expression prediction from sequence by integrating long-range interactions”.
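As a rough illustration of that multitask framing – and emphatically not Enformer’s actual architecture, which applies transformer self-attention over very long stretches of sequence – a toy sketch might one-hot encode DNA and apply one linear head per output track. The pooling step and the track count below are hypothetical stand-ins:

```python
import numpy as np

def one_hot_dna(seq):
    """One-hot encode a DNA string into a (length, 4) array over A, C, G, T."""
    index = {"A": 0, "C": 1, "G": 2, "T": 3}
    out = np.zeros((len(seq), 4))
    for i, base in enumerate(seq.upper()):
        out[i, index[base]] = 1.0
    return out

def multitask_heads(x, weights):
    """Pool the sequence and apply one linear head per output track, standing
    in for predicting many epigenetic/transcriptional datasets at once.

    x: (length, 4) one-hot sequence; weights: (4, n_tracks) toy parameters.
    """
    pooled = x.mean(axis=0)  # crude stand-in for attention over the sequence
    return pooled @ weights

rng = np.random.default_rng(0)
x = one_hot_dna("ACGTACGTAC")
predictions = multitask_heads(x, rng.normal(size=(4, 8)))  # 8 toy tracks
print(predictions.shape)  # (8,)
```

The point of the multitask setting is that one sequence representation is shared across all prediction heads, so signal from thousands of experimental tracks shapes a single model.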


Player of Games (PoG)

DeepMind created a system called Player of Games (PoG), describing its structure and mechanism in a research paper. PoG performs well at both perfect-information games, such as chess and Go, and imperfect-information games, such as poker and Scotland Yard. It uses a single algorithm with minimal domain-specific knowledge, and its search capabilities carry across these fundamentally different game types. By resolving subgames to remain consistent during online play, PoG is guaranteed to find an approximate Nash equilibrium, the paper says.
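PoG’s sound search rests on counterfactual values and subgame resolving, which is well beyond a short snippet, but the idea of converging to an approximate Nash equilibrium can be shown on a toy matrix game. The sketch below uses regret matching – a much simpler equilibrium-finding method than PoG’s, and not DeepMind’s code – in self-play rock-paper-scissors; the average strategy drifts toward the uniform equilibrium:

```python
import numpy as np

# Row player's payoff in rock-paper-scissors (zero-sum: column gets the negative).
PAYOFF = np.array([[0, -1,  1],
                   [1,  0, -1],
                   [-1, 1,  0]], dtype=float)

def strategy_from(regrets):
    """Regret matching: play actions in proportion to their positive regret."""
    pos = np.maximum(regrets, 0.0)
    n = len(regrets)
    return pos / pos.sum() if pos.sum() > 0 else np.full(n, 1.0 / n)

def self_play(iters=50_000, seed=0):
    """Both players run regret matching; in a two-player zero-sum game the
    *average* strategy approximates a Nash equilibrium."""
    rng = np.random.default_rng(seed)
    regrets = [np.zeros(3), np.zeros(3)]
    sums = [np.zeros(3), np.zeros(3)]
    for _ in range(iters):
        s1, s2 = strategy_from(regrets[0]), strategy_from(regrets[1])
        sums[0] += s1
        sums[1] += s2
        a1 = rng.choice(3, p=s1)
        a2 = rng.choice(3, p=s2)
        # Regret = payoff of each alternative action minus the payoff received.
        regrets[0] += PAYOFF[:, a2] - PAYOFF[a1, a2]
        regrets[1] += -PAYOFF[a1, :] + PAYOFF[a1, a2]
    return sums[0] / iters, sums[1] / iters

avg1, avg2 = self_play()
print(np.round(avg1, 2))  # close to the uniform equilibrium (1/3, 1/3, 1/3)
```

In an imperfect-information game like poker, the equilibrium strategy is mixed for a reason: any deterministic policy can be exploited, which is why guarantees about approximating a Nash equilibrium matter.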


Large Language Model: Gopher

Gopher is a 280-billion-parameter transformer language model from DeepMind that almost halves the accuracy gap between GPT-3 and human expert performance and exceeds forecaster expectations.

Gopher outperforms the current state of the art on 100 tasks (81% of all tasks). The baseline models include large language models such as GPT-3 (175 billion parameters), Jurassic-1 (178 billion) and Megatron-Turing NLG (530 billion). Gopher showed the most uniform improvement across the reading comprehension, humanities, ethics, STEM and medicine categories, with a general improvement in fact-checking. The gains were larger and more consistent on knowledge-intensive tests and smaller on reasoning-heavy tasks.


Robotics: RGB-Stacking

The innovation leader released a new robotics benchmark, RGB-Stacking, for improving robots’ ability to stack objects. It said the variety of objects employed in the research and the massive number of empirical evaluations undertaken to support the findings distinguish the work from previous research.

DeepMind’s findings show that a mix of simulation and real-world data can be used to learn complicated multi-object handling, providing a solid foundation for the open problem of generalising to novel objects. DeepMind also open-sourced a version of its simulated environment, along with the designs for building its real-robot RGB-stacking environment, the RGB-object models and information for 3D printing them.


Meta-RL: Alchemy

DeepMind and University College London released Alchemy, a principled benchmark for meta-reinforcement learning (meta-RL) research. The environment is a 3D, first-person-perspective video game implemented in the Unity game engine, and presents tasks sampled from a task distribution with a deep underlying structure. The researchers said the benchmark was created to test agents’ ability to reason and plan via latent state inference, and to explore and experiment usefully.
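Alchemy itself is a rich 3D game, but the meta-RL setup it benchmarks can be caricatured in a few lines: each task drawn from the distribution has a hidden latent variable, and the agent must experiment within the episode to uncover it before it can act well. In the purely illustrative toy below, a hypothetical bandit task stands in for Alchemy’s hidden chemistry: the agent probes every arm once, then exploits the one that paid off:

```python
import random

def sample_task(rng, n_arms=4):
    """A task from the distribution: the latent variable is which single arm
    pays out, and the agent can only discover it by experimenting."""
    return rng.randrange(n_arms)

def run_episode(rng, best_arm, n_arms=4, steps=30):
    """Explore-then-exploit agent: probe each arm once, then commit to the
    best one seen. Reward is 1 only when the latent rewarding arm is pulled."""
    counts = [0.0] * n_arms
    total = 0
    for t in range(steps):
        arm = t if t < n_arms else max(range(n_arms), key=lambda a: counts[a])
        reward = 1 if arm == best_arm else 0
        counts[arm] += reward
        total += reward
    return total

rng = random.Random(0)
scores = [run_episode(rng, sample_task(rng)) for _ in range(100)]
print(sum(scores) / len(scores))  # 27.0: 1 reward while probing + 26 exploiting
```

The meta-RL question Alchemy probes is whether an agent can *learn* this kind of in-episode inference and experimentation strategy across a structured task distribution, rather than having it hand-coded as above.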


Biology: Open-Sourcing AlphaFold 2.0

DeepMind announced that it is making the AlphaFold 2.0 source code public. The AI-based algorithm predicts the shape of proteins, a major challenge in healthcare and the life sciences. With this decision, DeepMind hopes to offer the scientific community easier access and better research opportunities in areas such as drug discovery.

Sreejani Bhattacharyya

I am a technology journalist at AIM. What gets me excited is deep-diving into new-age technologies and analysing how they impact us for the greater good. Reach me at sreejani.bhattacharyya@analyticsindiamag.com