Major Algorithmic Breakthroughs Of 2021

A look at a few of the algorithmic breakthroughs of this year

With so much research and effort going into solving real-world problems with AI algorithms, 2021 was a pivotal year for the field's advancement. We saw some jaw-dropping innovations powered by AI this year – from robots that can reproduce to exoplanets found with the help of AI algorithms. Let us look at a few of the algorithmic breakthroughs of this year.

Living Robots that can “reproduce”

In a major breakthrough, scientists have discovered an entirely different form of biological reproduction and applied it to create the first-ever self-replicating living robots. The research was conducted by scientists at the University of Vermont, the Wyss Institute for Biologically Inspired Engineering at Harvard University, and Tufts University.

This team had created “Xenobots” last year and discovered that these computer-designed and hand-assembled organisms can swim out into their tiny dish, look for single cells, gather them together and assemble “baby” Xenobots in their mouth. After a few days, these become new Xenobots that look and move just like themselves.

Sam Kriegman, PhD, lead author of the new study, said that the Xenobot parent is made of some 3,000 cells and forms a sphere. It can make children, but the system then normally dies out; it is difficult to get it to keep reproducing. An artificial intelligence program working on the Deep Green supercomputer cluster at UVM's Vermont Advanced Computing Core was able to test billions of body shapes in simulation – triangles, squares, pyramids, starfish, etc. – and, the team added, found the ones that allowed the cells to be more effective at the motion-based "kinematic" replication reported in the new research.
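
The shape search described above is essentially evolutionary design: propose candidate body shapes, score each one in a simulator, and keep the best. A toy random-search sketch of that loop (the shape encoding and "fitness" numbers below are invented stand-ins for the real physics-based cell simulator):

```python
import random

SHAPES = ["sphere", "triangle", "square", "pyramid", "starfish",
          "torus", "semitorus"]

def simulate_replication(shape, rng):
    # Stand-in fitness: how many offspring a given body shape yields.
    # The study used a physics-based cell simulator; it found that a
    # Pac-Man-like semitorus worked best, which this fake scoring
    # simply hard-codes for illustration.
    base = {"semitorus": 4, "torus": 3}.get(shape, 1)
    return base + rng.random()

def evolve(trials=1000, seed=0):
    rng = random.Random(seed)
    best_shape, best_score = None, float("-inf")
    for _ in range(trials):                   # brute-force search loop
        shape = rng.choice(SHAPES)
        score = simulate_replication(shape, rng)
        if score > best_score:
            best_shape, best_score = shape, score
    return best_shape

print(evolve())
```

The real search needed a supercomputer because every candidate had to be evaluated in a full physics simulation rather than scored by a lookup table.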

Back-to-back large language models

2021 has been a transformative year for large language models, with all the major names in tech introducing path-breaking new systems. Just days ago, DeepMind introduced a 280-billion-parameter transformer language model called Gopher. DeepMind's research says that Gopher almost halves the accuracy gap from GPT-3 to human expert performance and exceeds forecaster expectations. Following that, tech mammoth Google introduced the Generalist Language Model (GLaM), a trillion-weight model that uses sparsity. LG AI Research has also revealed its new artificial intelligence language model "Exaone", which can tune 300 billion different parameters or variables.

Prior to that, AI21 Labs released Jurassic-1, which comes with 178 billion parameters. Microsoft and NVIDIA took it a notch further and introduced the Megatron-Turing Natural Language Generation (MT-NLG) model with an astounding 530 billion parameters. Google had also released Switch Transformers, a technique to train language models with over a trillion parameters.
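
Sparse models such as GLaM and Switch Transformers reach trillion-parameter scale by activating only a small subset of "expert" sub-networks for each token, so compute cost does not grow in step with total parameter count. A minimal sketch of top-1 ("switch") routing – an illustration of the general idea only, with all names and shapes assumed, not the actual GLaM or Switch Transformers code:

```python
import numpy as np

def switch_route(token, gate_weights, experts):
    """Top-1 'switch' routing: a gating layer scores every expert,
    but only the single best-scoring expert runs on the token."""
    logits = gate_weights @ token              # one score per expert
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                       # softmax over experts
    k = int(np.argmax(probs))                  # pick the top expert
    return probs[k] * experts[k](token), k     # scale by gate prob

rng = np.random.default_rng(0)
d, n_experts = 8, 4
gate = rng.normal(size=(n_experts, d))
# Each "expert" here is just a random linear map for illustration.
weights = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda x, W=W: W @ x for W in weights]

out, chosen = switch_route(rng.normal(size=d), gate, experts)
print(chosen, out.shape)   # only 1 of the 4 experts did any work
```

All four experts' parameters exist in memory, but each token pays the compute cost of only one; that is the sparsity that lets total parameter counts cross a trillion.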

366 Possible New Exoplanets Discovered with the Help of an Algorithm

UCLA astronomers identified 366 new exoplanets with the help of an algorithm developed by Jon Zink, a UCLA postdoctoral scholar who received his doctorate from UCLA in June. Among the noteworthy findings is a planetary system that comprises a star and at least two gas giant planets.

UCLA said that the discovery was made possible by a new planet detection algorithm that Zink developed. It said, "A challenge in identifying new planets is that the reductions in stellar brightness may originate from the instrument or from an alternative astrophysical source that mimics a planetary signature." Zink's algorithm can distinguish which signals indicate planets and which are merely noise.
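
Zink's actual pipeline is not reproduced here, but the underlying detection idea, transit photometry, is simple: a planet crossing its star causes a small periodic dip in measured brightness, and the job is to tell such dips apart from noise. A toy sketch of that idea (the function name, threshold, and injected signal are illustrative assumptions, not Zink's method):

```python
import numpy as np

def find_transit_dips(flux, threshold=4.0):
    """Flag samples where brightness drops well below the noise floor.
    A real pipeline also checks periodicity and dip shape."""
    flux = np.asarray(flux, dtype=float)
    median = np.median(flux)
    # Robust noise estimate via the median absolute deviation
    mad = np.median(np.abs(flux - median)) * 1.4826
    depth = (median - flux) / mad             # dip depth in sigmas
    return np.where(depth > threshold)[0]

rng = np.random.default_rng(1)
flux = 1.0 + rng.normal(0, 1e-4, 500)         # flat light curve + noise
flux[200:210] -= 0.002                        # inject a 0.2% transit dip
print(find_transit_dips(flux))                # the injected dip is flagged
```

The hard part Zink's algorithm addresses is exactly what this toy ignores: dips caused by the instrument or by other astrophysical sources can mimic the same signature.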

Quantum Communication Breakthrough by ISRO

In the earlier part of this year, ISRO successfully demonstrated free-space quantum communication over a distance of 300 m. This is a major step towards unconditionally secure satellite data communication using quantum technologies. The achievement places India among nations such as the US, the UK, Canada, China and Japan that have made significant contributions to the field of quantum communication.

Several key technologies were developed indigenously for this feat, including a NAVIC receiver for time synchronization between the transmitter and receiver modules, and a gimbal mechanism system.
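
Unconditionally secure links of this kind rest on quantum key distribution (QKD). The article does not name ISRO's exact protocol, so the sketch below shows only the "sifting" step of the classic BB84 protocol, modelled with ordinary random bits rather than real quantum states:

```python
import random

def bb84_sift(n=64, seed=0):
    """Toy BB84 sifting: sender and receiver each pick a random
    measurement basis per bit, then keep only the positions where
    their bases happen to match."""
    rng = random.Random(seed)
    bits = [rng.randint(0, 1) for _ in range(n)]     # Alice's raw bits
    a_basis = [rng.choice("+x") for _ in range(n)]   # Alice's bases
    b_basis = [rng.choice("+x") for _ in range(n)]   # Bob's bases
    # Without an eavesdropper, matching bases mean Bob reads the bit
    # perfectly; mismatched positions are discarded during sifting.
    return [bit for bit, a, b in zip(bits, a_basis, b_basis) if a == b]

key = bb84_sift()
print(len(key))   # roughly half of the raw bits survive sifting
```

The security comes from quantum mechanics rather than the sifting itself: measuring a photon in the wrong basis disturbs it, so an eavesdropper introduces detectable errors.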

GitHub Copilot

OpenAI and Microsoft's GitHub Copilot is an AI pair programmer that helps developers write better code. It works with various languages, including Python, JavaScript, TypeScript, Ruby, and Java. It runs as an editor extension on the desktop or in the cloud on GitHub Codespaces. With its help, the programmer can look at alternative suggestions, edit suggested code manually, and choose which suggestions to accept or reject.
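
Copilot works from context such as comments and function signatures: the programmer writes the intent, and the model proposes a body. The example below shows the kind of completion it produces; this particular body is hand-written for illustration, not an actual Copilot output:

```python
# A programmer types only the signature and the docstring; the tool
# suggests the body. The body below is an illustrative suggestion.
def is_palindrome(s: str) -> bool:
    """Return True if s reads the same forwards and backwards,
    ignoring case and non-alphanumeric characters."""
    cleaned = [c.lower() for c in s if c.isalnum()]
    return cleaned == cleaned[::-1]

print(is_palindrome("A man, a plan, a canal: Panama"))  # True
```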

OpenAI’s DALL·E

DALL·E is a 12-billion-parameter version of GPT-3 trained on a dataset of text-image pairs to generate images from text descriptions. It is a transformer language model that receives both the text and the image as a single stream of data containing up to 1,280 tokens. It can render an image from scratch and also alter aspects of an existing image using text prompts.
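
The "single stream" idea can be illustrated concretely: the text is encoded as discrete tokens, the image as a grid of discrete codes, and the two sequences are concatenated so one autoregressive transformer models them together. A toy version of that concatenation (the vocabulary size and token values below are made up; the real model pads text to 256 BPE tokens and uses a 32x32 grid of image codes for its 1,280-token stream):

```python
def build_stream(text_tokens, image_codes, text_len=6, grid=4):
    # Pad (or truncate) the text to a fixed token budget, as DALL-E
    # pads its text to a fixed number of slots.
    text = (text_tokens + [0] * text_len)[:text_len]
    assert len(image_codes) == grid * grid, "image must be a full grid"
    # Image codes live in a separate vocabulary; offset them so the
    # combined stream uses one shared index space.
    TEXT_VOCAB = 100
    return text + [TEXT_VOCAB + c for c in image_codes]

stream = build_stream([11, 42, 7], list(range(16)))
print(len(stream))  # 6 text slots + 16 image slots = 22 tokens
```

Because both modalities sit in one sequence, generating an image is just continuing the sequence past the text, token by token.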

Meta’s SEER

Meta AI released SEER (SElf-supERvised), a billion-parameter self-supervised computer vision model that can learn from any random group of images on the internet. Meta said that SEER does not need the careful curation and labelling that most computer vision training requires. SEER outperformed state-of-the-art supervised models on downstream tasks such as object detection, segmentation, and image classification, among others.
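
Self-supervised training needs no human labels: the model learns by making two augmented views of the same image agree in embedding space while differing from other images. SEER itself uses the SwAV clustering objective; the simpler contrastive loss below just illustrates the label-free principle, with all names and sizes assumed:

```python
import numpy as np

def contrastive_loss(z1, z2, negatives, temp=0.1):
    """Pull two views of the same image together in embedding space,
    push embeddings of other images away. No labels anywhere: the
    'supervision' is the pairing of the two views itself."""
    def sim(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(sim(z1, z2) / temp)
    neg = sum(np.exp(sim(z1, n) / temp) for n in negatives)
    return -np.log(pos / (pos + neg))

rng = np.random.default_rng(2)
anchor = rng.normal(size=16)                 # embedding of view 1
view = anchor + rng.normal(0, 0.05, 16)      # embedding of view 2
others = [rng.normal(size=16) for _ in range(8)]
print(contrastive_loss(anchor, view, others))  # small: the views agree
```

Because the objective is built from the data itself, any uncurated pile of internet images can serve as training data, which is SEER's central point.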

Sreejani Bhattacharyya
I am a technology journalist at AIM. What gets me excited is deep-diving into new-age technologies and analysing how they impact us for the greater good. Reach me at sreejani.bhattacharyya@analyticsindiamag.com