2021 has been an eventful year for AI, from Ola pranking the internet with flying cars and NVIDIA's Jensen Huang slipping a deepfaked version of himself into a keynote video, to the mass adoption of the metaverse and NFT investments. The year introduced some gigantic AI models and GPT-3 competitors while witnessing regulatory crackdowns on big tech from countries worldwide. Some companies provided huge aid to pandemic-struck nations, and some went in other directions, like flying to space. The year only kept getting more interesting towards the end. Here is a timeline of the year, highlighting the most important updates of 2021 you should know.
Software highlights that framed 2021
AI21 Labs created an NLP model bigger than GPT-3
AI21 Labs released a language model that it claims is ‘the largest and most sophisticated language model ever released for general use by developers.’ The tool challenged OpenAI’s dominance in the “natural language processing-as-a-service” field with its 178 billion parameters, three billion more than GPT-3’s 175 billion. The tool is also available to anyone interested in prototyping custom text-based AI applications, and developers can easily customise a private version of the Jurassic-1 models.
Google released GLaM, outperforming GPT-3
Google introduced its Generalist Language Model (GLaM), a trillion-weight model that uses sparsity: each input activates only a fraction of the network, keeping training and inference efficient despite the enormous parameter count. GLaM improved learning efficiency across 29 public NLP benchmarks spanning seven categories, including language completion, open-domain question answering, and inference tasks.
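GLaM's exact routing isn't reproduced here, but the sparsity it relies on is the mixture-of-experts idea: each token runs through only a couple of "expert" sub-networks, so total weights can grow into the trillions while per-token compute stays modest. Below is a toy top-2 gating sketch in NumPy; the function name, dimensions, and random weights are all illustrative, not GLaM's actual architecture.

```python
import numpy as np

def top2_moe(token, gate_w, experts):
    """Pass one token through only its two best-scoring experts.

    gate_w: (d, n_experts) gating matrix; experts: list of (d, d) weight
    matrices. Running just 2 of n experts per token is the kind of
    sparsity that lets total parameters grow without growing the
    per-token compute cost.
    """
    logits = token @ gate_w
    top2 = np.argsort(logits)[-2:]        # indices of the two best experts
    gates = np.exp(logits[top2])
    gates /= gates.sum()                  # normalise the two gate weights
    # Only these two expert matrices are ever multiplied.
    return sum(g * (token @ experts[i]) for g, i in zip(gates, top2))

rng = np.random.default_rng(0)
d, n = 4, 8
out = top2_moe(rng.normal(size=d), rng.normal(size=(d, n)),
               [rng.normal(size=(d, d)) for _ in range(n)])
print(out.shape)  # (4,)
```

With 8 experts, roughly a quarter of the expert weights are touched per token here; at GLaM's scale the active fraction is far smaller still.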
DeepMind Gopher competed with GPT-3
DeepMind introduced its competitor to GPT-3: Gopher, a 280-billion-parameter transformer language model. The team claims that Gopher almost halves the accuracy gap from GPT-3 to human expert performance and exceeds forecaster expectations. Gopher lifts performance over the then state-of-the-art language models across roughly 81% of tasks with comparable results.
OpenAI introduced DALL.E, a 12 billion parameter version of GPT-3
GPT-3 creator OpenAI introduced DALL.E, a 12-billion-parameter version of GPT-3. With a name inspired by Salvador Dalí and the Pixar movie WALL.E, DALL.E generates images from text prompts. The model receives the text and the image as a single stream of data packing up to 1,280 tokens and can supply images from scratch. DALL.E can also render multiple objects in a single image.
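The single-stream layout can be pictured concretely: OpenAI describes the input as up to 256 text tokens followed by 1,024 image tokens (a 32×32 grid of image-codebook ids), giving the 1,280-token stream mentioned above. The sketch below only assembles such a stream; the token ids and padding scheme are toy stand-ins, not DALL.E's actual tokeniser.

```python
# Toy illustration of DALL.E's single input stream: up to 256 text tokens
# followed by 1024 image tokens (a 32x32 grid of codebook ids) = 1280 total.
TEXT_LEN, IMAGE_LEN = 256, 1024

def build_stream(text_tokens, image_tokens, pad_id=0):
    """Pad or truncate the text to 256 tokens, then append the image tokens."""
    text = (list(text_tokens) + [pad_id] * TEXT_LEN)[:TEXT_LEN]
    if len(image_tokens) != IMAGE_LEN:
        raise ValueError("expected a full 32x32 grid of image tokens")
    return text + list(image_tokens)

stream = build_stream([17, 42, 99], range(IMAGE_LEN))
print(len(stream))  # 1280
```

Because text and image share one sequence, the same transformer can attend across both modalities when generating each image token.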
GitHub developed Copilot for better coding
GitHub, in collaboration with OpenAI, launched the technical preview of GitHub Copilot, an AI pair programmer powered by OpenAI Codex. Copilot suggests individual lines and even whole functions directly inside the editor, drawing on the context of the code being written.
DeepMind open-sourced AlphaFold 2.0
AlphaFold is DeepMind’s AI program for predicting protein structures. This year, the company made the AlphaFold 2.0 source code public, along with a searchable database of species proteomes. AlphaFold 2.0’s algorithm can predict the shape of proteins, a major challenge in healthcare and the life sciences. DeepMind opened it up to the scientific community for easier access and better research opportunities in areas such as drug discovery.
OpenAI introduced GLIDE for image diffusion, beating its own model
OpenAI’s GLIDE (Guided Language to Image Diffusion for Generation and Editing) is a 3.5-billion-parameter text-to-image generation model that beats the company’s previous model, DALL.E. The researchers trained the 3.5 billion parameter diffusion model with a text encoder so it can condition on natural language descriptions. The model also supports image editing and zero-shot generation.
OpenAI introduced CLIP, a pre-training model that learns visual concepts from natural language supervision
OpenAI’s neural network CLIP can effectively learn visual concepts from natural language supervision. The contrastive language–image pre-training model builds on prior work in zero-shot transfer, natural language supervision, and multimodal learning. Similar to GPT-3’s zero-shot capability, CLIP users only have to supply the names of the visual categories to be recognised and can then apply the model to any visual classification benchmark. The model proved that scaling a simple pre-training task is sufficient to achieve competitive zero-shot performance on a great variety of image classification datasets.
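The zero-shot step is simple once the encoders have done their work: embed the image and a prompt per candidate label, then rank labels by cosine similarity. The sketch below shows only that scoring step with made-up embedding vectors, not CLIP's actual encoders; the function name and logit scale are illustrative.

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, labels):
    """Rank candidate labels for an image, CLIP-style.

    image_emb: (d,) image embedding; text_embs: (n, d) embeddings of
    prompts such as "a photo of a <label>". In real CLIP these come from
    the image and text encoders; here they are plain vectors.
    """
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = text_embs @ image_emb          # cosine similarity per label
    z = sims * 100.0                      # roughly CLIP's learned logit scale
    probs = np.exp(z - z.max())
    probs /= probs.sum()                  # softmax over the candidate labels
    return labels[int(np.argmax(sims))], probs

# Toy example: the image vector lies closest to the "dog" prompt.
labels = ["cat", "dog", "car"]
text_embs = np.array([[1.0, 0.0], [0.6, 0.8], [0.0, 1.0]])
image_emb = np.array([0.55, 0.83])
best, probs = zero_shot_classify(image_emb, text_embs, labels)
print(best)  # dog
```

Swapping in a new set of labels requires no retraining, which is what makes the approach applicable to arbitrary classification benchmarks.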
Meta launched its self-supervised model, SEER
Meta’s SEER (SElf-supERvised) model has been termed “the start of a more powerful, flexible, and accessible era for computer vision” by FAIR. SEER is a self-supervised computer vision model pretrained on a billion random, unlabelled public images from the internet. According to Meta, the model does not need the careful curation and labelling that most computer vision training pipelines require. In fact, it outperformed state-of-the-art supervised models on downstream tasks, including object detection, segmentation, and image classification.
Tesla announced plans for a humanoid robot
At Tesla’s AI Day, Elon Musk announced that the company is working on a humanoid robot that could perform repetitive tasks, with a prototype likely ready next year.
The Metaverse saw mass adoption
The Metaverse entails leveraging immersive technologies to form a shared virtual platform that can be accessed through multiple devices, allowing people to move within digital environments. The second half of 2021 was filled with launches, announcements and even rebrandings around the Metaverse. This includes Facebook changing its name to Meta, investing $10 billion in the effort and announcing plans to hire 10,000 employees in Europe to build the immersive experience. In addition, other companies, including Microsoft with its Mesh for Microsoft Teams, Tencent, Epic Games, NVIDIA and Roblox, joined the Metaverse.
NVIDIA created its version of the Metaverse: the Omniverse
In April, NVIDIA launched its metaverse effort, the NVIDIA Omniverse. The Omniverse is a scalable, multi-GPU, real-time reference development platform for users to build 3D simulations and collaborate with others. “Individuals and teams in STEM and across domains can now collaborate in real-time with the data characteristics in Omniverse,” said Vishal Dhupar, Managing Director of Asia South at NVIDIA. The Omniverse is built on Pixar’s Universal Scene Description and NVIDIA RTX technology.
Hardware highlights of 2021
NVIDIA turned ON the world’s fastest AI supercomputer
NVIDIA’s Perlmutter, credited as the world’s fastest AI supercomputer, was powered up and running in May. The supercomputer runs on 6,159 NVIDIA A100 Tensor Core GPUs, making it the largest A100-powered system in the world. Perlmutter is tasked with piecing together a 3D map of the universe, probing subatomic interactions for green energy sources and much more.
IBM introduced the world’s first 2-nanometer chip technology
IBM announced it has developed the world’s first chip with 2-nanometre (nm) nanosheet technology. The 2 nm chip technology will address the growing demand for increased chip performance and energy efficiency, advancing the state of the art in the semiconductor industry. The 2 nm chip is claimed to achieve 45 per cent higher performance, or 75 per cent lower energy use, than today’s most advanced 7 nm chips.
IBM launched its chip, Telum, for financial services
IBM announced Telum – a new CPU chip that will allow clients to leverage deep learning inference at scale. Due to its centralised design, Telum is ideal for financial services workloads such as fraud detection, loan processing, clearing and settlement of trades, anti-money laundering, and risk analysis. The chip contains eight processor cores running at more than 5 GHz clock frequency, optimised for the demands of enterprise-class workloads. In addition, the chip packs 22 billion transistors and 19 miles of wire across 17 metal layers.
Tesla introduced its computer chip, Dojo
At AI Day, Tesla unveiled D1, the computer chip the company uses to run its supercomputer, Dojo. Musk claimed the supercomputer would offer four times the processing speed of comparable computing systems. The D1 chip is built on 7 nm technology and delivers 362 teraflops of processing power, allowing GPU-level computation with CPU-style connectivity.
Organisational highlights shaping the coming years
Facebook became Meta
Facebook changed its name to Meta, rebranding itself to reflect its focus on building the ‘Metaverse’ – a persistent online virtual environment mirroring the real world.
Jeff Bezos stepped down as Amazon’s CEO
In February, Amazon founder Jeff Bezos announced that he would step down as the company’s chief executive officer in the third quarter and transition to Executive Chairman of the Amazon Board. The CEO role was handed to Amazon Web Services head Andy Jassy.
Jack Dorsey stepped down as Twitter’s CEO
Dorsey has had a rocky ride with Twitter: the company’s co-founder and first CEO, he was made to step down and become a board member, only to return as CEO in 2015, and this year he officially stepped down again. The move followed concerns among investors that, as co-founder and CEO of another company, Square, he was stretched too thin. He remains the best known of Twitter’s four founders.