
AI Chips That Made It To The Market In 2021

With new developments in Artificial Intelligence and Deep Learning, computational requirements have also seen a steady increase. The success of any modern AI technique relies on computation at a scale unimaginable even a few years ago. Therefore, more advanced chips and hardware are being developed and released to match the processing demands of complex neural networks. Their ability to deliver computational power depends on the number of transistors that can be packed onto a die; some are also tailor-made to efficiently perform the specific calculations required by modern AI systems. This article looks at some of the top AI chips that made a mark in the market in 2021.

Intel Loihi 2

Loihi 2 is Intel’s second-generation neuromorphic research chip. Its architecture supports new classes of neuro-inspired algorithms and applications while providing up to 10 times faster processing, up to 15 times greater resource density with up to 1 million neurons per chip, and improved energy efficiency. The use of extreme ultraviolet (EUV) lithography simplified the layout design rules compared to past process technologies and made it possible for Intel to develop Loihi 2 rapidly. In addition, Loihi 2 chips support Ethernet interfaces, glueless integration with a range of event-based vision sensors, and larger meshed networks of Loihi 2 chips. This opens the door to a wide range of new neural network models that can be trained through deep learning.
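Neuromorphic chips like Loihi 2 execute spiking neuron models rather than conventional dense layers. As a rough illustration of the idea (not Intel's implementation, and with made-up parameter values), the simplest such model, a leaky integrate-and-fire neuron, can be sketched in a few lines of Python:

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the kind of neuro-inspired
# model that neuromorphic chips accelerate. All constants are illustrative.

def simulate_lif(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Integrate input over discrete timesteps; emit a spike (1) when the
    membrane potential crosses the threshold, then reset the potential."""
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = leak * potential + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)
            potential = reset                   # reset after spiking
        else:
            spikes.append(0)
    return spikes

# A steady sub-threshold input accumulates until the neuron fires.
print(simulate_lif([0.3, 0.3, 0.3, 0.3, 0.0, 0.6, 0.6]))
# → [0, 0, 0, 1, 0, 0, 1]
```

Because computation and communication happen only when spikes occur, such models map naturally onto event-driven, low-power hardware.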

Image Source: Intel

Google Tensor

Described by Google as a milestone in machine learning, the Google Tensor chip, co-designed with Google Research, supports state-of-the-art AI/ML features such as Motion Mode, Face Unblur, speech enhancement for videos, and applying HDRnet to videos. Google Tensor has been designed to deliver the right balance of computing performance, efficiency and security. The chip can run more advanced, state-of-the-art ML models at lower levels of power consumption, and it powers computational photography and new video features. It also includes the Tensor security core, a new CPU-based security subsystem that works with future generations of dedicated security chips.

Image Source: Google

Ambarella CV52S

The CV52S by Ambarella is an expansion of its AI vision system-on-chip (SoC) portfolio. The CV52S combines advanced 4K image processing, video encoding/decoding, and CVflow computer vision processing in a single, low-power design. Fabricated with an advanced 5 nm process technology, the CV52S enables power consumption below 3 W for 4KP60 video recording with advanced AI processing at 30 fps. In addition, the chip’s CVflow architecture provides the deep neural network (DNN) multiprocessing required by the next generation of intelligent cameras. The CVflow engine can efficiently run multiple neural networks (NNs) in parallel while also accelerating classical computer vision algorithms.

Image Source: Ambarella

Atlazo AZ-N1

Announced in January, the Atlazo AZ-N1 includes a highly power-efficient AI and machine learning processor, the Axon I, targeted at processing audio, sound, biometrics, and other sensor signals and classifying activities at a fraction of the power budget of other solutions on the market today. The processor supports a spectrum of AI/ML networks, including DNNs, LSTMs and GRNNs, as well as popular feature extraction techniques such as MFCC. A single Axon I processor can perform more than 130 inferences. The AZ-N1 is aimed at products such as smart earbuds, hearing aids and health monitoring devices.
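MFCC front ends, like the one mentioned for the Axon I, begin by warping frequency onto the perceptual mel scale before filtering and taking a DCT. The sketch below shows only that first warping step, using the common HTK-style constants; it is a generic illustration, not anything specific to Atlazo's implementation:

```python
import math

def hz_to_mel(f_hz):
    """Map a frequency in Hz onto the mel scale (HTK convention)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    """Inverse of hz_to_mel."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_band_edges(f_low, f_high, n_bands):
    """Band edges equally spaced on the mel scale, converted back to Hz.
    These edges define the triangular filterbank used in MFCC extraction."""
    lo, hi = hz_to_mel(f_low), hz_to_mel(f_high)
    step = (hi - lo) / (n_bands + 1)
    return [mel_to_hz(lo + i * step) for i in range(n_bands + 2)]

edges = mel_band_edges(0.0, 8000.0, 26)
print(round(edges[0], 1), round(edges[-1], 1))  # endpoints: 0.0 and 8000.0
```

The mel spacing concentrates filters at low frequencies, matching how human hearing resolves pitch, which is why it works well for speech and audio classification workloads.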

Image Source: Atlazo

Mythic M1076 Analog Matrix Processor

The M1076 Mythic AMP can deliver up to 25 TOPS for high-end edge AI applications in a single chip. The chip integrates 76 AMP tiles, stores up to 80M weight parameters, and executes matrix multiplication operations without any external memory, which allows the M1076 to deliver the AI compute performance of a desktop GPU while consuming up to 1/10th the power. AI/ML models can be executed at higher resolution and lower latency for better results with this chip.
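The key idea behind tiled compute-in-memory is that each tile holds its own slice of the weight matrix, so a matrix-vector product is accumulated tile by tile with no external weight fetches. A pure-Python sketch of that dataflow (tile sizes and values here are invented, not M1076 specifics):

```python
# Each "tile" stores a row-slice of the weight matrix W locally and computes
# its partial output independently; results are concatenated, so W never
# leaves the tiles. This models the dataflow, not analog circuit behaviour.

def matvec_tiled(weight_tiles, x):
    """weight_tiles: list of row-slices of W (each a list of rows).
    Returns W @ x, with each tile contributing its rows of the output."""
    y = []
    for tile in weight_tiles:          # each tile computes with local weights
        for row in tile:
            y.append(sum(w * xi for w, xi in zip(row, x)))
    return y

W = [[1, 2], [3, 4], [5, 6], [7, 8]]
tiles = [W[:2], W[2:]]                 # two tiles, two rows of W each
print(matvec_tiled(tiles, [1, 1]))     # → [3, 7, 11, 15]
```

Keeping weights resident in the tiles is what eliminates the DRAM traffic that dominates power consumption in conventional accelerators.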


Image Source: Mythic

NVIDIA A100

The NVIDIA A100 is the chipmaker’s flagship data center GPU for inference and training. First introduced last year, the chip continues to dominate multiple benchmarks for AI performance. Recently, the A100 broke 16 AI performance records in the latest MLPerf benchmarks, which NVIDIA claims makes it the fastest commercially available GPU for training. The A100’s Tensor Cores with Tensor Float 32 (TF32) provide up to 20X higher performance over the previous-generation NVIDIA Volta with zero code changes, and an additional 2X boost with automatic mixed precision and FP16. As a result, a training workload like BERT can be solved at scale in under a minute by 2,048 A100 GPUs, a world-record time to solution.
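TF32 works by keeping float32's 8-bit exponent (so the dynamic range is unchanged) while cutting the mantissa from 23 bits to 10, which is what lets existing FP32 code run faster with no changes. The effect on precision can be sketched by masking off the low mantissa bits of a float32 value (a simple truncation; the actual hardware rounds):

```python
import struct

# Emulate TF32's reduced precision: same 8-bit exponent as float32, but
# only 10 mantissa bits, here approximated by zeroing the 13 low bits.

def to_tf32(x):
    bits = struct.unpack('<I', struct.pack('<f', x))[0]  # float32 bit pattern
    bits &= ~((1 << 13) - 1)          # drop 13 low mantissa bits: 23 -> 10
    return struct.unpack('<f', struct.pack('<I', bits))[0]

print(to_tf32(1.0))   # powers of two are exact: 1.0
print(to_tf32(0.1))   # slightly below 0.1 once the mantissa is truncated
```

For deep learning workloads this loss of mantissa precision is usually negligible, while the shorter multiplier makes the matrix-multiply units substantially faster.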

Image Source: Nvidia

Summing Up

Today’s cutting-edge AI systems require chips that are not just AI-specific but also state-of-the-art. Moreover, the required speed and cost efficiency make it virtually impossible to develop and deploy cutting-edge AI algorithms without state-of-the-art AI chips. The continued development and adoption of such chips, in turn, is broadly beneficial for the future of AI.

 


Copyright Analytics India Magazine Pvt Ltd
