8 Powerful AI Chips Challenging NVIDIA’s Dominance In The Computing Industry

Deep learning has picked up tremendously over the last few years and is being used extensively in numerous areas, from digital assistants to autonomous vehicles. Because these machine learning and deep learning models deal with large data sets, they need powerful chips to crunch large volumes of numbers. The latest advancements are making AI chips more capable than ever before.

A recent report estimates that by 2025, cloud-based AI chipsets will account for $14.6 billion in revenue, and that these AI chips will be used in a variety of areas such as smartphones, smart speakers, AR/VR headsets, and other devices that need AI processing.


While the market has been largely dominated by NVIDIA, many other players, from large companies to startups, are building equally competent AI chips for carrying out large computations. Here we list some of the most powerful AI chips that are revolutionising the deep learning and machine learning space.

1| AWS Inferentia

The latest among AI chips, Amazon announced ‘Inferentia’ during the re:Invent conference in Las Vegas. Designed by Annapurna Labs, an Amazon-owned Israeli company, the chip is built to handle large workloads at low latency. While it is designed for inference, the process of using a trained ML model to find patterns in large data sets, it can also handle heavy workloads, providing thousands of teraflops per Amazon EC2 instance across multiple frameworks. Popular frameworks it is compatible with include TensorFlow, Apache MXNet and PyTorch, and it supports the INT8, mixed-precision FP16 and bfloat16 data types. The company does not aim to compete directly with NVIDIA, Intel or AMD; the chip will be made available only to Amazon’s own cloud customers.
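
To make the data-type support concrete, here is a minimal sketch in plain PyTorch (not Amazon's own SDK) of running a model in bfloat16, one of the reduced-precision formats Inferentia accepts. The model, shapes and the availability of bfloat16 CPU kernels in your PyTorch build are assumptions for illustration only.

```python
# Illustrative only: reduced-precision (bfloat16) inference in PyTorch.
# This does not target Inferentia itself; it merely demonstrates the data
# type the chip supports. Assumes a PyTorch build with bfloat16 CPU kernels.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10)).eval()
model_bf16 = model.to(torch.bfloat16)        # cast weights to bfloat16

x = torch.randn(1, 256).to(torch.bfloat16)   # bfloat16 input batch
with torch.no_grad():
    logits = model_bf16(x)                   # low-precision forward pass
print(logits.float().argmax(dim=1))          # cast back to FP32 for readout
```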


2| Intel’s Myriad 2 AI Chip

Built by Movidius, an Intel company, these chips are used for some of the most ambitious AI, vision and imaging applications that demand both enhanced performance and low power consumption. The Myriad 2 family of processors is transforming the capabilities of devices, delivering industry-proven performance at a compelling price point. In a recent development, an ESA-led team subjected Intel’s Myriad 2 AI chip to one of the most energetic radiation beams available on Earth, in a test carried out at CERN. The chip is run using a pair of twin LEON4 controllers, the latest in the LEON family of integrated circuits developed by ESA with Sweden’s Cobham Gaisler.

3| IBM’s 8-Bit Analog Chip

IBM was recently in the news for new hardware that brings power efficiency and improved training to AI projects. With 8-bit precision for both its analogue and digital AI chips, the hardware is currently being used to test a simple neural net that identifies numerals with 100 percent accuracy. Because data constantly shuttles between memory and processing, consuming valuable energy and time, this AI chip aims to overcome these challenges: IBM’s new analogue chip is based on phase-change memory and uses in-memory computing, which promises to double the accuracy and consume 33x less energy than a digital architecture of similar precision. It is well suited to low-power environments, making it possible to bring AI to Internet of Things (IoT) devices and edge computing applications.
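
IBM’s chip achieves this with analogue, in-memory hardware, but the underlying idea of trading numeric precision for energy can be sketched in software with 8-bit quantization. The toy digit classifier below is a hypothetical stand-in, not IBM’s network.

```python
# Illustrative only: post-training 8-bit quantization of a small digit
# classifier (e.g. 28x28 MNIST-style numerals). Layer sizes are hypothetical.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
).eval()

# Dynamic quantization: Linear weights are stored as 8-bit integers,
# trading a little precision for a smaller, cheaper model.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 1, 28, 28)
print(quantized(x).argmax(dim=1))  # prediction from the 8-bit model
```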

4| Huawei’s Ascend 910 and Ascend 310

Huawei recently announced two new AI chips, the Ascend 910 and Ascend 310, at a global event for the ICT industry held at the Shanghai World Expo Exhibition and Convention Centre. Aimed at data centres and internet-connected consumer devices respectively, these are pegged as some of the most powerful chips for edge computing scenarios. Huawei claims its AI chips process more data in less time than competitors and help train networks in a fraction of the usual time. The Ascend 910 is aimed at data centres and will be available in the second quarter of 2019. The Ascend 310, meanwhile, is aimed at internet-connected devices like smartphones, smartwatches and other gadgets tied to the IoT.

5| AMD GPU Radeon Instinct MI60

A recent addition to the list, AMD announced the world’s first 7nm GPU, the Radeon Instinct MI60, at its Next Horizon conference. With an industry-leading 1TB/sec of memory bandwidth, the company believes the GPU will power the next generation of deep learning and AI applications in high-performance computing (HPC), cloud computing and graphical rendering. The chip delivers very fast floating-point performance, and the company says GPU-to-GPU communication is now about 6X faster than before, enabled by AMD’s Infinity Fabric Link technology. The chips are designed for large-scale operations, where AMD claims its 7nm process dramatically improves performance per watt over previous-generation products.
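
For context on how such a card is programmed, here is a small sketch: on a ROCm build of PyTorch, AMD accelerators like the MI60 are addressed through the familiar torch.cuda interface, so existing deep learning code largely carries over. The ROCm build and a visible AMD GPU are assumptions here.

```python
# Sketch: on a ROCm build of PyTorch, AMD GPUs are exposed through the same
# torch.cuda API used for NVIDIA devices. Assumes a ROCm PyTorch install
# with an AMD GPU visible to the runtime.
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")            # maps to the AMD GPU under ROCm
    print(torch.cuda.get_device_name(0))     # reports the Radeon device name
    a = torch.randn(4096, 4096, device=device, dtype=torch.float16)
    b = torch.randn(4096, 4096, device=device, dtype=torch.float16)
    c = a @ b                                # low-precision matmul on the GPU
    print(c.float().mean().item())
else:
    print("No ROCm/CUDA device detected; running on CPU instead.")
```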

6| Google TPU

Google introduced its homegrown AI chip, the Tensor Processing Unit (TPU), in 2016, and it is now in its third generation. The upgraded TPU goes deeper into artificial intelligence than the initial versions and can carry heavier workloads. The newest improvements will reduce Google’s dependency on chipmakers such as NVIDIA, on which it relied for GPUs to run intensive machine learning applications. The original TPU design was meant for the inference stage of deep learning, whereas the new version can handle training as well. The company claims that a machine translation system which takes a day to train on 32 of the best commercially available GPUs can be trained in six hours on eight connected TPUs. Google currently operates this equipment inside its own data centres rather than selling it to other device makers.
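
How such TPU hardware is driven from user code can be sketched with TensorFlow’s distribution strategy API (TF 2.x). The resolver argument below depends on how the TPU is provisioned and is an assumption for illustration.

```python
# Sketch of targeting a Cloud TPU from TensorFlow 2.x. The tpu name/address
# depends on your environment (assumed auto-detectable here).
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # Variables created inside the scope are replicated across TPU cores.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
# model.fit(...) would then distribute each training step over the TPU cores.
```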

7| PowerVR GPUs and AI chips By Imagination

In a recent announcement, Imagination Technologies unveiled three new PowerVR graphics processing units (GPUs) aimed at various categories of products, including neural network acceleration for AI markets. With a performance range of 0.6 to 10 tera operations per second (TOPS) and multi-core scaling beyond 160 TOPS, these chips will play a crucial role in bringing new computing capabilities to smart cars, smartphones, cameras, IoT devices and more.

8| Qualcomm AI Chips

One of the front runners in chip making for mobile phones, Qualcomm unveiled two new systems-on-chip (SoCs) designed to serve smart visual applications for IoT platforms. The densely packed 10nm FinFET-based chips can track automated equipment in industrial IoT, perform face recognition, and more. In addition, Qualcomm’s Neural Processing SDK for AI is designed to help developers run one or more neural network models trained in Caffe/Caffe2, ONNX or TensorFlow. Along with saving time and effort, it optimises the performance of trained neural networks on devices with Snapdragon processors, provides tools for model conversion and execution, and does much of the heavy lifting needed to run neural networks on device.
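
Since the SDK consumes models in Caffe/Caffe2, ONNX or TensorFlow, a common first step is exporting a trained network to ONNX. The sketch below uses PyTorch; the file name, model and input shape are hypothetical, and the resulting file would still need to be converted with the SDK’s own tools before running on a Snapdragon device.

```python
# Illustrative only: export a trained model to ONNX, one of the formats the
# Qualcomm Neural Processing SDK accepts. Model and shapes are hypothetical.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
).eval()

dummy = torch.randn(1, 3, 224, 224)          # example input used for tracing
torch.onnx.export(model, dummy, "classifier.onnx",
                  input_names=["image"], output_names=["logits"])
```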

Startups In The Space

Apart from the leading chip makers discussed above, many startups are booming in the space. Founded in 2016, Cerebras Systems is a California-based startup that recently raised funding to build chips for next-generation machine learning workloads.

Graphcore, a UK-based AI hardware startup, is working to lower the cost of accelerating AI applications in cloud and enterprise data centres, aiming to increase the performance of both training and inference by up to 100x compared with the fastest systems today.

Coming back to India, we recently wrote about AlphaIC (Alpha Integrated Circuits), a startup trying to introduce revolutionary changes in the world of high-performance computing and data centres using AI. Founded in 2016, it is designing AI chips and working towards what it calls AI 2.0, through which it hopes to enable the next generation of AI with this series of products.


