How is this team from IISc building next generation analog chipsets for AI applications

Publishing a paper is just one way to contribute to the community.

Deep neural networks (DNNs) have grown in size and complexity, making it difficult for conventional digital processors to deliver the required performance with low power consumption and sufficient memory resources. This has made analog computing an attractive alternative: analog computing techniques achieve higher computational density and energy efficiency than an equivalent digital implementation.

Based on this, researchers at IISc Bangalore published a paper describing a novel design framework that can help build next-generation analog computing chipsets that could be faster and require less power than the digital chips found in most electronic devices.

“We developed a novel analog computing paradigm called shape-based analog computing, which achieves the desired functional shape using the transistors’ inherent device physics, utilising universal conservation principles. Using this framework, end-users can create a modular analog architecture just like a digital design while simultaneously maintaining the area and energy efficiency of analog,” said Pratik Kumar, PhD student at IISc Bangalore and one of the authors of the research. The design framework was developed as part of Kumar’s PhD work.

The research team built a prototype of an analog chipset called ARYABHAT-1 (Analog Reconfigurable technologY And Bias-scalable Hardware for AI Tasks) using the framework. The chipset can be used for AI-based applications like object or speech recognition, or for those that require massively parallel computing operations at high speeds.

Image credit: NeuRonICS Lab, DESE, IISc

The research was led by Dr Chetan Singh Thakur, Assistant Professor at the Department of Electronic Systems Engineering (DESE), in collaboration with Shantanu Chakrabartty, Professor at the McKelvey School of Engineering at Washington University in St Louis. Ankita Nandi, a Prime Minister’s Research Fellow working with Dr Thakur at the NeuRonICS Lab of IISc Bangalore, was also involved in the research work.

In an email conversation with Analytics India Magazine, Pratik Kumar spoke about the team’s work, inspiration, and future prospects. 

Driven by the power of the human brain

The research work began in 2019. The researchers say they were intrigued by how powerful and energy-efficient the human brain is. With roughly 86 billion processing units (neurons) and a power consumption of only about 25 watts, the human brain can best even the most powerful supercomputer in the world in terms of computational power, efficiency, and energy consumption. “As engineers, we see the human brain as a mixed-signal processor. While replicating the human brain was not an ideal path forward, it was clear that digital can be augmented with analog to move in a direction similar to the human brain,” said Kumar.

The researchers generalised the margin-propagation design framework using a multi-spline approach to design a basic prototype function that is robust with respect to biasing, process nodes, and temperature variations. They then used this basic prototype function to synthesise shape-based analog computing (S-AC) circuits that approximate different functions commonly used in ML architectures.
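For readers curious about what the margin-propagation (MP) primitive looks like, the sketch below is an illustrative, software-only approximation rather than the authors’ circuit-level formulation. In its commonly published form, MP takes inputs x1…xn and a hyperparameter γ > 0 and solves Σ max(xi − z, 0) = γ for z, which acts as a piecewise-linear stand-in for log-sum-exp and maps naturally onto conservation principles in analog (current-mode) hardware. The bisection solver and the choice of γ here are assumptions made purely for illustration.

```python
# Illustrative sketch of the margin-propagation primitive (not the paper's
# hardware synthesis): find z such that sum(max(x - z, 0)) == gamma.
import numpy as np

def margin_propagation(x, gamma=1.0, iters=100):
    """Solve sum(max(x - z, 0)) == gamma for z by bisection (illustration only)."""
    x = np.asarray(x, dtype=float)
    lo, hi = x.min() - gamma, x.max()        # the solution z lies in [lo, hi]
    for _ in range(iters):
        z = 0.5 * (lo + hi)
        if np.maximum(x - z, 0.0).sum() > gamma:
            lo = z                           # constraint sum too large -> raise z
        else:
            hi = z                           # constraint sum too small -> lower z
    return 0.5 * (lo + hi)

# MP uses only additions, subtractions and thresholding; how closely it
# tracks the log-sum-exp it stands in for depends on the choice of gamma.
x = np.array([1.0, 2.0, 3.0])
print(margin_propagation(x, gamma=1.0))      # MP output
print(np.log(np.exp(x).sum()))               # exact log-sum-exp for comparison
```

The appeal of this kind of primitive is that thresholding and summation are cheap to realise with transistor device physics, whereas exponentials and multiplications are not.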

Image credit: NeuRonICS Lab, DESE, IISc

As per the authors, this research paves the way for designing high-performance analog compute systems for machine learning (ML) and artificial intelligence (AI) tasks that are robust to transistor operating regimes, modular just like digital designs, and at the same time technology scalable. This lends their design the modularity and scalability of the digital world along with the energy and area efficiency of the analog world.

Fundamental challenges

With Moore’s law reaching its end and Dennard scaling having already hit a wall, the industry has shifted its focus to digital accelerators (like GPUs, TPUs, and IPUs), which are still not enough to execute today’s demanding workloads efficiently. “We have hit a wall where we cannot squeeze out more performance per watt from a low technology node; thus, several things like dark silicon and others come into the picture. This challenge is further aggravated by the exponentially increasing size of ML algorithms, which now require computations to happen in billions. All of these have led to a fundamental yet grave hardware bottleneck in the design of digital AI accelerators. Firstly, we cannot do more computation because of fundamental physical limits, and the second is energy consumption,” said Kumar.

“To date, the power density and performance benefits of analog designs remain unmatched by their digital counterparts. But the popularity of analog designs has long been hindered due to the lack of robust modular architectures that can be scaled and synthesised across process technology,” he further added.  

Future impact

Concerning the future impact of the research, Kumar explained that the work focuses on solving a few of these fundamental challenges. The designed architectures are both technology scalable and bias scalable, meaning the same architecture can be used in server applications, where speed matters more than power, as well as in edge applications such as wearable devices, where energy efficiency is the primary concern.

The focus of this work was to design S-AC circuits for machine learning processors, but the approach can be generalised to other analog processors as well. The researchers, in fact, successfully demonstrated the approach on a three-layer neural network. The approach could prove useful in synthesising large-scale analog deep neural networks and reconfigurable machine learning processors.
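As a purely software-level illustration of how such approximate functions might be strung together into a small network, the toy three-layer pass below swaps each activation for a hypothetical MP-based surrogate, reusing the margin_propagation helper from the sketch above. The mp_activation helper, the layer sizes, and the random weights are all assumptions for illustration; the paper’s actual S-AC networks are synthesised as analog circuits, not simulated in software like this.

```python
# Toy three-layer feed-forward pass with an MP-based activation surrogate.
# Reuses margin_propagation() from the earlier sketch; everything here is
# hypothetical and for illustration only.
import numpy as np

def mp_activation(v, gamma=1.0):
    # Hypothetical softplus-like surrogate: MP over the pair (v, 0) gives a
    # piecewise-linear stand-in for log(1 + exp(v)).
    return margin_propagation(np.array([v, 0.0]), gamma=gamma)

rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 8)),   # layer 1
           rng.standard_normal((8, 8)),   # layer 2
           rng.standard_normal((8, 3))]   # layer 3
h = rng.standard_normal(4)                # toy input vector
for W in weights:
    h = np.array([mp_activation(z) for z in h @ W])
print(h)                                  # output of the toy three-layer network
```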

“We are delighted to receive a very positive response from the community. Such feedback encourages us to carry our work further in this field. In fact, a few more related papers are aligned and will come by the year-end, enriching the proposed methodology of ‘Shape-based Analog Computing’,” Kumar said.


Zinnia Banerjee

Zinnia loves writing and it is this love that has brought her to the field of tech journalism.
