
How Reinforcement Learning Is Advancing Chip Designing


Arranging billions of components on the tiny surface of a computer chip is a complicated process. It calls for precise decision-making at every step and requires a designer with years of experience laying out circuits that squeeze power efficiency from nanoscopic devices.

Designers are now tapping the latest AI advances to learn the processes involved in chip design and draw up more powerful blueprints in less time. This allows engineers to co-design with AI software that searches for optimal configurations, complementing the designer's own perspective.

Nvidia

Principal research scientist Haoxing Ren spoke to Wired about testing reinforcement learning on AI chips to arrange components and wiring. The process involves exploring chip designs in simulation and training a neural network to recognise the decisions that produce a high-performing chip. According to Ren, this approach can create an excellent chip while cutting the engineer's work in half.

Google

At Google, designers are using ML to create chips at speeds far beyond human capability: algorithms finish months' worth of work in just six hours. Google has successfully applied AI commercially in chipmaking, and the study has been published in Nature. The paper describes the upcoming version of Google's TPU chips, optimised for AI computation.

In the paper, the researchers explained the task of 'floorplanning', which usually requires a human designer but is now tackled by Google's algorithms. The planning involves CPUs, GPUs and memory cores connected by tens of kilometres of wiring, with the placement on the chip being critically important. Each component must be carefully positioned on the die to preserve the eventual efficacy of the chip; even a nanometre of displacement can have huge effects.

Google's engineers likened floor plan design to a 'board game' for machines. Here the 'board' is a silicon die, the 'pieces' are components such as CPUs and GPUs, and the AI is tasked with finding the board's 'win condition': computational efficiency.
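The board-game framing can be made concrete with a toy sketch: an agent places one block per step on a small grid, and the episode ends when every block has been placed. The class and its interface below are illustrative assumptions, not Google's actual environment.

```python
# A toy "placement game": the agent places one block per step on a
# small grid; the episode ends when all blocks are placed.
# All names and sizes here are illustrative, not Google's environment.

class PlacementGame:
    def __init__(self, grid_size, num_blocks):
        self.grid_size = grid_size
        self.num_blocks = num_blocks
        self.reset()

    def reset(self):
        self.placements = {}      # block id -> (row, col)
        self.occupied = set()
        self.next_block = 0
        return self.next_block

    def legal_moves(self):
        # Any free cell is a legal placement for the current block.
        return [(r, c) for r in range(self.grid_size)
                for c in range(self.grid_size)
                if (r, c) not in self.occupied]

    def step(self, cell):
        assert cell in self.legal_moves(), "cell already occupied"
        self.placements[self.next_block] = cell
        self.occupied.add(cell)
        self.next_block += 1
        done = self.next_block == self.num_blocks
        return self.next_block, done

game = PlacementGame(grid_size=4, num_blocks=3)
_, done = game.step((0, 0))
_, done = game.step((1, 1))
_, done = game.step((2, 2))
print(done)   # True: episode ends once all blocks are placed
```

A real floorplanner places macros on a much larger canvas and must respect density and overlap constraints, but the sequential structure is the same: one placement decision per step.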

Google's engineers trained the algorithm on a dataset of 10,000 chip floor plans, each tagged with a reward reflecting its efficacy across different metrics. Through reinforcement learning, the algorithm then used this data to differentiate between good and bad floor plans.
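A hedged sketch of how a floor plan might be scored: half-perimeter wirelength (HPWL) is a standard industry proxy for wirelength, and lower total wirelength maps to higher reward. The nets, coordinates and sign convention below are illustrative assumptions, not the exact metrics Google used.

```python
# Sketch of tagging a floor plan with a scalar reward.
# HPWL (half-perimeter wirelength) is a standard proxy for wirelength;
# the actual metrics in the paper are a weighted blend, so this is
# illustrative only.

def hpwl(net, placements):
    """Half-perimeter wirelength of one net (a set of block ids)."""
    xs = [placements[b][0] for b in net]
    ys = [placements[b][1] for b in net]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def reward(nets, placements):
    # Lower total wirelength -> higher reward.
    return -sum(hpwl(net, placements) for net in nets)

placements = {0: (0, 0), 1: (3, 0), 2: (0, 4)}   # block id -> (x, y)
nets = [{0, 1}, {0, 2}]                          # nets connect blocks
print(reward(nets, placements))   # -(3 + 4) = -7
```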

Reinforcement learning

According to Song Han, an assistant professor of electrical engineering at MIT, reinforcement learning has significant potential for improving chip design, since it is difficult to predict a 'good' design without experience. 'The input to our model is the chip netlist (node types and graph adjacency information), the ID of the current node to be placed, and some netlist metadata, such as the total number of wires, macros, and standard cell clusters,' according to a Google AI blog. 'The netlist graph and the current node are passed through an edge-based graph neural network to encode the input state. This generates embeddings of the partially placed graph and the candidate node.'
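The graph-encoding step can be sketched with one round of neighbour aggregation over a small netlist graph, loosely in the spirit of the edge-based graph neural network the blog describes. The feature dimensions, single-layer update and mean pooling are assumptions for illustration, not Google's architecture.

```python
import numpy as np

# Minimal sketch: encode a netlist graph into node embeddings with one
# round of neighbour aggregation. Dimensions and the single-layer
# update are illustrative assumptions, not Google's architecture.

rng = np.random.default_rng(0)

num_nodes, feat_dim, emb_dim = 4, 3, 8
features = rng.normal(size=(num_nodes, feat_dim))   # per-node features
adjacency = np.array([[0, 1, 1, 0],
                      [1, 0, 0, 1],
                      [1, 0, 0, 1],
                      [0, 1, 1, 0]], dtype=float)   # netlist connectivity

W_self = rng.normal(size=(feat_dim, emb_dim))
W_neigh = rng.normal(size=(feat_dim, emb_dim))

# One message-passing step: combine each node's own features with the
# mean of its neighbours' features, then apply a ReLU nonlinearity.
deg = adjacency.sum(axis=1, keepdims=True)
neigh_mean = adjacency @ features / deg
embeddings = np.maximum(features @ W_self + neigh_mean @ W_neigh, 0.0)

# Graph embedding: mean-pool the node embeddings of the partial placement.
graph_embedding = embeddings.mean(axis=0)
print(graph_embedding.shape)   # (8,)
```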

Credits: Google AI Blog

The embeddings are concatenated into a single state embedding and passed to a feedforward neural network whose output is a learned representation of the valuable features; this serves as input to the policy network, which produces a probability distribution over node placements. 'RL training is guided by a fast-but-approximate reward signal calculated for each of the agent's chip placements using the weighted average of approximate wire length and approximate congestion.' The study observed that pre-training improved both sample efficiency and placement quality.
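The quoted reward signal can be sketched directly as a weighted average of approximate wirelength and approximate congestion. The weight and both proxy numbers below are placeholders, not Google's actual settings.

```python
# Sketch of the quoted reward: a weighted average of approximate
# wirelength and approximate congestion. Weight and inputs are
# placeholder values, not Google's settings.

def placement_reward(approx_wirelength, approx_congestion, w=0.5):
    # Both quantities are costs, so the agent maximises their negation.
    return -(w * approx_wirelength + (1.0 - w) * approx_congestion)

r = placement_reward(approx_wirelength=120.0, approx_congestion=40.0, w=0.5)
print(r)   # -80.0
```

Because this proxy is cheap to evaluate, it can be computed for every placement the agent tries during training, which is what makes the RL loop fast enough to be practical.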


Avi Gopani

Avi Gopani is a technology journalist who analyses industry trends and developments from an interdisciplinary perspective at Analytics India Magazine. Her articles chronicle cultural, political and social stories, curated with a focus on the evolving technologies of artificial intelligence and data analytics.