
Tech Giants Turn To AI For Chip Design

As the world reels under an acute chip shortage, using AI and machine learning techniques for chip design seems to be a possible solution.

Just two months after Google unveiled its new deep reinforcement learning technique for designing the next generation of Tensor Processing Units, Samsung has revealed that it will be taking a similar route. As per a report in Wired, the South Korean semiconductor giant will deploy new software from chip design firm Synopsys that uses artificial intelligence to create its chips. Aart de Geus, the chairman and co-CEO of Synopsys, claims this will be the first commercial processor designed with AI.

Apart from Google and Samsung (with Synopsys), companies like NVIDIA and IBM are also dabbling in AI-designed chips. If deployed successfully, these techniques could open up a new area of AI applications and possibly mark a breakthrough in chip design.

Samsung, Synopsys & AI-designed Chips

A Samsung spokesperson has confirmed that the company will use Synopsys' AI software to design its Exynos chips for smartphones. For this, Samsung's collaborator Synopsys will use its DSO.ai tool, design space optimisation AI software that can autonomously identify optimal ways to arrange chip components. The tool does so while reducing area and power consumption and increasing performance.

DSO.ai uses reinforcement learning to evaluate the available alternatives against the final design goals, ultimately producing better designs than those produced by human engineers.

DSO.ai was inspired by AlphaGo, the DeepMind computer program that broke all records when it beat a human champion at the game of Go in 2016. Like Go, chip design presents a vast space of potential solutions, except that the chip design space is around a trillion times larger than the game's. Moreover, chip design is a labour-intensive task that requires weeks of experimentation guided by past experience. DSO.ai offers a generate-and-optimise paradigm that uses reinforcement learning to search for optimal solutions.
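To make the paradigm concrete, here is a toy sketch of that kind of search loop over design-flow knobs. Everything here is a placeholder, not Synopsys' actual API: the parameter names are invented, the "evaluation" fakes an EDA run, and the learned policy is replaced by simple hill-climbing with restarts for brevity.

```python
import random

# Hypothetical knobs a design flow might expose (illustrative only;
# these are not Synopsys' actual parameter names).
SEARCH_SPACE = {
    "placement_density": [0.5, 0.6, 0.7, 0.8],
    "clock_buffer_size": [1, 2, 4, 8],
    "voltage_mv": [650, 700, 750, 800],
}

def evaluate(config):
    """Stand-in for a slow EDA run reporting power, performance, area.

    A real system would launch synthesis and place-and-route here;
    we fabricate a score so the loop is runnable.
    """
    power = config["voltage_mv"] * 0.01 + config["clock_buffer_size"] * 0.3
    perf = config["voltage_mv"] * 0.02 + config["placement_density"] * 2.0
    area = 10.0 - config["placement_density"] * 4.0
    # Single scalar objective: reward performance, penalise power and area.
    return perf - 0.5 * power - 0.3 * area

def mutate(config):
    """Propose a neighbouring design point by re-sampling one knob."""
    knob = random.choice(list(SEARCH_SPACE))
    new = dict(config)
    new[knob] = random.choice(SEARCH_SPACE[knob])
    return new

# Each iteration evaluates an alternative against the design goals and
# keeps the best; a real RL agent would instead update a policy.
best = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
best_score = evaluate(best)
for _ in range(200):
    if random.random() < 0.1:  # occasional random restart
        candidate = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
    else:
        candidate = mutate(best)
    score = evaluate(candidate)
    if score > best_score:
        best, best_score = candidate, score

print(best, round(best_score, 2))
```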

Other applications of DSO.ai include fine-tuning library cells to give the best frequency and lowest power, shrinking the die size within an existing floor plan, determining the operating voltage that gives the best power-versus-performance trade-off, and exploring the effect of custom clock structures or power distribution networks.

Reinforcement Learning for Chip Design

Each computer chip is divided into a number of blocks; a single block is an individual module such as a memory subsystem or a control logic system. Determining the layout of a chip block is called chip floorplanning, one of the most daunting and time-consuming tasks in the chip design process. The task has to be carried out so that power consumption and area are minimised and performance is maximised, all while adhering to constraints on density and routing congestion.
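In optimisation terms, floorplanning is usually cast as minimising a weighted cost over competing metrics under hard constraints. A hypothetical proxy might look like this (the weights and metric names are illustrative, not any tool's actual objective):

```python
def floorplan_cost(wirelength, congestion, density,
                   w_wire=1.0, w_cong=0.5, max_density=0.9):
    """Illustrative proxy cost for a chip floorplan.

    Lower is better: shorter wires save power and improve timing,
    while density is treated as a hard constraint. Weights are made up.
    """
    if density > max_density:          # constraint violated: reject outright
        return float("inf")
    return w_wire * wirelength + w_cong * congestion
```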

Many studies have been conducted to optimise this process. Even so, it still takes human experts weeks of work to produce solutions that meet the multi-faceted design criteria.

Last year, Google proposed chip placement with deep reinforcement learning. Unlike prior methods, this approach treats chip placement as a reinforcement learning problem: an agent is trained to optimise the quality of chip placements, and it can learn from past experience and improve over time.

While existing baselines need human experts in the loop and may take weeks to generate a chip design, the proposed approach can generate placements within just six hours. To generate placements for previously unseen chip blocks, Google researchers used a zero-shot method that yields a placement in less than a second; fine-tuning the policy on the new block improves results even further.
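A heavily simplified sketch of the idea follows, assuming a toy grid and a random stand-in "policy" where Google's method uses a trained graph neural network over the netlist: macros are placed one at a time, and the negative proxy cost of the finished layout serves as the episode reward.

```python
import random

GRID = 8       # toy canvas; real chips use a much finer grid
MACROS = 5     # number of blocks to place sequentially

def wirelength(placements):
    """Half-perimeter-style proxy: bounding-box spread of placed macros."""
    xs = [p[0] for p in placements]
    ys = [p[1] for p in placements]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def policy(placed, free_cells):
    """Placeholder for the learned policy network: picks a free cell.

    A trained policy would rank cells given the netlist and partial
    placement; here we sample uniformly so the sketch is runnable.
    """
    return random.choice(sorted(free_cells))

def run_episode():
    free = {(x, y) for x in range(GRID) for y in range(GRID)}
    placed = []
    for _ in range(MACROS):            # one macro placed per RL step
        cell = policy(placed, free)
        free.remove(cell)
        placed.append(cell)
    return -wirelength(placed)         # reward: negative proxy cost

# "Training" here just keeps the best of many rollouts; a real agent
# would update the policy network from these rewards instead.
best_reward = max(run_episode() for _ in range(1000))
print("best episode reward:", best_reward)
```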


This year, Google's research came to fruition when the tech giant described, in a paper in Nature, how its machine learning technique had been applied to a commercial product: the upcoming version of its TPU, which is optimised for AI computation.

Similarly, Synopsys' rival Cadence recently launched Cerebrus, an intelligent chip explorer. This tool also uses reinforcement learning to optimise the physical design process. After block engineers specify the design goals, Cerebrus optimises the Cadence digital full flow to meet the specified power, performance, and area (PPA) goals. Using Cerebrus, engineers can optimise the flow for multiple blocks concurrently, which makes it especially suited to large, complex system-on-chip designs.

Last year, NVIDIA too made inroads into this area. In a paper, NVIDIA researchers outlined how deep convolutional neural networks, graph neural networks, and other machine learning techniques can be used to accelerate chip design.
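One recurring example of this family of techniques in the literature is using a convolutional network to predict routing congestion directly from a placement's density map, replacing a slow trial route. A minimal, hypothetical PyTorch sketch (the architecture and shapes are illustrative, not NVIDIA's model):

```python
import torch
import torch.nn as nn

class CongestionNet(nn.Module):
    """Toy fully-convolutional net: density map in, congestion map out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),  # per-cell congestion estimate
        )

    def forward(self, x):
        return self.net(x)

model = CongestionNet()
density = torch.rand(1, 1, 64, 64)     # fake 64x64 placement density map
congestion = model(density)            # prediction keeps the spatial shape
print(congestion.shape)                # torch.Size([1, 1, 64, 64])
```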


Wrapping Up

As the world reels under an acute chip shortage, AI and machine learning techniques for chip design look like a possible way out. In addition, more accessible and efficient chips would help develop autonomous systems, 5G communication, and other AI devices. It is still early days, representing only about 10 per cent of the opportunity.

That said, whether done manually or automatically, chip design and floorplanning require expertise in computing, device physics, and electronic engineering. Companies therefore need to consider how they will equip themselves to meet the demand for these skills. While many experts rule out chip design automation taking away human jobs, companies must have the foresight to build systems under which this transition is smooth.
