Tom’s Hardware
Technology
Mark Tyson

Nvidia Tech Uses AI to Optimize Chip Designs up to 30X Faster

Nvidia AutoDMP

Nvidia is one of the leading designers of chips used for artificial intelligence (AI) and machine learning (ML) acceleration. Therefore it is apt that it looks set to be one of the pioneers in applying AI to chip design. Today, it published a paper and blog post revealing how its AutoDMP system can accelerate modern chip floor-planning using GPU-accelerated AI/ML optimization, resulting in a 30X speedup over previous methods. 

AutoDMP is short for Automated DREAMPlace-based Macro Placement. It is designed to plug into an Electronic Design Automation (EDA) system used by chip designers, to accelerate and optimize the time-consuming process of finding optimal placements for the building blocks of processors. In one of Nvidia’s examples of AutoDMP at work, the tool applied its AI to the problem of determining an optimal layout of 256 RISC-V cores, accounting for 2.7 million standard cells and 320 memory macros. AutoDMP took 3.5 hours to come up with an optimal layout on a single Nvidia DGX Station A100.

AutoDMP delivers a very similar placement plan to experts using the latest EDA layout design tools. The similarity is somewhat reassuring, and indicative that the AI is a sensible time-saver rather than a revolutionary change. (Image credit: Nvidia)

Macro placement has a significant impact on the landscape of the chip, “directly affecting many design metrics, such as area and power consumption,” notes Nvidia. Optimizing these placements is therefore key to chip performance and efficiency, which directly affect the end customer.

(Image credit: Nvidia)

On the topic of how AutoDMP works, Nvidia says that its analytical placer “formulates the placement problem as a wire length optimization problem under a placement density constraint and solves it numerically.” GPU-accelerated algorithms deliver up to 30x speedup compared to previous methods of placement. Moreover, AutoDMP supports mixed-sized cells. In the top animation, you can see AutoDMP placing macros (red) and standard cells (gray) to minimize wire length in a constrained area.
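The formulation Nvidia describes — minimizing wire length subject to a placement density constraint — can be illustrated with a toy analytical placer. The sketch below is purely illustrative and is not Nvidia's AutoDMP code: it assumes a handful of cells connected by two-pin nets, uses squared Euclidean wire length, approximates the density constraint with a pairwise overlap penalty, and solves the resulting objective numerically by gradient descent.

```python
import numpy as np

# Toy analytical placement sketch (illustrative only, not AutoDMP itself).
# Cells are points in 2D; nets are two-pin connections. We minimize total
# squared wire length plus a penalty that pushes cells apart when they get
# closer than a minimum separation, standing in for the density constraint.

rng = np.random.default_rng(0)
n_cells = 8
pos = rng.uniform(0.0, 10.0, size=(n_cells, 2))   # (x, y) for each cell
nets = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 0)]

def wirelength(p):
    # Sum of squared net lengths (a common smooth wirelength surrogate).
    return sum(float(np.sum((p[a] - p[b]) ** 2)) for a, b in nets)

def density_penalty(p, min_sep=1.0):
    # Quadratic penalty for every pair of cells closer than min_sep.
    pen = 0.0
    for i in range(n_cells):
        for j in range(i + 1, n_cells):
            d = float(np.linalg.norm(p[i] - p[j]))
            if d < min_sep:
                pen += (min_sep - d) ** 2
    return pen

def objective(p, lam=1.0):
    return wirelength(p) + lam * density_penalty(p)

start = objective(pos)

# Solve numerically: central-difference gradient descent.
lr, eps = 0.01, 1e-5
for _ in range(200):
    grad = np.zeros_like(pos)
    for i in range(n_cells):
        for k in range(2):
            pos[i, k] += eps
            f_plus = objective(pos)
            pos[i, k] -= 2 * eps
            f_minus = objective(pos)
            pos[i, k] += eps
            grad[i, k] = (f_plus - f_minus) / (2 * eps)
    pos -= lr * grad

end = objective(pos)
print(end < start)  # the optimizer reduces the placement objective
```

Real placers like DREAMPlace replace the numerical gradient with analytic gradients evaluated in parallel on the GPU across millions of cells, which is where the reported 30x speedup comes from.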

(Image credit: Nvidia)

We have talked about the design speed benefits of using AutoDMP, but have not yet touched upon benefits to design quality. In the figure above, you can see that compared to seven alternative existing designs for a test chip, the AutoDMP-optimized chip offers clear benefits in wire length, power, worst negative slack (WNS), and total negative slack (TNS). Results above the line are a win for AutoDMP versus the various rival designs.
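For readers unfamiliar with the timing metrics above, WNS and TNS are conventionally derived from per-endpoint timing slacks: WNS is the single worst (most negative) slack, while TNS sums all negative slacks. The snippet below uses made-up illustrative slack values, not Nvidia's data.

```python
# Sketch of how WNS and TNS are derived from per-endpoint timing slacks.
# Slack values are illustrative, not real measurements.

slacks = [0.12, -0.05, 0.30, -0.20, -0.01, 0.08]  # ns, one per endpoint

negative = [s for s in slacks if s < 0]
wns = min(negative) if negative else 0.0  # worst single violation
tns = sum(negative) if negative else 0.0  # total severity of all violations

print(round(wns, 2), round(tns, 2))  # -0.2 -0.26
```

A design can improve TNS (fewer or milder violations overall) without improving WNS, which is why the figure reports both.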

AutoDMP is open source, with the code published on GitHub.

Nvidia isn’t the first chip designer to leverage AI for optimal layouts; back in February we reported on Synopsys and its DSO.ai automation tool, which has already been used for 100 commercial tape-outs. Synopsys described its solution as an “expert engineer in a box.” It added that DSO.ai was a great fit for on-trend multi-die silicon designs, and its use would free engineers from dull iterative work, so they could bend their talents towards more innovation.
