Tom’s Hardware
Technology
Jowi Morales

Summit supercomputer set to be retired in November — it was the world's most powerful back in 2018-19

Summit supercomputer at ORNL.

Oak Ridge National Laboratory’s (ORNL) Summit supercomputer is set to be decommissioned in November this year, after almost six years of service and 200 million node hours delivered to researchers. The Oak Ridge Leadership Computing Facility (OLCF) made the announcement today on X (formerly Twitter) after the last day of the 2024 OLCF User Meeting, where attendees signed a piece of the supercomputer to commemorate its faithful service.

Summit once stood as the most powerful supercomputer in the world, holding the top spot on the Top500 list through 2018 and 2019. It has 4,356 nodes, each powered by two 22-core IBM Power9 CPUs running at 3.07 GHz and six Nvidia Tesla V100 GPUs. It was dethroned in 2020 by the Arm-based Fugaku supercomputer, but ORNL regained the top spot in 2022 when it introduced the AMD-powered Frontier supercomputer.

Despite being six years old, a relative eternity in terms of computer development, Summit never left the top ten of Top500’s most powerful supercomputers during its lifetime. Even so, ORNL has deemed that it’s time for Summit to retire. After all, Summit’s theoretical peak of 200.79 PFlop/s pales against the 1,714.81 PFlop/s of Frontier, the first supercomputer to break the exascale barrier. Aside from this, ORNL is working with Quantum Brilliance (QB) to integrate QB’s quantum accelerators, helping ORNL’s researchers evaluate their viability for quantum computing.

The newer Frontier supercomputer is also far more efficient: it consumes a little over twice the power (22,786 kW) that Summit needed (10,096 kW) while delivering over eight times the performance. This is especially crucial as power consumption is now the number one concern in high-performance computing. A single modern H100 GPU is expected to consume at least 3.7 MWh of energy annually, with all the AI GPUs sold last year alone already accounting for 14,348.36 GWh of electricity use. And with Nvidia’s next-generation Blackwell GPUs expected to consume even more power, we’re in dire need of new processors that are far more efficient while still able to deliver the ever-growing computing power that our data-driven society needs.
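As a quick back-of-the-envelope check of that efficiency claim, the peak-performance and power figures quoted above can be converted into GFlops per watt, a standard efficiency metric (note that the official Green500 rankings use measured HPL performance rather than the theoretical peak used here):

```python
# Figures cited in the article (theoretical peak and reported power draw).
SUMMIT_PFLOPS, SUMMIT_KW = 200.79, 10_096
FRONTIER_PFLOPS, FRONTIER_KW = 1_714.81, 22_786

def gflops_per_watt(pflops: float, kw: float) -> float:
    """Convert peak PFlop/s and power in kW to GFlops per watt."""
    return (pflops * 1e6) / (kw * 1e3)  # 1 PFlop/s = 1e6 GFlop/s; 1 kW = 1e3 W

summit = gflops_per_watt(SUMMIT_PFLOPS, SUMMIT_KW)      # ≈ 19.9 GFlops/W
frontier = gflops_per_watt(FRONTIER_PFLOPS, FRONTIER_KW)  # ≈ 75.3 GFlops/W
print(f"Summit:   {summit:.1f} GFlops/W")
print(f"Frontier: {frontier:.1f} GFlops/W")
print(f"Frontier is {frontier / summit:.1f}x more efficient")
```

On these numbers Frontier comes out roughly 3.8 times more energy-efficient than Summit, which squares with the article’s figures of a bit over twice the power for over eight times the performance.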
