AMD (NASDAQ: AMD), in partnership with HPE and Lawrence Livermore National Laboratory (LLNL), has announced that El Capitan, the exascale-class supercomputer being built at LLNL, will use next-generation AMD EPYC CPUs and AMD Radeon Instinct GPUs, with the open-source AMD ROCm stack as its heterogeneous computing software.
The El Capitan system is scheduled to debut in early 2023 with two exaFLOPS of double-precision performance (two quintillion floating-point operations per second), and it is expected to be the world's fastest supercomputer when it ships.
The El Capitan exascale supercomputing system will utilize next gen Radeon Instinct GPUs based on a new compute-optimized architecture for workloads including HPC and AI. #ExploreElCapitan
— AMD Instinct (@AMDInstinct) March 5, 2020
This breakthrough performance will primarily support the National Nuclear Security Administration's (NNSA) mission of ensuring the security, reliability, and safety of the US nuclear stockpile.
The AMD-based nodes will also accelerate artificial intelligence (AI) and machine learning (ML) workloads, supporting the research, analysis, and computing that benefit NNSA programs.
‘The El Capitan supercomputer will be powered by next-gen AMD EPYC CPUs and Radeon Instinct GPUs to drive unmatched advances in AI and HPC,’ said Forrest Norrod, senior vice president and general manager of AMD’s Datacenter and Embedded Solutions Business Group.
Building on a solid foundation and a strong product line, AMD aims to help the three NNSA labs (Los Alamos National Laboratory, Sandia National Laboratories, and LLNL) achieve their mission-critical goals and to contribute to industry-wide innovation in AI.
Norrod continued, ‘We are proud of our long-standing collaboration with the NNSA and HPE in exascale-class supercomputing, and we plan to deliver the world’s fastest supercomputer in early 2023.’
So how does AMD plan to achieve this great feat?
AMD is drawing on its experience in the HPC field and refining its current CPU and GPU designs so that El Capitan can deliver both ease of use and superior high-end performance.
So here’s what El Capitan’s AMD specifications look like:
The next-generation AMD EPYC processors, codenamed ‘Genoa,’ are built on the ‘Zen 4’ core and will support next-generation memory and I/O subsystems to accelerate HPC and AI workloads.
The next-generation Radeon Instinct GPUs will use next-generation high-bandwidth memory and are designed for optimal deep learning performance.
The third-generation AMD Infinity Architecture provides high-bandwidth, low-latency connections between the four Radeon Instinct GPUs and the single AMD EPYC CPU in each El Capitan node. Impressive, isn’t it?
The architecture also provides unified memory across the CPU and GPUs, which makes it easier for programmers to take advantage of accelerated computing.
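To see why that matters to developers, here is a minimal sketch using the ROCm HIP runtime (the saxpy kernel and array sizes are illustrative, not part of El Capitan’s actual software stack): with a managed allocation, the CPU and GPU share the same pointer, so no explicit host-to-device copies are needed.

```cpp
// Minimal ROCm/HIP sketch of unified (managed) memory shared by CPU and GPU.
// Illustrative only; not El Capitan production code.
#include <hip/hip_runtime.h>
#include <cstdio>

// Simple kernel: y[i] = a * x[i] + y[i]
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x = nullptr, *y = nullptr;

    // One allocation visible to both the CPU and the GPU -- no hipMemcpy needed.
    hipMallocManaged(&x, n * sizeof(float));
    hipMallocManaged(&y, n * sizeof(float));

    // Initialize directly from the CPU.
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch on the GPU using the same pointers.
    const int block = 256;
    const int grid = (n + block - 1) / block;
    hipLaunchKernelGGL(saxpy, dim3(grid), dim3(block), 0, 0, n, 2.0f, x, y);
    hipDeviceSynchronize();

    // Read the result back directly from the CPU.
    printf("y[0] = %f (expected 4.0)\n", y[0]);

    hipFree(x);
    hipFree(y);
    return 0;
}
```

On hardware with coherent CPU-GPU memory, the runtime can service these shared accesses without the manual staging that separate memory spaces normally require, which is the programmer convenience the unified architecture is aiming at.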
LLNL Director Bill Goldstein stated: ‘AMD’s advanced CPU and GPU technology gives us access to unparalleled computing power that will help maintain the US’s global leadership in high-performance computing. Today’s announcement shows how government and industry leaders can work together for the benefit of the entire country.’
HPE senior vice president Peter Ungaro added: ‘We are happy to work with AMD to combine HPE’s Cray Shasta architecture with the newest AMD EPYC CPUs and Radeon Instinct GPUs. This technology will power critical HPC and AI workloads at LLNL. Building on our past successes, HPE systems built with AMD processors and GPUs will provide distinctive solutions for high-performance computing and exascale supercomputing. We are enthusiastic about continuing our partnership with AMD to unlock more innovation.’