Volume 15 (2024). Issue 2 (61). Paper No. 6 (453)

Hardware and software for distributed and supercomputer systems

Review Article

New generation of GPGPU and related hardware: computing systems microarchitecture and performance from servers to supercomputers

Mikhail Borisovich Kuzminsky

Zelinsky Institute of Organic Chemistry of RAS, Moscow, Russia
Mikhail Borisovich Kuzminsky — Corresponding author, kus@free.net

Abstract. An overview of the current state of GPGPUs is given, focusing on their use for traditional HPC tasks (and, to a lesser extent, AI). The baseline GPGPUs in the review are the Nvidia V100 and A100; the Nvidia H100, AMD MI100 and MI200, Intel Ponte Vecchio (Data Center GPU Max), as well as the BR100 from Biren Technology are considered as new-generation GPGPUs. The microarchitecture and hardware features of these GPGPUs that are important for HPC and AI tasks are analyzed and compared, together with the most important additional hardware for building computer systems with GPGPUs: CPUs specialized (if perhaps only for the initial period of their use) for working with the new generation of GPGPUs, and interconnects. Brief information is given about the servers (including multi-GPU servers) that use them and about new supercomputers built with these GPGPUs, from which data on the performance achieved with GPGPUs was obtained.

The SDKs of the GPGPU manufacturers and software from other companies (including mathematical libraries) are briefly reviewed. Examples are given that demonstrate the tools of widely used programming models that are important for achieving maximum performance, while at the same time contributing to the non-portability of program code to other GPGPU models; a minimal illustration of such a tool is sketched below.
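As a minimal sketch (not taken from the article itself) of the kind of vendor-specific performance tool meant here, the CUDA-only warp intrinsic __shfl_down_sync can replace shared-memory traffic in a reduction, but the code must be rewritten for HIP or DPC++, which use their own warp/sub-group primitives:

~~~cuda
#include <cstdio>
#include <cuda_runtime.h>

// Warp-level sum reduction using the CUDA-specific __shfl_down_sync intrinsic.
// The intrinsic avoids shared memory, but ties the code to CUDA.
__global__ void warpReduceSum(const float* in, float* out, int n) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    float val = (idx < n) ? in[idx] : 0.0f;

    // Reduce within a 32-thread warp; afterwards lane 0 holds the warp sum.
    for (int offset = 16; offset > 0; offset >>= 1)
        val += __shfl_down_sync(0xffffffff, val, offset);

    if ((threadIdx.x & 31) == 0)
        atomicAdd(out, val);  // accumulate per-warp partial sums
}

int main() {
    const int n = 1 << 20;
    float *d_in, *d_out, h_out = 0.0f;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, sizeof(float));
    cudaMemset(d_out, 0, sizeof(float));

    // Fill the input with ones on the host and copy it to the device.
    float* h_in = new float[n];
    for (int i = 0; i < n; ++i) h_in[i] = 1.0f;
    cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);

    warpReduceSum<<<(n + 255) / 256, 256>>>(d_in, d_out, n);
    cudaMemcpy(&h_out, d_out, sizeof(float), cudaMemcpyDeviceToHost);
    printf("sum = %.0f (expected %d)\n", h_out, n);

    cudaFree(d_in); cudaFree(d_out); delete[] h_in;
    return 0;
}
~~~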

Particular attention is paid to the possibilities of using tensor cores and their analogues in modern GPGPUs from other companies, including calculations with reduced (relative to FP64, the standard format for HPC) and mixed precision, which are relevant because of the sharp increase in performance they achieve on GPGPU tensor cores. Data on their "real-world" performance in benchmarks and applications for HPC and AI is analyzed. The use of modern batch linear algebra libraries on GPGPUs, including for HPC applications, is also briefly discussed; a sketch of such a call follows this paragraph. (Linked article texts in Russian and in English.)
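As a minimal sketch (an illustration under assumed matrix sizes, not an example from the article) of how mixed precision and batched linear algebra combine on GPGPUs, the cuBLAS call cublasGemmStridedBatchedEx below takes FP16 inputs with FP32 accumulation, the kind of request that lets the library route the work to tensor cores:

~~~cuda
#include <cstdio>
#include <vector>
#include <cuda_fp16.h>
#include <cublas_v2.h>

// Batched GEMM with FP16 inputs and FP32 output/accumulation.
int main() {
    const int m = 64, n = 64, k = 64, batch = 128;
    size_t elemsA = (size_t)m * k, elemsB = (size_t)k * n, elemsC = (size_t)m * n;

    // Host data: A and B filled with 1.0 in half precision, so each C element equals k.
    std::vector<__half> hA(elemsA * batch, __float2half(1.0f));
    std::vector<__half> hB(elemsB * batch, __float2half(1.0f));
    std::vector<float>  hC(elemsC * batch, 0.0f);

    __half *dA, *dB; float *dC;
    cudaMalloc(&dA, hA.size() * sizeof(__half));
    cudaMalloc(&dB, hB.size() * sizeof(__half));
    cudaMalloc(&dC, hC.size() * sizeof(float));
    cudaMemcpy(dA, hA.data(), hA.size() * sizeof(__half), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB.data(), hB.size() * sizeof(__half), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);

    const float alpha = 1.0f, beta = 0.0f;
    // FP16 inputs (CUDA_R_16F), FP32 result and accumulation (CUBLAS_COMPUTE_32F):
    // the mixed-precision path that unlocks tensor-core throughput.
    cublasGemmStridedBatchedEx(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                               m, n, k, &alpha,
                               dA, CUDA_R_16F, m, elemsA,
                               dB, CUDA_R_16F, k, elemsB,
                               &beta,
                               dC, CUDA_R_32F, m, elemsC,
                               batch, CUBLAS_COMPUTE_32F,
                               CUBLAS_GEMM_DEFAULT);

    cudaMemcpy(hC.data(), dC, hC.size() * sizeof(float), cudaMemcpyDeviceToHost);
    printf("C[0] = %.1f (expected %d)\n", hC[0], k);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
~~~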

Keywords: GPGPU, V100, A100, H100, Grace, GH200 Grace Hopper, MI100, MI200, Ponte Vecchio, Data Center GPU Max, BR100, CUDA, HIP, DPC++, Fortran, performance, HPC, AI, deep learning

2020 Mathematics Subject Classification: 65Y05; 68M20
MSC-2020 65-XX: Numerical analysis
MSC-2020 65Yxx: Computer aspects of numerical algorithms
MSC-2020 65Y05: Parallel numerical computation

Acknowledgments: The author thanks Alexander Malyavko (NSTU) for assistance in preparing the text of the article.

For citation: Mikhail B. Kuzminsky. New generation of GPGPU and related hardware: computing systems microarchitecture and performance from servers to supercomputers. Program Systems: Theory and Applications, 2024, 15:2, pp. 139–473. (In Russ., in Engl.). https://psta.psiras.ru/2024/2_139-473.

Full text of bilingual article (PDF): https://psta.psiras.ru/read/psta2024_2_139-473.pdf.

English part of bilingual article (PDF): https://psta.psiras.ru/read/psta2024_2_139-473-en.pdf.

The article was submitted 16.10.2023; approved after reviewing 24.01.2024; accepted for publication 01.03.2024; published online 28.06.2024.

© Kuzminsky M. B., 2024
Editorial address: Ailamazyan Program Systems Institute of the Russian Academy of Sciences, Peter the First Street 4«a», Veskovo village, Pereslavl area, Yaroslavl region, 152021 Russia; Phone: +7(4852) 695-228; E-mail: ; Website:  http://psta.psiras.ru