TY - JOUR
T1 - Performance study of sustained petascale direct numerical simulation on Cray XC40 systems
AU - Hadri, Bilel
AU - Parsani, Matteo
AU - Hutchinson, Maxwell
AU - Heinecke, Alexander
AU - Dalcin, Lisandro
AU - Keyes, David E.
N1 - Acknowledgements: The research reported in this paper was funded by King Abdullah University of Science and Technology (KAUST) in Thuwal, Saudi Arabia. We are thankful for the computing resources of the Supercomputing Laboratory and the Extreme Computing Research Center at KAUST; the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231; and the Trinity project managed and operated by Los Alamos National Laboratory and Sandia National Laboratories.
PY - 2020/3/17
Y1 - 2020/3/17
AB - We present a comprehensive performance study of highly efficient, extreme-scale direct numerical simulations of secondary flows, using an optimized version of Nek5000. Our investigations are conducted on several Cray XC40 systems, using a very high-order spectral element method. Single-node efficiency is achieved by auto-generated assembly implementations of small matrix multiplies and key vector-vector operations, streaming lossless I/O compression, aggressive loop merging, and selective single-precision evaluations. Comparative studies at scale across three Cray XC40 systems, Trinity (LANL), Cori (NERSC), and Shaheen II (KAUST), show that the Cray programming environment, network configuration, parallel file system, and burst buffer all have a major impact on performance. All three systems have similar hardware, with similar CPU nodes and parallel file systems, but they differ in theoretical peak network bandwidth, operating system, and programming environment version. Our study reveals how these seemingly slight configuration differences can be critical to application performance. We also find that on 9216 nodes (294,912 cores) of Trinity the application sustains petascale performance, as well as 50% of peak memory bandwidth over the entire solver (500 TB/s in aggregate). On 3072 Xeon Phi nodes of Cori, we reach 378 TFLOP/s with an aggregate bandwidth of 310 TB/s, corresponding to a time-to-solution 2.11× faster than that obtained with the same number of (dual-socket) Xeon nodes.
UR - http://hdl.handle.net/10754/662204
UR - https://onlinelibrary.wiley.com/doi/abs/10.1002/cpe.5725
UR - http://www.scopus.com/inward/record.url?scp=85082840036&partnerID=8YFLogxK
U2 - 10.1002/cpe.5725
DO - 10.1002/cpe.5725
M3 - Article
SN - 1532-0626
JO - Concurrency and Computation: Practice and Experience
JF - Concurrency and Computation: Practice and Experience
ER -