TY - JOUR
T1 - Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers
AU - Wu, Xingfu
AU - Taylor, Valerie
N1 - KAUST Repository Item: Exported on 2020-10-01
Acknowledged KAUST grant number(s): KUS-I1-010-01
Acknowledgements: This work is supported by NSF grant CNS-0911023 and Award No. KUS-I1-010-01 made by King Abdullah University of Science and Technology (KAUST). The authors would like to acknowledge the Argonne Leadership Computing Facility for the use of BlueGene/P under the DOE INCITE project "Performance Evaluation and Analysis Consortium End Station", the SDSC for the use of DataStar P655 under TeraGrid project TG-ASC040031, and the TAMU Supercomputing Facilities for the use of Hydra. We would also like to thank Stephane Ethier from Princeton Plasma Physics Laboratory and Shirley Moore from the University of Tennessee for providing the GTC code.
This publication acknowledges KAUST support, but has no KAUST affiliated authors.
PY - 2013/12
Y1 - 2013/12
AB - In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict and analyze the performance of MPI, OpenMP and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P. We use the STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these supercomputers, because the measured sustained memory bandwidth provides insight into the bandwidth a system should sustain for scientific applications with the same amount of workload per core. In addition to these benchmarks, we use a weak-scaling, large-scale hybrid MPI/OpenMP scientific application, the Gyrokinetic Toroidal Code (GTC) for magnetic fusion, to validate our performance model of the hybrid application on these supercomputers. The validation results show an error rate of less than 7.77% in predicting the performance of the hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.
UR - http://hdl.handle.net/10754/599162
UR - https://linkinghub.elsevier.com/retrieve/pii/S0022000013000639
UR - http://www.scopus.com/inward/record.url?scp=84880573133&partnerID=8YFLogxK
U2 - 10.1016/j.jcss.2013.02.005
DO - 10.1016/j.jcss.2013.02.005
M3 - Article
SN - 0022-0000
VL - 79
SP - 1256
EP - 1268
JO - Journal of Computer and System Sciences
JF - Journal of Computer and System Sciences
IS - 8
ER -