TY - JOUR
T1 - Fast parallel multidimensional FFT using advanced MPI
AU - Dalcin, Lisandro
AU - Mortensen, Mikael
AU - Keyes, David E.
N1 - KAUST Repository Item: Exported on 2020-10-01
Acknowledgements: M. Mortensen acknowledges support from the 4DSpace Strategic Research Initiative at the University of Oslo, Norway. L. Dalcin and D.E. Keyes acknowledge support from King Abdullah University of Science and Technology (KAUST), Saudi Arabia and the KAUST Supercomputing Laboratory, Saudi Arabia for the use of the Shaheen supercomputer.
PY - 2019/3/11
Y1 - 2019/3/11
N2 - We present a new method for performing global redistributions of multidimensional arrays essential to parallel fast Fourier (or similar) transforms. Traditional methods use standard all-to-all collective communication of contiguous memory buffers, thus necessarily requiring local data realignment steps interspersed between redistribution and transform steps. Instead, our method takes advantage of subarray datatypes and generalized all-to-all scatter/gather from the MPI-2 standard to communicate discontiguous memory buffers, effectively eliminating the need for local data realignments. Although generalized all-to-all communication of discontiguous data is generally slower, our proposal economizes on local work. For a range of strong and weak scaling tests, we found the overall performance of our method to be on par with, and often better than, well-established libraries such as MPI-FFTW, P3DFFT, and 2DECOMP&FFT. We provide compact routines implemented at the highest possible level using the MPI bindings for the C programming language. These routines apply to any global redistribution, over any two directions of a multidimensional array, decomposed on arbitrary Cartesian processor grids (1D slabs, 2D pencils, or even higher-dimensional decompositions). The high-level implementation makes the code easy to read, maintain, and eventually extend. Our approach enables future speedups from optimizations in the internal datatype handling engines within MPI implementations.
UR - http://hdl.handle.net/10754/653025
UR - https://www.sciencedirect.com/science/article/pii/S074373151830306X
UR - http://www.scopus.com/inward/record.url?scp=85063063901&partnerID=8YFLogxK
U2 - 10.1016/j.jpdc.2019.02.006
DO - 10.1016/j.jpdc.2019.02.006
M3 - Article
SN - 0743-7315
VL - 128
SP - 137
EP - 150
JO - Journal of Parallel and Distributed Computing
JF - Journal of Parallel and Distributed Computing
ER -