TY - CONF
T1 - Scalable fast multipole accelerated vortex methods
AU - Hu, Qi
AU - Gumerov, Nail A.
AU - Yokota, Rio
AU - Barba, Lorena A.
AU - Duraiswami, Ramani
PY - 2014/5
Y1 - 2014/5
AB - The fast multipole method (FMM) is often used to accelerate the calculation of particle interactions in particle-based methods for simulating incompressible flows. To evaluate the most time-consuming kernels - the Biot-Savart equation and the stretching term of the vorticity equation - we mathematically reformulate them so that only two Laplace scalar potentials are used instead of six, which automatically ensures a divergence-free far-field computation. Based on this formulation, we develop a new FMM-based vortex method on heterogeneous architectures, which distributes the work between multicore CPUs and GPUs to best utilize the hardware resources and achieve excellent scalability. The algorithm uses new data structures that dynamically manage inter-node communication and load balancing efficiently, with only a small parallel construction overhead, and it scales to large clusters with both strong and weak scalability. A careful error and timing trade-off analysis is also performed for the cutoff functions induced by the vortex particle method. Our implementation can perform one time step of the velocity+stretching calculation for one billion particles on 32 nodes in 55.9 seconds, which yields 49.12 Tflop/s.
UR - http://hdl.handle.net/10754/575821
UR - http://ieeexplore.ieee.org/document/6969486/
UR - http://www.scopus.com/inward/record.url?scp=84918821321&partnerID=8YFLogxK
U2 - 10.1109/IPDPSW.2014.110
DO - 10.1109/IPDPSW.2014.110
M3 - Conference contribution
SN - 9780769552088
SP - 966
EP - 975
BT - 2014 IEEE International Parallel & Distributed Processing Symposium Workshops
PB - Institute of Electrical and Electronics Engineers (IEEE)
ER -