TY - GEN
T1 - A Fault Tolerant Implementation for a Massively Parallel Seismic Framework
AU - Kayum, Suha N.
AU - Alsalim, Hussain
AU - Tonellot, Thierry Laurent
AU - Momin, Ali
N1 - KAUST Repository Item: Exported on 2021-06-30
Acknowledgements: The authors acknowledge the support of the KAUST Supercomputing Laboratory and the usage of KAUST Shaheen II supercomputer for the runs presented.
This publication acknowledges KAUST support, but has no KAUST affiliated authors.
PY - 2020/9/22
Y1 - 2020/9/22
N2 - An increase in the acquisition of seismic data volumes has resulted in applications processing seismic data running for weeks or months on large supercomputers. A fault occurring during processing would jeopardize the fidelity and quality of the results, hence necessitating a resilient application. GeoDRIVE is a High-Performance Computing (HPC) software framework tailored to massive seismic applications and supercomputers. A fault tolerance mechanism that capitalizes on Boost.Asio for network communication is presented and tested quantitatively and qualitatively by simulating faults using fault injection. Resource provisioning is also illustrated by adding more resources to a job during simulation. Finally, a large-scale job of 2,500 seismic experiments and 358 billion grid elements is executed on 32,000 cores. Subsets of nodes are killed at different times, validating the resilience of the mechanism at large scale. While the implementation is demonstrated in a seismic application context, it can be tailored to any HPC application with embarrassingly parallel properties.
UR - http://hdl.handle.net/10754/669820
UR - https://ieeexplore.ieee.org/document/9286143/
UR - http://www.scopus.com/inward/record.url?scp=85099390236&partnerID=8YFLogxK
U2 - 10.1109/HPEC43674.2020.9286143
DO - 10.1109/HPEC43674.2020.9286143
M3 - Conference contribution
SN - 9781728192192
BT - 2020 IEEE High Performance Extreme Computing Conference (HPEC)
PB - IEEE
ER -