High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

Thierry Tonellot, Vincent Etienne, Ewenet Gashawbeza, Ernesto Sandoval Curiel, Azizur Khan, Saber Feki, Samuel Kortas

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


    A combination of shallow and deep water, islands, and coral reefs is among the main features contributing to the complexity of subsalt seismic exploration in the Red Sea transition zone. These features often degrade seismic images. State-of-the-art ocean bottom acquisition technologies are therefore required to record seismic data with optimal fold and offset, along with advanced processing and imaging techniques. Numerical simulations of such complex seismic data can help improve acquisition design and can also help in customizing, validating, and benchmarking the processing and imaging workflows that will be applied to the field data. Consequently, realistic simulation of wave propagation is a computationally intensive process requiring a realistic model and an efficient 3D wave equation solver. Large-scale computing resources are also required to meet a turnaround time compatible with a production time frame. In this work, we present the numerical simulation of an ocean bottom seismic survey to be acquired in the Red Sea transition zone starting in summer 2016. The survey's acquisition geometry comprises nearly 300,000 unique shot locations and 21,000 unique receiver locations, covering about 760 km². Using well log measurements and legacy 2D seismic lines in this area, a 3D P-wave velocity model was built, with a maximum depth of 7 km. The model was sampled at 10 m in each direction, resulting in more than 5 billion cells. Wave propagation in this model was simulated using a 3D finite difference solver in the time domain, based on a staggered grid velocity-pressure formulation of acoustodynamics. To ensure that the resulting data could be generated sufficiently fast, the King Abdullah University of Science and Technology (KAUST) supercomputer Shaheen II, a Cray XC40, was used.
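The staggered grid velocity-pressure formulation mentioned above can be illustrated with a minimal 2D sketch. This is not the paper's production kernel: the grid dimensions, constant water model, second-order stencil, and Ricker source parameters below are illustrative assumptions; the only value taken from the abstract is the 10 m grid spacing.

```python
import numpy as np

# Illustrative 2D staggered-grid velocity-pressure acoustic solver.
# All model/source parameters are assumptions, except the 10 m spacing.
nx, nz = 200, 200
dx = 10.0                          # grid spacing (m), as in the abstract
vp = np.full((nz, nx), 1500.0)     # constant P-wave velocity (assumed)
rho = np.full((nz, nx), 1000.0)    # constant density (assumed)
dt = 0.4 * dx / vp.max()           # conservative CFL-stable time step
K = rho * vp**2                    # bulk modulus

p = np.zeros((nz, nx))             # pressure on integer nodes
vx = np.zeros((nz, nx - 1))        # x-velocity, staggered half a cell in x
vz = np.zeros((nz - 1, nx))        # z-velocity, staggered half a cell in z

nt = 300
src_z, src_x = nz // 2, nx // 2
f0 = 25.0                          # Ricker wavelet central frequency (Hz)
t = np.arange(nt) * dt
arg = (np.pi * f0 * (t - 1.0 / f0)) ** 2
ricker = (1.0 - 2.0 * arg) * np.exp(-arg)

for it in range(nt):
    # velocity updates from pressure gradients (2nd-order stencil)
    vx += -dt / (0.5 * (rho[:, 1:] + rho[:, :-1])) * (p[:, 1:] - p[:, :-1]) / dx
    vz += -dt / (0.5 * (rho[1:, :] + rho[:-1, :])) * (p[1:, :] - p[:-1, :]) / dx
    # pressure update from the divergence of the velocity field
    div = np.zeros((nz, nx))
    div[:, 1:-1] += (vx[:, 1:] - vx[:, :-1]) / dx
    div[1:-1, :] += (vz[1:, :] - vz[:-1, :]) / dx
    p += -dt * K * div
    p[src_z, src_x] += ricker[it]  # inject the source wavelet
```

A production solver would add absorbing boundaries, higher-order stencils, and a 3D domain decomposition, but the leapfrog update of velocities and pressure on interleaved grids is the core of the method.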
A total of 21,000 three-component (pressure and vertical and horizontal velocity) common receiver gathers with a 50 Hz maximum frequency were computed in less than three days. After careful optimization of the finite difference kernel, each gather was computed at 184 gigaflops, on average. Up to 6,103 nodes could be used during the computation, resulting in a peak computation speed greater than 1.11 petaflops. The synthetic seismic data for the planned survey geometry were available one month before the actual acquisition, allowing for early, full-scale validation of our processing and imaging workflows. Moreover, the availability of a massive supercomputer such as Shaheen II enables fast reverse time migration (RTM) and full waveform inversion, and therefore a more accurate velocity model estimation for future work.
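The quoted grid size can be checked with back-of-envelope arithmetic from the numbers in the abstract (760 km² area, 7 km maximum depth, 10 m sampling). This sketch assumes the model footprint equals the survey area; the actual model may be padded beyond it.

```python
# Rough consistency check of the "more than 5 billion cells" figure.
area_km2 = 760.0    # survey area from the abstract
depth_km = 7.0      # maximum model depth from the abstract
dx_m = 10.0         # grid spacing from the abstract

horiz_cells = area_km2 * 1e6 / dx_m**2   # cells per depth slice: 7.6e6
n_layers = depth_km * 1e3 / dx_m         # depth layers: 700
total_cells = horiz_cells * n_layers
print(f"{total_cells:.2e}")              # about 5.3e9 cells
```

This lands just above 5 billion cells, consistent with the abstract's figure.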
    Original language: English (US)
    Title of host publication: SPE Middle East Oil & Gas Show and Conference
    Publisher: Society of Petroleum Engineers (SPE)
    State: Published - Mar 6 2017

