Scaling the “Memory Wall” for Multi-Dimensional Seismic Processing with Algebraic Compression on Cerebras CS-2 Systems

Hatem Ltaief, Yuxi Hong, Leighton Wilson, Mathias Jacquelin, Matteo Ravasi, David E. Keyes

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

4 Scopus citations

Abstract

We exploit the high memory bandwidth of AI-customized Cerebras CS-2 systems for seismic processing. By leveraging low-rank matrix approximation, we fit memory-hungry seismic applications onto memory-austere SRAM wafer-scale hardware, thus addressing a challenge arising in many wave-equation-based algorithms that rely on Multi-Dimensional Convolution (MDC) operators. Exploiting sparsity inherent in seismic data in the frequency domain, we implement embarrassingly parallel tile low-rank matrix-vector multiplications (TLR-MVM), which account for most of the elapsed time in MDC operations, to successfully solve the Multi-Dimensional Deconvolution (MDD) inverse problem. By reducing memory footprint along with arithmetic complexity, we fit a standard seismic benchmark dataset into the small local memories of Cerebras processing elements. Deploying TLR-MVM execution onto 48 CS-2 systems in support of MDD gives a sustained memory bandwidth of 92.58 PB/s on 35,784,000 processing elements, a significant milestone that highlights the capabilities of AI-customized architectures to enable a new generation of seismic algorithms that will empower multiple technologies of our low-carbon future.
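To illustrate the core idea behind the TLR-MVM kernel described above, the sketch below compresses each tile of a matrix with a truncated SVD and then performs the matrix-vector product tile by tile as y_i += U_ij (V_ij x_j). This is a minimal NumPy model of the general tile low-rank technique, not the authors' CS-2 implementation; the tile size, tolerance, and function names are illustrative assumptions.

```python
import numpy as np

def compress_tiles(A, tile, tol=1e-8):
    """Compress each square tile of A via truncated SVD (tile low-rank, TLR).

    Returns a dict mapping (i, j) tile indices to (U, Vt) factors such that
    the tile is approximately U @ Vt. Tile size and tolerance are
    illustrative choices, not values from the paper.
    """
    n = A.shape[0]
    nt = n // tile
    factors = {}
    for i in range(nt):
        for j in range(nt):
            T = A[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile]
            U, s, Vt = np.linalg.svd(T, full_matrices=False)
            # Keep singular values above a relative tolerance.
            r = max(1, int(np.sum(s > tol * s[0])))
            factors[(i, j)] = (U[:, :r] * s[:r], Vt[:r, :])
    return factors, nt, tile

def tlr_mvm(factors, nt, tile, x):
    """y = A @ x using compressed tiles: y_i += U_ij @ (Vt_ij @ x_j).

    Each tile's contribution is independent, which is what makes the
    operation embarrassingly parallel across processing elements.
    """
    y = np.zeros(nt * tile)
    for (i, j), (U, Vt) in factors.items():
        y[i * tile:(i + 1) * tile] += U @ (Vt @ x[j * tile:(j + 1) * tile])
    return y
```

With a loose tolerance, tiles whose spectra decay quickly store far fewer entries than the dense tile, which is how the compressed operator fits into the small per-PE SRAM; with a tight tolerance, the product matches the dense result.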
Original language: English (US)
Title of host publication: ACM/IEEE International Conference for High Performance Computing, Networking, Storage, and Analysis (SC'23)
Publisher: ACM/IEEE
State: Published - Sep 11 2023
