Leveraging GPUs for matrix-free optimization with PyLops

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The use of Graphics Processing Units (GPUs) for scientific computing has become mainstream in the last decade. Applications ranging from deep learning to seismic modelling have benefited from the increase in computational efficiency compared to their equivalent CPU-based implementations. Since many inverse problems in geophysics rely on similar core computations – e.g. dense linear algebra operations, convolutions, FFTs – it is reasonable to expect similar performance gains if GPUs are also leveraged in this context. In this paper we discuss how we have been able to take PyLops, a Python library for matrix-free linear algebra and optimization originally developed for single-node CPUs, and create a fully compatible GPU backend with the help of CuPy and cuSignal. A benchmark suite of our core operators shows that an average 65x speed-up can be achieved when running computations on a V100 GPU. Moreover, by careful modification of the inner workings of the library, end users can obtain such a performance gain at virtually no cost: minimal code changes are required when switching between the CPU and GPU backends, mostly consisting of moving the data vector to the GPU device prior to solving an inverse problem with one of PyLops’ solvers.
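The backend-switching pattern the abstract describes rests on the fact that CuPy mirrors NumPy's array API, so a matrix-free operator written against that shared API runs unchanged on either backend. The sketch below illustrates the idea with a minimal diagonal operator; it is an illustrative toy, not PyLops' actual `LinearOperator` implementation, and the class name `Diagonal` is borrowed only for familiarity.

```python
import numpy as np

class Diagonal:
    """Minimal matrix-free diagonal operator (illustrative sketch,
    not PyLops' actual implementation)."""
    def __init__(self, diag):
        self.diag = diag

    def matvec(self, x):
        # Elementwise product: the same code works for NumPy or CuPy
        # arrays, since both implement the same array API.
        return self.diag * x

    def rmatvec(self, y):
        # Adjoint of a diagonal operator is its complex conjugate.
        return self.diag.conj() * y

# CPU usage with NumPy arrays
d = np.arange(1.0, 5.0)
Dop = Diagonal(d)
y = Dop.matvec(np.ones(4))  # -> array([1., 2., 3., 4.])

# GPU usage (requires CuPy and a CUDA device; shown as comments):
# import cupy as cp
# Dop_gpu = Diagonal(cp.asarray(d))   # move operator data to the GPU
# y_gpu = Dop_gpu.matvec(cp.ones(4))  # computation now runs on device
```

In PyLops the same principle applies at the solver level: once the data vector (and any operator-internal arrays) live on the GPU, the solver's matrix-vector products execute there without further changes.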
Original language: English (US)
Title of host publication: Fifth EAGE Workshop on High Performance Computing for Upstream
Publisher: European Association of Geoscientists & Engineers
DOIs
State: Published - 2021
