Batched triangular dense linear algebra kernels for very small matrix sizes on GPUs

Ali Charara, David E. Keyes, Hatem Ltaief

Research output: Contribution to journal › Article › peer-review

7 Scopus citations


Batched dense linear algebra kernels are becoming ubiquitous in scientific applications, ranging from tensor contractions in deep learning to data compression in hierarchical low-rank matrix approximation. Within a single API call, these kernels are capable of simultaneously launching up to thousands of similar matrix computations, removing the expensive overhead of multiple API calls while increasing the occupancy of the underlying hardware. A challenge is that, for the existing hardware landscape (x86, GPUs, etc.), only a subset of the required batched operations is implemented by the vendors, with limited support for very small problem sizes. We describe the design and performance of a new class of batched triangular dense linear algebra kernels on very small data sizes (up to 256) using single and multiple GPUs. By deploying recursive formulations, stressing the register usage, maintaining data locality, reducing thread synchronization, and fusing successive kernel calls, the new batched kernels outperform existing state-of-the-art implementations.
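The recursive formulation mentioned in the abstract can be illustrated for a triangular solve (TRSM-like) operation: the triangular matrix is split into quadrants, the solve recurses on the diagonal blocks, and the off-diagonal block contributes a matrix-multiply (GEMM) update, which is where most of the flops concentrate. The sketch below is a minimal NumPy illustration of this idea, not the paper's GPU implementation; the function name, cutoff parameter, and base-case solver are all illustrative assumptions.

```python
import numpy as np

def recursive_trsm(A, B, cutoff=8):
    """Illustrative recursive solve of A @ X = B, with A lower triangular.

    This mimics the blocked recursive formulation: split A into quadrants,
    solve on the diagonal blocks, and apply a GEMM update in between.
    (Hypothetical sketch; the paper's kernels operate on GPU registers.)
    """
    n = A.shape[0]
    if n <= cutoff:
        # Base case: direct solve on a tiny triangular block.
        return np.linalg.solve(A, B)
    m = n // 2
    A11, A21, A22 = A[:m, :m], A[m:, :m], A[m:, m:]
    # Recurse on the top-left diagonal block.
    X1 = recursive_trsm(A11, B[:m], cutoff)
    # GEMM update: fold the off-diagonal block into the remaining RHS.
    B2 = B[m:] - A21 @ X1
    # Recurse on the bottom-right diagonal block.
    X2 = recursive_trsm(A22, B2, cutoff)
    return np.vstack([X1, X2])
```

In the paper's setting, many such small solves are batched into a single kernel launch and the recursion bottoms out in register-resident computations, but the data-flow of the recursion is the same as in this sketch.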
Original language: English (US)
Pages (from-to): 1-28
Number of pages: 28
Journal: ACM Transactions on Mathematical Software
Issue number: 2
State: Published - May 6 2019
