AMA: Asynchronous management of accelerators for task-based programming models

Judit Planas, Rosa M. Badia, Eduard Ayguade, Jesus Labarta

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

6 Scopus citations

Abstract

Computational science has benefited in recent years from emerging accelerators that increase the performance of scientific simulations, but programming these devices is a demanding task. This paper presents AMA, a set of optimization techniques to efficiently manage multi-accelerator systems. AMA maximizes the overlap of computation and communication in a blocking-free way, so that the spare time can be used for other work while waiting for device operations to complete. Implemented on top of a task-based framework, AMA reaches the performance of a hand-tuned native CUDA code in an experimental evaluation on a quad-GPU node, with the advantage of fully hiding the device management. In addition, it obtains a speed-up of more than 2x with respect to the original framework implementation.
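
The abstract describes blocking-free overlap of computation and communication, with spare time used for other work while device operations are in flight. The following is a minimal sketch of that general idea using standard CUDA asynchronous copies, streams, and non-blocking event polling; the kernel, helper names, and problem size are illustrative assumptions and do not reproduce the paper's actual AMA implementation.

// Hypothetical sketch: overlap host work with device transfers and a kernel
// by polling a CUDA event instead of blocking on synchronization.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale(float *d, int n, float f) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= f;
}

static void do_other_host_work(void) {
    // Placeholder for useful host-side work (e.g. preparing further tasks).
}

int main(void) {
    const int n = 1 << 20;
    float *h, *d;
    cudaMallocHost(&h, n * sizeof(float));   // pinned memory enables async copies
    cudaMalloc(&d, n * sizeof(float));
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    cudaStream_t stream;
    cudaEvent_t done;
    cudaStreamCreate(&stream);
    cudaEventCreateWithFlags(&done, cudaEventDisableTiming);

    // Enqueue transfer, kernel, and copy-back asynchronously, then mark completion.
    cudaMemcpyAsync(d, h, n * sizeof(float), cudaMemcpyHostToDevice, stream);
    scale<<<(n + 255) / 256, 256, 0, stream>>>(d, n, 2.0f);
    cudaMemcpyAsync(h, d, n * sizeof(float), cudaMemcpyDeviceToHost, stream);
    cudaEventRecord(done, stream);

    // Blocking-free wait: poll the event and do other work in the meantime.
    while (cudaEventQuery(done) == cudaErrorNotReady)
        do_other_host_work();

    printf("h[0] = %f\n", h[0]);              // expect 2.0
    cudaEventDestroy(done);
    cudaStreamDestroy(stream);
    cudaFree(d);
    cudaFreeHost(h);
    return 0;
}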
Original language: English (US)
Title of host publication: Procedia Computer Science
Publisher: Elsevier BV
Pages: 130-139
Number of pages: 10
DOIs
State: Published - Jun 1 2015
Externally published: Yes
