TY - GEN
T1 - Optimizing Strassen matrix multiply on GPUs
AU - ul Hasan Khan, Ayaz
AU - Al-Mouhamed, Mayez
AU - Fatayer, Allam
N1 - KAUST Repository Item: Exported on 2020-10-01
Acknowledgements: The authors would like to acknowledge the support provided by King Abdulaziz City for Science and Technology (KACST) through the Science & Technology Unit at King Fahd University of Petroleum & Minerals (KFUPM) for funding this work through project No. 12-INF3008-04 as part of the National Science, Technology and Innovation Plan. We are also very thankful to King Abdullah University of Science and Technology (KAUST) for providing access to their K20X GPU cluster to run the experiments.
This publication acknowledges KAUST support, but has no KAUST-affiliated authors.
PY - 2015/6
Y1 - 2015/6
AB - © 2015 IEEE. Many-core systems are designed primarily for applications with large data parallelism. Strassen Matrix Multiply (MM) can be formulated as a depth-first (DFS) traversal of a recursion tree in which all cores work in parallel on computing each of the NxN sub-matrices, which reduces storage at the cost of large data motion to gather and aggregate the results. We propose Strassen and Winograd algorithms (S-MM and W-MM) based on three optimizations: a set of basic algebra functions to reduce overhead, invocation of an efficient library (CUBLAS 5.5), and parameter tuning of a parametric kernel to improve resource occupancy. On GPUs, W-MM and S-MM with one recursion level outperform the CUBLAS 5.5 library, running up to twice as fast for large arrays satisfying N>=2048 and N>=3072, respectively. Compared to the NVIDIA SDK library, S-MM and W-MM achieve speedups of 20x to 80x for the above arrays. The proposed approach can be used to enhance the performance of the CUBLAS and MKL libraries.
UR - http://hdl.handle.net/10754/599106
UR - http://ieeexplore.ieee.org/document/7176172/
UR - http://www.scopus.com/inward/record.url?scp=84947080589&partnerID=8YFLogxK
U2 - 10.1109/SNPD.2015.7176172
DO - 10.1109/SNPD.2015.7176172
M3 - Conference contribution
SN - 9781479986767
BT - 2015 IEEE/ACIS 16th International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD)
PB - Institute of Electrical and Electronics Engineers (IEEE)
ER -