TY - GEN
T1 - Accelerating Matrix-Vector Multiplication on Hierarchical Matrices Using Graphical Processing Units
AU - Boukaram, W.
AU - Ltaief, H.
AU - Litvinenko, Alexander
AU - Abdelfattah, A.
AU - Keyes, David E.
N1 - KAUST Repository Item: Exported on 2020-10-01
Acknowledgements: SRI Uncertainty Quantification Center at KAUST,
Extreme Computing Research Center at KAUST
PY - 2015/3/25
Y1 - 2015/3/25
N2 - Large dense matrices arise from the discretization of
many physical phenomena in the computational sciences. In statistics, very large dense covariance matrices are used to describe random fields and processes; one can, for instance, model the distribution of dust particles in the atmosphere, the concentration of mineral resources in the Earth's crust, or an uncertain permeability coefficient in reservoir modeling.
As the problem size grows, storing and computing with
the full dense matrix becomes prohibitively expensive,
both in computational complexity and in physical
memory requirements. Fortunately, these matrices can often
be approximated by a class of data-sparse matrices called
hierarchical matrices (H-matrices), in which suitable sub-blocks of the matrix
are approximated by low-rank matrices. H-matrices can be stored in
memory that grows linearly with the problem size.
In addition, arithmetic operations on these H-matrices,
such as matrix-vector multiplication, can be completed
in almost linear time. The H-matrix technique was originally developed to approximate stiffness matrices arising from partial differential and integral equations.
Parallelizing these arithmetic operations on the GPU is
the focus of this work; we present results for the
matrix-vector operation on the GPU using the KSPARSE library.
UR - http://hdl.handle.net/10754/347275
M3 - Conference contribution
BT - International Computational Science and Engineering Conference (ICSEC15)
PB - Extended abstract to the International Computational Science and Engineering Conference (ICSEC15)
ER -