TY - JOUR
T1 - Streaming Overlay Architecture for Lightweight LSTM Computation on FPGA SoCs
AU - Ioannou, Lenos
AU - Fahmy, Suhaib A.
N1 - Acknowledgements: This work was supported in part by the U.K. Engineering and Physical Sciences Research Council (EPSRC) under grant EP/N509796/1 and in part by a Royal Academy of Engineering/The Leverhulme Trust Research Fellowship.
PY - 2022/6/21
Y1 - 2022/6/21
AB - Long Short-Term Memory (LSTM) networks, and Recurrent Neural Networks (RNNs) in general, have demonstrated their suitability in many time series data applications, especially in Natural Language Processing (NLP). Computationally, LSTMs introduce dependencies on previous outputs in each layer that complicate their computation and the design of custom computing architectures, compared to traditional feed-forward networks. Most neural network acceleration work has focused on optimising the core matrix-vector operations on highly capable FPGAs in server environments. Research that considers the embedded domain has often been unsuitable for streaming inference, relying heavily on batch processing to achieve high throughput. Moreover, many existing accelerator architectures have not focused on fully exploiting the underlying FPGA architecture, resulting in designs that achieve lower operating frequencies than the theoretical maximum. This paper presents a flexible overlay architecture for LSTMs on FPGA SoCs that is built around a streaming dataflow arrangement, uses DSP block capabilities directly, and is tailored to keep parameters within the architecture while moving input data serially to mitigate external memory access overheads. The architecture is designed as an overlay that can be configured to implement alternative models or update model parameters at runtime. It achieves a higher operating frequency and higher performance than other lightweight LSTM accelerators, as demonstrated in an FPGA SoC implementation.
UR - http://hdl.handle.net/10754/679246
UR - https://dl.acm.org/doi/10.1145/3543069
DO - 10.1145/3543069
M3 - Article
SN - 1936-7406
JO - ACM Transactions on Reconfigurable Technology and Systems
JF - ACM Transactions on Reconfigurable Technology and Systems
ER -