TY - GEN
T1 - Training-aware Low Precision Quantization in Spiking Neural Networks
AU - Shymyrbay, Ayan
AU - Fouda, Mohammed E.
AU - Eltawil, Ahmed
PY - 2022/3/7
Y1 - 2022/3/7
AB - Spiking neural networks (SNNs) have become an attractive alternative to conventional artificial neural networks (ANNs) due to their temporal information processing capability, energy efficiency, and high biological plausibility. Yet, their computational and memory costs still restrict them from being widely deployed on portable devices. Quantization of SNNs, which converts full-precision synaptic weights into low-bit versions, has emerged as one of the solutions. The development of quantization techniques is far more advanced in the ANN domain than in the SNN domain. In this work, we adapt Learned Step Size Quantization (LSQ), one of the promising ANN quantization methods, to SNNs. Furthermore, we extend this technique to binary quantization of SNNs. Our analysis shows that the proposed method for SNN quantization yields a negligible drop in accuracy and a significant reduction in the required memory.
UR - http://hdl.handle.net/10754/690669
UR - https://ieeexplore.ieee.org/document/10051957/
UR - http://www.scopus.com/inward/record.url?scp=85150169631&partnerID=8YFLogxK
U2 - 10.1109/IEEECONF56349.2022.10051957
DO - 10.1109/IEEECONF56349.2022.10051957
M3 - Conference contribution
SN - 9781665459068
SP - 1147
EP - 1151
BT - 2022 56th Asilomar Conference on Signals, Systems, and Computers
PB - IEEE
ER -