Training-aware Low Precision Quantization in Spiking Neural Networks

Ayan Shymyrbay, Mohammed E. Fouda, Ahmed Eltawil

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Spiking neural networks (SNNs) have become an attractive alternative to conventional artificial neural networks (ANNs) due to their temporal information processing capability, energy efficiency, and high biological plausibility. Yet, their computational and memory costs still prevent their wide deployment on portable devices. Quantization of SNNs, which converts full-precision synaptic weights into low-bit representations, has emerged as one solution. The development of quantization techniques is far more advanced in the ANN domain than in the SNN domain. In this work, we adapt Learned Step Size Quantization (LSQ), a promising ANN quantization method, to SNNs. Furthermore, we extend this technique to binary quantization of SNNs. Our analysis shows that the proposed method for SNN quantization yields a negligible drop in accuracy and a significant reduction in memory requirements.
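For context, the sketch below illustrates an LSQ-style weight quantizer in PyTorch, following the general LSQ formulation (learned step size with a straight-through rounding estimator and a scaled step-size gradient). It is an illustrative assumption rather than the paper's exact SNN adaptation or its binary extension; the names LSQQuantizer, grad_scale, and round_ste are hypothetical.

import torch
import torch.nn as nn


def grad_scale(x, scale):
    # Keep the value of x but scale its gradient (LSQ step-size gradient trick).
    return (x - x * scale).detach() + x * scale


def round_ste(x):
    # Round to the nearest integer, passing the gradient straight through.
    return (x.round() - x).detach() + x


class LSQQuantizer(nn.Module):
    # Hypothetical LSQ-style quantizer for multi-bit signed weights; the
    # paper's SNN-specific formulation may differ.
    def __init__(self, num_bits: int = 4):
        super().__init__()
        self.qn = 2 ** (num_bits - 1)        # clip bound magnitude, e.g. 8 for 4 bits
        self.qp = 2 ** (num_bits - 1) - 1    # upper clip bound, e.g. 7 for 4 bits
        self.step = nn.Parameter(torch.tensor(1.0))  # learned step size s
        self.initialized = False

    def forward(self, w: torch.Tensor) -> torch.Tensor:
        if not self.initialized:
            # Common LSQ initialization: s = 2 * mean(|w|) / sqrt(Qp).
            with torch.no_grad():
                self.step.copy_(2 * w.abs().mean() / (self.qp ** 0.5))
            self.initialized = True
        # Gradient scale g = 1 / sqrt(N_w * Qp) stabilizes step-size learning.
        g = 1.0 / ((w.numel() * self.qp) ** 0.5)
        s = grad_scale(self.step, g)
        # Quantize: scale, clip to [-Qn, Qp], round (STE), then rescale.
        return round_ste(torch.clamp(w / s, -self.qn, self.qp)) * s

In quantization-aware training, such a module would wrap each layer's weights before the spiking forward pass, so the step size s is learned jointly with the weights.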
Original language: English (US)
Title of host publication: 2022 56th Asilomar Conference on Signals, Systems, and Computers
Publisher: IEEE
Pages: 1147-1151
Number of pages: 5
ISBN (Print): 9781665459068
DOIs
State: Published - Mar 7 2022
