TY - CONF
T1 - Efficient Hardware Implementation for Online Local Learning in Spiking Neural Networks
AU - Guo, Wenzhe
AU - Fouda, Mohammed E.
AU - Eltawil, Ahmed
AU - Salama, Khaled N.
N1 - Acknowledgements: This work was funded by the King Abdullah University of Science and Technology (KAUST) AI Initiative, Saudi Arabia.
PY - 2022/9/5
Y1 - 2022/9/5
AB - Local learning schemes have shown promising performance in spiking neural networks and are considered a step toward more biologically plausible learning. Despite many efforts to design high-performance neuromorphic systems, a fast and efficient neuromorphic hardware system is still missing. This work proposes a scalable, fast, and efficient spiking neuromorphic hardware system with on-chip local learning capability that achieves competitive classification accuracy. We introduce an effective, hardware-friendly local training algorithm that is compatible with sparse temporal input coding and binary random classification weights and is demonstrated to deliver competitive accuracy. The proposed digital system exploits spike sparsity in communication, parallelism in vector-matrix operations, and the locality of training errors, leading to low cost and fast training speed. Taking energy, speed, resource usage, and accuracy into consideration, our design shows 7.7× higher efficiency than a recent spiking direct feedback alignment method and 2.7× higher efficiency than the spike-timing-dependent plasticity method.
UR - http://hdl.handle.net/10754/680988
UR - https://ieeexplore.ieee.org/document/9869946/
U2 - 10.1109/aicas54282.2022.9869946
DO - 10.1109/aicas54282.2022.9869946
M3 - Conference contribution
BT - 2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (AICAS)
PB - IEEE
ER -