TY - GEN
T1 - Error-triggered Three-Factor Learning Dynamics for Crossbar Arrays
AU - Payvand, Melika
AU - Fouda, Mohammed E.
AU - Kurdahi, Fadi
AU - Eltawil, Ahmed
AU - Neftci, Emre O.
N1 - KAUST Repository Item: Exported on 2021-01-28
PY - 2020/8
Y1 - 2020/8
N2 - Recent breakthroughs suggest that local, approximate gradient descent learning is compatible with Spiking Neural Networks (SNNs). Although SNNs can be scalably implemented using neuromorphic VLSI, an architecture that can learn in situ as accurately as conventional processors is still missing. Here, we propose a subthreshold circuit architecture, designed through insights obtained from machine learning and computational neuroscience, that could achieve such accuracy. Using a surrogate gradient learning framework, we derive local, error-triggered learning dynamics compatible with crossbar arrays and the temporal dynamics of SNNs. The derivation reveals that circuits used for inference and training dynamics can be shared, which simplifies the circuit and suppresses the effects of fabrication mismatch. We present SPICE simulations on the XFAB 180 nm process, as well as large-scale simulations of spiking neural networks on event-based benchmarks, including a gesture recognition task. Our results show that the number of updates can be reduced a hundred-fold compared to the standard rule while achieving performance on par with the state of the art.
AB - Recent breakthroughs suggest that local, approximate gradient descent learning is compatible with Spiking Neural Networks (SNNs). Although SNNs can be scalably implemented using neuromorphic VLSI, an architecture that can learn in situ as accurately as conventional processors is still missing. Here, we propose a subthreshold circuit architecture, designed through insights obtained from machine learning and computational neuroscience, that could achieve such accuracy. Using a surrogate gradient learning framework, we derive local, error-triggered learning dynamics compatible with crossbar arrays and the temporal dynamics of SNNs. The derivation reveals that circuits used for inference and training dynamics can be shared, which simplifies the circuit and suppresses the effects of fabrication mismatch. We present SPICE simulations on the XFAB 180 nm process, as well as large-scale simulations of spiking neural networks on event-based benchmarks, including a gesture recognition task. Our results show that the number of updates can be reduced a hundred-fold compared to the standard rule while achieving performance on par with the state of the art.
UR - http://hdl.handle.net/10754/667062
UR - https://ieeexplore.ieee.org/document/9073998/
U2 - 10.1109/aicas48895.2020.9073998
DO - 10.1109/aicas48895.2020.9073998
M3 - Conference contribution
SN - 9781728149226
BT - 2020 2nd IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS)
PB - IEEE
ER -