Novel Knowledge Distillation to Improve Training Accuracy of Spin-based SNN

Hanrui Li, Aijaz H. Lone, Fengshi Tian, Jie Yang, Mohamad Sawan, Nazek Elatab

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Spintronics-based magnetic tunnel junction (MTJ) devices have shown the ability to operate as both synapses and spiking threshold neurons, making them well suited for hardware implementations of spiking neural networks (SNNs). They offer inherently high energy efficiency at ultra-low operating voltages owing to their nanometric size and low depinning current densities. However, training hardware-based SNNs typically incurs a significant performance loss compared with the original neural networks, due to device-to-device variations and the information deficiency that arises when weights are mapped to device synaptic conductances. Knowledge distillation is a model compression and acceleration method that transfers the learned knowledge of a large machine learning model to a smaller model with minimal loss in performance. In this paper, we propose a novel training scheme based on spike knowledge distillation that improves the training performance of a spin-based SNN (SSNN) model by transferring knowledge from a large CNN model. We propose novel distillation methodologies and demonstrate their effectiveness with detailed experiments on four datasets. The experimental results indicate that the proposed training scheme consistently improves the performance of the SSNN model by a large margin.
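For context, the sketch below illustrates the standard soft-label knowledge distillation loss (teacher-student logit matching) that such a training scheme typically builds on; it is a minimal Python/PyTorch example under assumed names and hyperparameters, not the paper's spike-specific distillation methodology.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      temperature=4.0, alpha=0.7):
    """Generic soft-label knowledge distillation loss (illustrative sketch).

    student_logits: class scores of the student (e.g. an SNN's averaged
                    spike counts or membrane potentials per class).
    teacher_logits: class scores of the large CNN teacher.
    targets:        ground-truth class indices.
    temperature and alpha are assumed hyperparameters, not values
    reported in the paper.
    """
    # Soften both distributions with the temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=1)

    # KL divergence between softened distributions; the T^2 factor keeps
    # gradient magnitudes comparable across temperatures.
    kd_loss = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2

    # Ordinary cross-entropy on the hard labels.
    ce_loss = F.cross_entropy(student_logits, targets)

    # Weighted combination of distillation and supervised terms.
    return alpha * kd_loss + (1.0 - alpha) * ce_loss
```

In a typical setup, the teacher CNN is trained first and frozen, and this combined loss is then minimized with respect to the student's parameters only.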
Original language: English (US)
Title of host publication: 2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS)
Publisher: IEEE
DOIs
State: Published - Jul 7 2023
