TY - CONF
T1 - TinyML for EEG Decoding on Microcontrollers
AU - Tragoudaras, Antonios
AU - Antoniadis, Charalampos
AU - Massoud, Yehia Mahmoud
PY - 2023/7/21
Y1 - 2023/7/21
AB - The accurate decoding of ElectroEncephaloGraphy (EEG) signals would bring us closer to understanding brain functionality, opening new pathways to treating brain impairments and devising new Brain-Computer Interface (BCI)-related applications. The impressive success of deep convolutional neural networks at extracting information from raw data in computer vision and natural language processing has motivated their investigation for EEG signal decoding. Consequently, a number of deep convolutional neural network models with state-of-the-art performance have been proposed in the literature for EEG signal decoding. However, because all these works aimed to find the model architecture with the best decoding accuracy, the model size was left unbounded in that exploration. Considering the model size in the design of deep convolutional neural networks for EEG decoding could make their implementation feasible on low-power microcontrollers with limited memory, which may be integrated into a wearable system. Thus, in this paper, we search for the most accurate deep convolutional neural network on the BCI Competition IV 2a dataset that can also fit in a microcontroller with less than 256 KB of SRAM. Specifically, we use a Neural Architecture Search (NAS) algorithm that considers, apart from the model's accuracy, the model size, the latency, and the peak memory utilization when running the model's inference. We compare our models with the model with the best decoding accuracy in the literature on the BCI Competition IV 2a dataset (the baseline). We show that the discovered models can achieve accuracy similar to the baseline model while shrinking the memory footprint during inference by a factor of ≈20×, with an average latency speedup of up to 1.7×.
UR - http://hdl.handle.net/10754/693204
UR - https://ieeexplore.ieee.org/document/10181950/
DO - 10.1109/iscas46773.2023.10181950
M3 - Conference contribution
BT - 2023 IEEE International Symposium on Circuits and Systems (ISCAS)
PB - IEEE
ER -