The accurate decoding of electroencephalography (EEG) signals would bring us closer to understanding brain function, opening new pathways to treating brain impairments and devising new Brain-Computer Interface (BCI) applications. The impressive success of deep convolutional neural networks at extracting information from raw data in computer vision and natural language processing has motivated their application to EEG signal decoding. Consequently, a number of deep convolutional neural network models with state-of-the-art performance have been proposed in the literature for EEG signal decoding. However, because these works aimed solely at finding the architecture with the best decoding accuracy, the model size was left unbounded during that exploration. Taking model size into account when designing deep convolutional neural networks for EEG decoding would make it feasible to deploy them on memory-constrained, low-power microcontrollers, such as those integrated into wearable systems. Thus, in this paper, we search for the most accurate deep convolutional neural network on the BCI Competition IV 2a dataset that can also fit in a microcontroller with less than 256 KB of SRAM. Specifically, we use a Neural Architecture Search (NAS) algorithm that considers, in addition to the model's accuracy, its size, its latency, and its peak memory utilization during inference. We compare the discovered models against the model reported to achieve the best decoding accuracy on the BCI Competition IV 2a dataset (the baseline). We show that the discovered models achieve accuracy similar to the baseline while shrinking the inference memory footprint by a factor of ≈20 and delivering an average latency speedup of up to 1.7×.
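To illustrate the kind of hardware-aware search objective described above, the following is a minimal sketch of a multi-objective NAS fitness function that enforces a hard 256 KB SRAM budget and otherwise trades accuracy off against model size and latency. This is illustrative only: the function name, weights, and penalty form are assumptions for exposition, not the paper's actual algorithm.

```python
# Illustrative sketch of a hardware-aware NAS scoring function.
# All names and weights here are hypothetical, not taken from the paper.

SRAM_BUDGET_BYTES = 256 * 1024  # hard constraint from the target MCU


def nas_fitness(accuracy, model_size_bytes, latency_ms, peak_sram_bytes,
                w_size=1e-7, w_latency=1e-3):
    """Score a candidate architecture; higher is better.

    Candidates whose peak working set during inference exceeds the SRAM
    budget are infeasible and receive a score of -inf.
    """
    if peak_sram_bytes > SRAM_BUDGET_BYTES:
        return float("-inf")  # cannot run on the target microcontroller
    # Penalize size and latency so that, among similarly accurate
    # candidates, smaller and faster models are preferred.
    return accuracy - w_size * model_size_bytes - w_latency * latency_ms


# A feasible model outranks an infeasible one regardless of raw accuracy.
feasible = nas_fitness(0.74, 120_000, 80.0, 200 * 1024)
infeasible = nas_fitness(0.80, 500_000, 50.0, 400 * 1024)
```

In an actual search loop, a score like this would rank candidate architectures, with size, latency, and peak memory measured (or estimated) for the target microcontroller.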