Resistive Neural Hardware Accelerators

Kamilya Smagulova, Mohamed E. Fouda, Fadi Kurdahi, Khaled N. Salama, Ahmed Eltawil

Research output: Contribution to journal › Article › peer-review


Deep neural networks (DNNs), as a subset of machine learning (ML) techniques, make it possible to learn from real-world data and to make decisions in real time. However, their wide adoption is hindered by a number of software and hardware limitations. The existing general-purpose hardware platforms used to accelerate DNNs face new challenges associated with the growing volume of data and the exponentially increasing complexity of computations. Emerging nonvolatile memory (NVM) devices and the compute-in-memory (CIM) paradigm are creating a new generation of hardware architectures with increased computing and storage capabilities. In particular, the shift toward resistive random access memory (ReRAM)-based in-memory computing has great potential for area- and power-efficient inference and for training large-scale neural network architectures. This can accelerate the adoption of IoT-enabled AI technologies in our daily lives. In this survey, we review state-of-the-art ReRAM-based DNN many-core accelerators and show their superiority over CMOS counterparts. The review covers different aspects of the hardware and software realization of DNN accelerators, their present limitations, and future prospects. In particular, a comparison of the accelerators shows the need for new performance metrics and benchmarking standards. In addition, among the major concerns for efficient accelerator design is the lack of accurate simulation tools for software and hardware codesign.
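The core idea behind ReRAM-based compute-in-memory can be illustrated with a minimal numerical sketch (not taken from the survey; the conductance values and the differential weight encoding below are illustrative assumptions): weights are stored as cell conductances in a crossbar, inputs are applied as word-line voltages, and by Ohm's and Kirchhoff's laws each bit line accumulates an analog dot product in a single step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed high/low conductance states of a ReRAM cell, in siemens.
G_ON, G_OFF = 1e-4, 1e-6
# A small hypothetical DNN layer: 8 inputs -> 4 outputs.
weights = rng.uniform(-1, 1, (4, 8))

# Map signed weights onto a differential pair of conductances (G+, G-),
# a common encoding since physical conductances are non-negative.
scale = (G_ON - G_OFF) / np.abs(weights).max()
g_pos = G_OFF + scale * np.clip(weights, 0, None)
g_neg = G_OFF + scale * np.clip(-weights, 0, None)

# Read voltages applied to the word lines (volts).
v_in = rng.uniform(0, 0.2, 8)

# Bit-line currents: the crossbar performs the matrix-vector product
# I = G @ V in the analog domain; the differential pair cancels G_OFF.
i_out = g_pos @ v_in - g_neg @ v_in

# The equivalent digital result, recovered by dividing out the mapping scale.
y_digital = weights @ v_in
assert np.allclose(i_out / scale, y_digital)
```

The sketch shows why a crossbar is attractive for inference: the multiply-accumulate happens where the weights are stored, avoiding the data movement that dominates energy in conventional accelerators.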
Original language: English (US)
Pages (from-to): 500-527
Number of pages: 28
Journal: Proceedings of the IEEE
Issue number: 5
State: Published - May 16, 2023

ASJC Scopus subject areas

  • Electrical and Electronic Engineering

