TY - JOUR
T1 - A hybrid approximate computing approach for associative in-memory processors
AU - Yantir, Hasan Erdem
AU - Eltawil, Ahmed M.
AU - Kurdahi, Fadi J.
N1 - Generated from Scopus record by KAUST IRTS on 2019-11-20
PY - 2018/12/1
Y1 - 2018/12/1
AB - The complexity of computational problems is rising faster than the capabilities of computational platforms, which are also becoming increasingly costly to operate due to their growing energy demands. This forces researchers to find alternative paradigms and methods for efficient computing. One promising paradigm is accelerating compute-intensive kernels using in-memory computing accelerators, where data movements are significantly reduced. Another increasingly popular method for improving energy efficiency is approximate computing. In this paper, we propose a methodology for efficient approximate in-memory computing. To maximize energy savings under given approximation constraints, a hybrid approach combining both voltage and precision scaling is presented. This can be applied to an associative memory-based architecture that can be implemented today using CMOS memories (SRAM) but can be seamlessly scaled to emerging ReRAM-based memory technology later with minimal effort. The proposed methodology is evaluated across a diverse set of domains, including image processing, machine learning, machine vision, and digital signal processing. Compared to full-precision, unscaled implementations, average energy savings of 5.17× and 59.11× and speedups of 2.1× and 3.24× are reported for the SRAM-based and ReRAM-based architectures, respectively.
UR - https://ieeexplore.ieee.org/document/8402197/
UR - http://www.scopus.com/inward/record.url?scp=85049437556&partnerID=8YFLogxK
U2 - 10.1109/JETCAS.2018.2852701
DO - 10.1109/JETCAS.2018.2852701
M3 - Article
SN - 2156-3357
VL - 8
JO - IEEE Journal on Emerging and Selected Topics in Circuits and Systems
JF - IEEE Journal on Emerging and Selected Topics in Circuits and Systems
IS - 4
ER -