Publication: IEEE TETCI

Accurate and Energy-Efficient Bit-Slicing for RRAM-Based Neural Networks

Abstract

The computation-in-memory (CIM) paradigm leverages emerging memory technologies such as resistive random access memories (RRAMs) to process data within the memory itself. This alleviates the memory-processor bottleneck and yields much higher hardware efficiency than conventional von Neumann architectures. CIM is therefore an attractive option for applications such as neural networks, which require a huge number of data-transfer operations on conventional hardware. CIM-based neural networks typically employ a bit-slicing scheme that represents a single neural weight using multiple RRAM devices (called slices) to meet high bit-precision demands. However, such networks suffer significant accuracy degradation due to the non-zero $G_{\min}$ error, in which a zero weight in the neural network is represented by an RRAM device with a non-zero conductance. This paper proposes an unbalanced bit-slicing scheme to mitigate the impact of the non-zero $G_{\min}$ error. It does so by allocating appropriate sensing margins to different slices based on their binary positions, and it tunes these margins to favor either high accuracy or energy efficiency. The sensing-margin allocation is supported by 2's complement arithmetic, which further reduces the influence of the non-zero $G_{\min}$ error. Simulation results show that the proposed scheme achieves up to 7.3× higher accuracy and up to 7.8× more correct operations per unit of energy compared to the state of the art.
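
The bit-slicing representation and the non-zero $G_{\min}$ error described above can be made concrete with a small sketch. The Python snippet below is an illustration, not the paper's implementation: it assumes 8-bit 2's-complement weights split into four 2-bit slices and hypothetical conductance bounds (G_MIN, G_MAX), and it shows how a column of zero weights still draws current because every slice is programmed to a non-zero minimum conductance.

```python
# Minimal sketch (assumed parameters, not the paper's values) of bit-slicing
# weights onto RRAM conductances and of the resulting non-zero G_min error.

G_MIN = 1e-6      # smallest programmable conductance (S); non-zero on real RRAM
G_MAX = 1e-4      # largest programmable conductance (S); illustrative value
BITS_PER_SLICE = 2
NUM_SLICES = 4                     # 4 slices x 2 bits = 8-bit weights
LEVELS = 2 ** BITS_PER_SLICE

def weight_to_slices(w: int) -> list[int]:
    """Split an 8-bit 2's-complement weight into 2-bit slices, LSB first."""
    u = w & 0xFF                   # 2's-complement encoding of the weight
    return [(u >> (BITS_PER_SLICE * i)) & (LEVELS - 1) for i in range(NUM_SLICES)]

def slice_conductance(level: int) -> float:
    """Map a slice level to a device conductance; level 0 still yields G_MIN."""
    return G_MIN + level * (G_MAX - G_MIN) / (LEVELS - 1)

def column_currents(weights: list[int], inputs: list[float]) -> list[float]:
    """Per-slice column currents of an analog dot product (I = sum of G * V)."""
    currents = [0.0] * NUM_SLICES
    for w, v in zip(weights, inputs):
        for i, level in enumerate(weight_to_slices(w)):
            currents[i] += slice_conductance(level) * v
    return currents

# A column of all-zero weights should ideally draw no current, but every slice
# sits at G_MIN, so the accumulated current is non-zero: this is the non-zero
# G_min error that the unbalanced bit-slicing scheme aims to mitigate.
zero_column = column_currents([0] * 128, [0.5] * 128)
print(zero_column)   # each entry is 128 * G_MIN * 0.5, not 0
```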
