Publication
DAC 2021
Poster
Reliable RRAM-based In-Memory Computing in Light of Model Stability
Abstract
RRAM-based in-memory computing (IMC) effectively accelerates DNNs. Quantization and pruning improve hardware performance but aggravate the effect of RRAM device variations and reduce post-mapping accuracy. This work proposes model stability as a new metric to guide algorithmic solutions. Based on 65nm statistical RRAM data, we incorporate algorithm and architecture parameters to benchmark post-mapping accuracy and hardware performance. Furthermore, we develop a novel variation-aware training method to improve model stability, in which there exists an optimal scale of training variation for best accuracy. Experimental evaluation shows up to 21% improvement in post-mapping accuracy on the CIFAR-10, CIFAR-100, and SVHN datasets.
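As a rough illustration only (not the paper's actual implementation), variation-aware training is commonly realized by perturbing the weights with random noise during each training forward pass, so the network learns parameters that stay accurate when mapped onto variable RRAM devices. The minimal PyTorch sketch below assumes multiplicative Gaussian noise; the layer name `NoisyLinear` and the noise scale `sigma` are illustrative assumptions standing in for the "scale of training variation" the abstract refers to.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Linear):
    """Linear layer that injects multiplicative Gaussian noise into its
    weights during training, loosely emulating RRAM conductance variation.
    Hypothetical sketch: the noise model and `sigma` are assumptions,
    not the method from the paper."""

    def __init__(self, in_features, out_features, sigma=0.1, bias=True):
        super().__init__(in_features, out_features, bias=bias)
        self.sigma = sigma

    def forward(self, x):
        if self.training and self.sigma > 0:
            # Sample fresh perturbations each forward pass so the model
            # learns weights that are stable under device variation.
            noise = torch.randn_like(self.weight) * self.sigma
            weight = self.weight * (1.0 + noise)
        else:
            # Evaluate with clean weights (variation is applied by the
            # hardware, or by a separate post-mapping simulation).
            weight = self.weight
        return F.linear(x, weight, self.bias)
```

Under this reading, finding the "optimal scale of training variation" amounts to sweeping `sigma` and selecting the value that maximizes accuracy after mapping the model to hardware: too little noise leaves the model fragile, too much degrades the clean solution.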