Publication
AICAS 2020
Conference paper
Accelerating Deep Neural Networks with Analog Memory Devices
Abstract
Accelerating the training and inference of Deep Neural Networks (DNNs) with non-volatile memory (NVM) arrays, such as Phase-Change Memory (PCM), offers promising gains in energy efficiency and speed relative to digital implementations on CPUs and GPUs. By combining PCM devices with CMOS circuits, high training accuracy can be achieved, yielding software-equivalent results on small and medium datasets. In addition, weights encoded across multiple PCM devices enable high-speed, low-power inference, as shown here for Long Short-Term Memory (LSTM) networks.
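The multi-device weight encoding mentioned in the abstract can be sketched as follows. In schemes of this kind, a signed weight is split between a more-significant and a less-significant pair of conductances, with each signed value stored as the difference of two non-negative device conductances. The significance factor `F`, the conductance range `G_MAX`, and the coarse quantization step below are illustrative assumptions, not values from the paper.

```python
F = 100.0    # assumed significance factor between the two conductance pairs
G_MAX = 1.0  # assumed maximum conductance per device (arbitrary units)
LEVELS = 16  # assumed number of coarse levels for the significant pair

def encode_weight(w):
    """Split a signed weight w across two PCM pairs so that
    w ~= (Gp - Gm) + (gp - gm) / F.
    The significant pair stores a coarsely quantized value; the
    less-significant pair stores the amplified residual."""
    step = 2 * G_MAX / LEVELS
    major = max(-G_MAX, min(G_MAX, round(w / step) * step))
    minor = max(-G_MAX, min(G_MAX, (w - major) * F))
    # A signed value maps onto a differential pair of conductances:
    # positive part on one device, negative part on the other.
    Gp, Gm = max(major, 0.0), max(-major, 0.0)
    gp, gm = max(minor, 0.0), max(-minor, 0.0)
    return Gp, Gm, gp, gm

def decode_weight(Gp, Gm, gp, gm):
    """Recombine the two differential pairs into one effective weight."""
    return (Gp - Gm) + (gp - gm) / F

w = 0.37
decoded = decode_weight(*encode_weight(w))
```

Because the residual is amplified by `F` before being written to the less-significant pair, small programming errors on that pair are attenuated by `1/F` at read-out, which is one reason multi-device encoding improves inference accuracy.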