Workshop paper

Simulating the Training and Inference of Analog In-Memory Computing Systems

Abstract

Analog in-memory computing (AIMC) is a promising approach to reduce the latency and energy consumption of Deep Neural Network (DNN) inference and training. However, the noisy and non-linear device characteristics and the non-ideal peripheral circuitry of AIMC chips require DNNs to be adapted before deployment on such hardware in order to achieve accuracy equivalent to digital computing. Customized simulation frameworks that integrate seamlessly with modern deep learning frameworks can be used to model the key circuit and device behaviors of these systems efficiently and accurately. In 2021, IBM released the IBM Analog Hardware Acceleration Kit (AIHWKit), freely available at https://github.com/IBM/aihwkit, which simulates the inference and training of DNNs using AIMC. More recently, a faster variant with fewer features that supports large language models, AIHWKit-Lightning, was released at https://github.com/IBM/aihwkit-lightning. In this workshop session, I will provide a deep dive into how inference and training can be performed using AIHWKit and AIHWKit-Lightning, and how users can expand and customize the toolkits for their own needs. Participants will be equipped with practical skills to model the training and inference of complex analog in-memory computing systems, using models developed from experimental data.