Divya Taneja, Jonathan Grenier, et al.
ECTC 2024
The recent success of Transformer-based language models has been driven by very large model sizes, tremendously increasing the compute, memory, and energy requirements of neural networks. The fully connected layers that dominate Transformers can be mapped onto analog non-volatile memory, implementing ‘weight-stationary’ architectures with in-place multiply-and-accumulate computation and reduced off-chip data transfer, offering significant energy benefits. I will review key challenges for analog in-memory computing across device, circuit, architecture, and algorithmic aspects, highlighting IBM’s cross-layer AnalogAI research.
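The energy argument in this abstract rests on the weight-stationary dataflow: the weight matrix stays resident in the crossbar as conductances, and only activations move, with each multiply-and-accumulate happening in place along the bit lines. A minimal NumPy sketch of how such a noisy analog matrix-vector multiply is often modeled is shown below; the noise magnitudes and dimensions are illustrative assumptions, not measured device values or IBM's actual simulation flow.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for one fully connected layer.
in_features, out_features = 512, 256

# Ideal FP32 weights, as trained in software.
W = rng.standard_normal((out_features, in_features)).astype(np.float32)

# Programming weights into analog NVM conductances is imperfect:
# model it as additive Gaussian noise scaled to the weight range.
# (The 3% figure is an illustrative assumption.)
w_range = np.abs(W).max()
W_analog = W + 0.03 * w_range * rng.standard_normal(W.shape)

def analog_mvm(x, W_analog, out_noise_std=0.02):
    """Weight-stationary multiply-and-accumulate: the matrix stays in
    the crossbar; only the activation vector x moves. Each output is
    accumulated along a bit line and read with additive ADC/read noise."""
    y = W_analog @ x
    return y + out_noise_std * rng.standard_normal(y.shape)

x = rng.standard_normal(in_features).astype(np.float32)
y_ideal = W @ x
y_noisy = analog_mvm(x, W_analog)

rel_err = np.linalg.norm(y_noisy - y_ideal) / np.linalg.norm(y_ideal)
print(f"relative MVM error from device/readout noise: {rel_err:.3f}")
```

The nonzero output error in this toy model is exactly why the abstract lists algorithmic aspects alongside device and circuit ones: noise-aware training and similar techniques are needed to keep accuracy despite imperfect analog computation.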
Max Bloomfield, Amogh Wasti, et al.
ITherm 2025
David Stutz, Nandhini Chandramoorthy, et al.
MLSys 2021
Juan Miguel De Haro, Rubén Cano, et al.
IPDPS 2022