Samuele Ruffino, Kumudu Geethan Karunaratne, et al.
DATE 2024
Analog Non-Volatile Memory-based accelerators offer high-throughput, energy-efficient Multiply-Accumulate operations for the large Fully-Connected layers that dominate Transformer-based Large Language Models. We describe architectural, wafer-scale testing, chip-demo, and hardware-aware training efforts towards such accelerators, and quantify the unique raw-throughput and latency benefits of Fully-Weight-Stationary (rather than Partially-Weight-Stationary) systems.
Corey Liam Lammie, Hadjer Benmeziane, et al.
Nat. Rev. Electr. Eng.
Sidney Tsai
MRS Fall Meeting 2023
Olivier Maher, N. Harnack, et al.
DRC 2023