M. Aater Suleman, Onur Mutlu, et al.
ASPLOS 2009
DRAM-based main memories have read operations that destroy the read data, and as a result, must buffer large amounts of data on each array access to keep chip costs low. Unfortunately, system-level trends such as increased memory contention in multi-core architectures and data mapping schemes that improve memory parallelism result in only a small amount of the buffered data being accessed. This makes buffering large amounts of data on every memory array access energy-inefficient; yet organizing DRAM chips to buffer small amounts of data is costly, as others have shown [11]. Emerging non-volatile memories (NVMs) such as PCM, STT-RAM, and RRAM, however, do not have destructive read operations, opening up opportunities for employing small row buffers without incurring additional area penalty and/or design complexity. In this work, we discuss and evaluate architectural changes to enable small row buffers at a low cost in NVMs. We find that on a multi-core system, reducing the row buffer size can greatly reduce main memory dynamic energy compared to a DRAM baseline with large row sizes, without greatly affecting endurance, and for some NVM technologies, leads to improved performance.
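The abstract's core argument is that activating (buffering) an entire row is wasteful when only a small fraction of it is ever read, which matters most when multi-core interference keeps row buffer hit rates low. As a rough illustration only, the following sketch models average dynamic energy per access as a function of row buffer size and hit rate; the function name and every energy constant are hypothetical placeholders chosen for illustration, not figures from the paper.

# Illustrative (hypothetical) back-of-envelope model of how row buffer size
# affects main memory dynamic energy. All constants are made-up example
# values, not measurements from the cited work.

def dynamic_energy_per_access(row_size_bytes, hit_rate,
                              activate_energy_per_byte_pj=1.0,
                              column_access_energy_pj=50.0):
    """Average dynamic energy (pJ) per memory access.

    A row buffer miss activates (buffers) the whole row, so its cost grows
    with the row size; a hit pays only the column access cost.
    """
    activate_energy = activate_energy_per_byte_pj * row_size_bytes
    miss_energy = activate_energy + column_access_energy_pj
    hit_energy = column_access_energy_pj
    return hit_rate * hit_energy + (1.0 - hit_rate) * miss_energy

if __name__ == "__main__":
    # With low hit rates (typical under multi-core contention), the per-miss
    # activation cost of a large row (e.g., 8 KB) dominates, while a small
    # row buffer keeps the miss cost modest.
    for row_bytes in (8192, 512, 64):
        for hit_rate in (0.2, 0.5, 0.8):
            e = dynamic_energy_per_access(row_bytes, hit_rate)
            print(f"row={row_bytes:5d} B  hit_rate={hit_rate:.1f}  "
                  f"avg energy={e:8.1f} pJ")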
Gagandeep Singh, Dionysios Diamantopoulos, et al.
FPL 2020
Yi Kang, Wei Huang, et al.
ICCD 2012
Jing Li, Robert Montoye, et al.
VLSI Technology 2013