Infoscience

EPFL, École polytechnique fédérale de Lausanne
conference paper

BuMP: Bulk Memory Access Prediction and Streaming

Volos, Stavros • Picorel, Javier • Falsafi, Babak • Grot, Boris
2014
Proceedings of the 47th Annual IEEE/ACM International Symposium on Microarchitecture
47th Annual IEEE/ACM International Symposium on Microarchitecture

With the end of Dennard scaling, server power has emerged as the limiting factor in the quest for more capable datacenters. Without the benefit of supply voltage scaling, it is essential to lower the energy per operation to improve server efficiency. As the industry moves to lean-core server processors, the energy bottleneck is shifting toward main memory as a chief source of server energy consumption in modern datacenters. Maximizing the energy efficiency of today's DRAM chips and interfaces requires amortizing the costly DRAM page activations over multiple row buffer accesses. This work introduces Bulk Memory Access Prediction and Streaming, or BuMP. We make the observation that a significant fraction (59-79%) of all memory accesses fall into DRAM pages with high access density, meaning that the majority of their cache blocks will be accessed within a modest time frame of the first access. Accesses to high-density DRAM pages include not only memory reads in response to load instructions, but also reads stemming from store instructions as well as memory writes upon dirty LLC evictions. The remaining accesses go to low-density pages with virtually unpredictable reference patterns (e.g., hashed key lookups). BuMP employs a low-cost predictor to identify high-density pages and triggers bulk transfer operations upon the first read or write to the page. In doing so, BuMP enforces high row buffer locality where it is profitable, reducing DRAM energy per access by 23% and improving server throughput by 11% across a wide range of server applications.
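The mechanism the abstract describes — a low-cost predictor that classifies DRAM pages as high- or low-density and triggers a bulk transfer on the first access to a predicted-dense page — can be illustrated with a minimal sketch. This is not the paper's actual design: the table organization, the use of the triggering instruction's PC as the prediction index, the saturating-counter training, and all sizes and thresholds below are hypothetical choices for illustration only.

```python
# Illustrative sketch of a page-density predictor in the spirit of BuMP.
# All names, parameters, and thresholds are hypothetical, not the paper's design.

BLOCKS_PER_PAGE = 16      # hypothetical blocks per DRAM page
DENSITY_THRESHOLD = 0.75  # fraction of blocks touched that counts as "dense"


class DensityPredictor:
    def __init__(self):
        self.table = {}       # trigger PC -> 2-bit saturating counter (0..3)
        self.open_pages = {}  # page -> (trigger PC, set of accessed block offsets)

    def access(self, pc, page, block):
        """Record an access; return True if a bulk (whole-page) transfer
        should be triggered, i.e., this is the first touch of the page and
        the predictor classifies it as high-density."""
        if page not in self.open_pages:
            self.open_pages[page] = (pc, {block})
            return self.table.get(pc, 0) >= 2  # counter in a "dense" state
        self.open_pages[page][1].add(block)
        return False

    def close_page(self, page):
        """On page eviction, train the predictor with the observed footprint."""
        pc, blocks = self.open_pages.pop(page)
        ctr = self.table.get(pc, 0)
        if len(blocks) / BLOCKS_PER_PAGE >= DENSITY_THRESHOLD:
            self.table[pc] = min(ctr + 1, 3)
        else:
            self.table[pc] = max(ctr - 1, 0)


if __name__ == "__main__":
    pred = DensityPredictor()
    # A streaming loop at (hypothetical) PC 0x40 touches most of two pages.
    for page in (1, 2):
        for blk in range(14):
            pred.access(0x40, page, blk)
        pred.close_page(page)
    # The next first touch by the same PC predicts a dense page.
    print(pred.access(0x40, 3, 0))  # True
```

In this sketch, prediction happens only on the first touch of a page, which matches the abstract's "triggers bulk transfer operations upon the first read or write to the page"; training happens lazily when the page's footprint is known.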

Type
conference paper
DOI
10.1109/MICRO.2014.44
Author(s)
Volos, Stavros • Picorel, Javier • Falsafi, Babak • Grot, Boris
Date Issued
2014
Published in
Proceedings of the 47th Annual IEEE/ACM International Symposium on Microarchitecture
Start page
545
End page
557
Subjects
energy efficiency • memory streaming • DRAM
Editorial or Peer reviewed
REVIEWED
Written at
EPFL
EPFL units
PARSA
Event name
47th Annual IEEE/ACM International Symposium on Microarchitecture
Event date
December 13-17, 2014
Available on Infoscience
October 2, 2014
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/107204

Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved.