TY - GEN
T1 - A Cost-effective and Energy-efficient Architecture for Die-stacked DRAM/NVM Memory Systems
AU - Guo, Yuhua
AU - Xiao, Weijun
AU - Liu, Qing
AU - He, Xubin
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/7/2
Y1 - 2018/7/2
N2 - Traditional DRAM-based memory systems face two major scalability issues. First, the memory wall has become a major performance bottleneck. Second, conventional memory systems consume more power as capacity increases, accounting for as much as 40% of total system power. These issues hinder the scaling of DRAM-based memory systems. Fortunately, emerging memory technologies, such as high bandwidth memory (HBM) and phase change memory (PCM), have the potential to address them. However, no single memory technology can overcome both issues at once. Therefore, a hybrid memory system is a promising way to build a high-performance, large-capacity, and energy-efficient memory system. To achieve this goal, we propose a cost-effective and energy-efficient architecture for HBM/PCM memory systems, called Dual Role HBM (DR-HBM). In DR-HBM, the HBM plays two roles and is divided into two parts: a small portion, called the HBM cache, serves as a cache for the PCM, while the remaining HBM is used as part of main memory. Furthermore, the HBM cache is also used to track page hotness without additional hardware support; hot pages are migrated to the main-memory portion of HBM when they are evicted from the HBM cache. Experimental results show that DR-HBM outperforms two state-of-the-art hybrid memory systems, CAMEO [1] and RaPP [2]. Compared to a baseline in which both HBM and PCM are architected as part of main memory without page migration, DR-HBM improves performance by 63% on average.
AB - Traditional DRAM-based memory systems face two major scalability issues. First, the memory wall has become a major performance bottleneck. Second, conventional memory systems consume more power as capacity increases, accounting for as much as 40% of total system power. These issues hinder the scaling of DRAM-based memory systems. Fortunately, emerging memory technologies, such as high bandwidth memory (HBM) and phase change memory (PCM), have the potential to address them. However, no single memory technology can overcome both issues at once. Therefore, a hybrid memory system is a promising way to build a high-performance, large-capacity, and energy-efficient memory system. To achieve this goal, we propose a cost-effective and energy-efficient architecture for HBM/PCM memory systems, called Dual Role HBM (DR-HBM). In DR-HBM, the HBM plays two roles and is divided into two parts: a small portion, called the HBM cache, serves as a cache for the PCM, while the remaining HBM is used as part of main memory. Furthermore, the HBM cache is also used to track page hotness without additional hardware support; hot pages are migrated to the main-memory portion of HBM when they are evicted from the HBM cache. Experimental results show that DR-HBM outperforms two state-of-the-art hybrid memory systems, CAMEO [1] and RaPP [2]. Compared to a baseline in which both HBM and PCM are architected as part of main memory without page migration, DR-HBM improves performance by 63% on average.
UR - http://www.scopus.com/inward/record.url?scp=85066472705&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85066472705&partnerID=8YFLogxK
U2 - 10.1109/PCCC.2018.8711335
DO - 10.1109/PCCC.2018.8711335
M3 - Conference contribution
AN - SCOPUS:85066472705
T3 - 2018 IEEE 37th International Performance Computing and Communications Conference, IPCCC 2018
BT - 2018 IEEE 37th International Performance Computing and Communications Conference, IPCCC 2018
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 37th IEEE International Performance Computing and Communications Conference, IPCCC 2018
Y2 - 17 November 2018 through 19 November 2018
ER -