TY - GEN
T1 - An efficient transactional memory algorithm for computing minimum spanning forest of sparse graphs
AU - Kang, Seunghwa
AU - Bader, David A.
PY - 2009
Y1 - 2009
N2 - Due to the power wall, the memory wall, and the ILP wall, we are facing the end of ever-increasing single-threaded performance. For this reason, multicore and manycore processors are emerging as a new paradigm to pursue. However, fully exploiting all the cores in a chip often requires parallel programming, and the complexity of parallel programming raises a significant concern. Data synchronization is a major source of this programming complexity, and Transactional Memory has been proposed to reduce the difficulty caused by data synchronization requirements while providing high scalability and low performance overhead. The previous literature on Transactional Memory mostly focuses on architectural designs; its impact on algorithms and applications has not yet been studied thoroughly. In this paper, we investigate Transactional Memory from the algorithm designer's perspective. This paper presents an algorithmic model to assist in the design of efficient Transactional Memory algorithms and a novel Transactional Memory algorithm for computing a minimum spanning forest of sparse graphs. We emphasize multiple Transactional Memory-related design issues in presenting our algorithm. We also provide experimental results on an existing software Transactional Memory system. Our algorithm demonstrates excellent scalability in the experiments, but at the same time, the experimental results reveal a clear limitation of software Transactional Memory due to its high performance overhead. Based on our experience, we highlight the necessity of efficient hardware support for Transactional Memory to realize the potential of the technology.
KW - Minimum spanning forest
KW - Minimum spanning tree
KW - Transactional memory
UR - http://www.scopus.com/inward/record.url?scp=67650093462&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=67650093462&partnerID=8YFLogxK
U2 - 10.1145/1504176.1504182
DO - 10.1145/1504176.1504182
M3 - Conference contribution
AN - SCOPUS:67650093462
SN - 9781605583976
T3 - Proceedings of the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPOPP
SP - 15
EP - 24
BT - Proceedings of the 2009 ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPoPP'09
T2 - 2009 ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPoPP'09
Y2 - 14 February 2009 through 18 February 2009
ER -