Abstract
Betweenness centrality is a graph analytic that quantifies the importance of a vertex based on the number of shortest paths that it lies on. As such, betweenness centrality is a building block for graph analysis tools and is used by many applications, including finding bottlenecks in communication networks and community detection. Computing betweenness centrality is computationally demanding, O(V² + V·E) for the best known algorithm, which motivates the use of parallelism. Parallelism is especially needed for large graphs with millions of vertices and billions of edges. While the memory requirements for computing betweenness centrality are not as demanding, O(V + E) for the best known sequential algorithm, these bounds increase for different parallel algorithms. We show that it is possible to reduce the memory requirements for computing betweenness centrality from O(V + E) to O(V) at the expense of doing additional traversals. We show that not only does this not hurt performance, it actually improves performance for coarse-grained parallelism. Further, we show that the new approach allows parallel scaling that previously was not possible. For example, the new approach scales to 40 x86 cores on a graph with 32M vertices and 2B edges, whereas the previous approach scales only to 6 cores because of its memory requirements. We also analyze fine-grained parallel betweenness centrality on both the x86 and the Cray XMT.
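The memory trade-off described in the abstract can be made concrete with a Brandes-style sketch. The Python below is a minimal sequential illustration under stated assumptions, not the paper's implementation: in the backward dependency-accumulation sweep, a vertex v is a predecessor of w exactly when dist[v] = dist[w] − 1, so the adjacency lists can be re-scanned (additional traversal work) instead of storing explicit predecessor lists, dropping the auxiliary memory from O(V + E) to O(V). The function name betweenness_o_v_memory and the dict-based graph representation are illustrative assumptions.

```python
from collections import deque

def betweenness_o_v_memory(adj):
    """Betweenness centrality with O(V) auxiliary memory per traversal.

    adj: dict mapping each vertex to a list of neighbours (unweighted,
    undirected). Classic Brandes keeps explicit predecessor lists,
    which need O(V + E) memory; here the backward sweep re-scans each
    vertex's adjacency list and uses the distance array to recognise
    predecessors on the fly -- extra traversals, O(V) extra memory.
    (Illustrative sketch, not the paper's implementation.)
    """
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # Forward phase: BFS counting shortest paths, no predecessor lists.
        sigma = {v: 0 for v in adj}    # number of shortest s-v paths
        dist = {v: -1 for v in adj}    # BFS level; -1 means undiscovered
        sigma[s], dist[s] = 1, 0
        order, q = [], deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:                 # w discovered for the first time
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:      # a shortest path to w runs via v
                    sigma[w] += sigma[v]
        # Backward phase: accumulate dependencies in reverse BFS order.
        # v is a predecessor of w exactly when dist[v] == dist[w] - 1,
        # so the edges are traversed again instead of being stored.
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in adj[w]:
                if dist[v] == dist[w] - 1:
                    delta[v] += (sigma[v] / sigma[w]) * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```

On an undirected graph each unordered pair of endpoints is counted twice; dividing the final scores by two gives the conventional normalization. The extra cost is one additional scan of each adjacency list per source, which is the "additional traversals" trade-off the abstract refers to.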
Original language | English (US) |
---|---|
Pages (from-to) | 399-408 |
Number of pages | 10 |
Journal | Procedia Computer Science |
Volume | 18 |
DOIs | |
State | Published - 2013 |
Externally published | Yes |
Event | 13th Annual International Conference on Computational Science, ICCS 2013 - Barcelona, Spain. Duration: Jun 5 2013 → Jun 7 2013 |
All Science Journal Classification (ASJC) codes
- General Computer Science
Keywords
- Betweenness centrality
- Experimental algorithms
- Graph algorithms
- Optimizations
- Parallel algorithms