TY - GEN
T1 - Leopard
T2 - 42nd IEEE International Conference on Distributed Computing Systems, ICDCS 2022
AU - Hu, Kexin
AU - Guo, Kaiwen
AU - Tang, Qiang
AU - Zhang, Zhenfeng
AU - Cheng, Hao
AU - Zhao, Zhiyang
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - With the emergence of large-scale decentralized applications, a scalable and efficient Byzantine Fault Tolerant (BFT) protocol that supports hundreds of replicas is desirable. Although the throughput of existing leader-based BFT protocols reaches a high level of 10^5 requests per second at small scales, it drops significantly as the scale increases. This paper focuses on preserving high throughput as the scale of a BFT protocol grows. We identify and analyze a major bottleneck in leader-based BFT protocols caused by the excessive workload of the leader at large scales. We define a new metric, the scaling factor, to capture whether a BFT protocol will become bottlenecked as the scale grows; it can be used to measure both the throughput and the scalability of BFT protocols. We propose "Leopard", the first leader-based BFT protocol that scales to several hundred replicas and, more importantly, preserves high throughput. We remove the bottleneck with a technique that achieves the ideal constant scaling factor, taking full advantage of otherwise idle resources and balancing the leader's workload among all replicas. We implemented Leopard and evaluated its performance against HotStuff, a state-of-the-art leader-based BFT protocol, in extensive experiments with up to 600 replicas. The results show that Leopard achieves significant throughput improvements. In particular, its throughput remains at the high level of 10^5 requests per second at a scale of 600, and it achieves 5× the throughput of HotStuff at a scale of 300, with the gap widening as the scale further increases.
AB - With the emergence of large-scale decentralized applications, a scalable and efficient Byzantine Fault Tolerant (BFT) protocol that supports hundreds of replicas is desirable. Although the throughput of existing leader-based BFT protocols reaches a high level of 10^5 requests per second at small scales, it drops significantly as the scale increases. This paper focuses on preserving high throughput as the scale of a BFT protocol grows. We identify and analyze a major bottleneck in leader-based BFT protocols caused by the excessive workload of the leader at large scales. We define a new metric, the scaling factor, to capture whether a BFT protocol will become bottlenecked as the scale grows; it can be used to measure both the throughput and the scalability of BFT protocols. We propose "Leopard", the first leader-based BFT protocol that scales to several hundred replicas and, more importantly, preserves high throughput. We remove the bottleneck with a technique that achieves the ideal constant scaling factor, taking full advantage of otherwise idle resources and balancing the leader's workload among all replicas. We implemented Leopard and evaluated its performance against HotStuff, a state-of-the-art leader-based BFT protocol, in extensive experiments with up to 600 replicas. The results show that Leopard achieves significant throughput improvements. In particular, its throughput remains at the high level of 10^5 requests per second at a scale of 600, and it achieves 5× the throughput of HotStuff at a scale of 300, with the gap widening as the scale further increases.
KW - BFT
KW - high throughput
KW - partially synchronous
KW - scalability
UR - http://www.scopus.com/inward/record.url?scp=85140900812&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85140900812&partnerID=8YFLogxK
U2 - 10.1109/ICDCS54860.2022.00024
DO - 10.1109/ICDCS54860.2022.00024
M3 - Conference contribution
AN - SCOPUS:85140900812
T3 - Proceedings - International Conference on Distributed Computing Systems
SP - 157
EP - 167
BT - Proceedings - 2022 IEEE 42nd International Conference on Distributed Computing Systems, ICDCS 2022
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 10 July 2022 through 13 July 2022
ER -