TY - JOUR
T1 - A Unified Linear Speedup Analysis of Federated Averaging and Nesterov FedAvg
AU - Qu, Zhaonan
AU - Lin, Kaixiang
AU - Li, Zhaojian
AU - Zhou, Jiayu
AU - Zhou, Zhengyuan
N1 - Publisher Copyright:
© 2023 AI Access Foundation. All rights reserved.
PY - 2023
Y1 - 2023
N2 - Federated learning (FL) learns a model jointly from a set of participating devices without requiring them to share their privately held data. Non-i.i.d. data across the network, low device participation, high communication costs, and the mandate that data remain private pose challenges for understanding the convergence of FL algorithms, particularly regarding how convergence scales with the number of participating devices. In this paper, we focus on Federated Averaging (FedAvg), one of the most popular and effective FL algorithms in use today, as well as its Nesterov accelerated variant, and conduct a systematic study of how their convergence scales with the number of participating devices under non-i.i.d. data and partial participation in convex settings. We provide a unified analysis that establishes convergence guarantees for FedAvg under strongly convex, convex, and overparameterized strongly convex problems. We show that FedAvg enjoys linear speedup in each case, although with different convergence rates and communication efficiencies. For strongly convex and convex problems, we also characterize the corresponding convergence rates for the Nesterov accelerated FedAvg algorithm, which are the first linear speedup guarantees for momentum variants of FedAvg in convex settings. Empirical studies of the algorithms in various settings support our theoretical results.
UR - https://www.scopus.com/pages/publications/85181924630
U2 - 10.1613/JAIR.1.15180
DO - 10.1613/JAIR.1.15180
M3 - Article
AN - SCOPUS:85181924630
SN - 1076-9757
VL - 78
SP - 1143
EP - 1200
JO - Journal of Artificial Intelligence Research
JF - Journal of Artificial Intelligence Research
ER -