Abstract
The proliferation of Internet-of-Things (IoT) devices and cloud-computing applications over siloed data centers is motivating renewed interest in the collaborative training of a shared model by multiple individual clients via federated learning (FL). To improve the communication efficiency of FL implementations in wireless systems, recent works have proposed compression and dimension reduction mechanisms, along with digital and analog transmission schemes that account for channel noise, fading, and interference. The prior art has mainly focused on star topologies consisting of distributed clients and a central server. In contrast, this paper studies FL over wireless device-to-device (D2D) networks by providing theoretical insights into the performance of digital and analog implementations of decentralized stochastic gradient descent (DSGD). First, we introduce generic digital and analog wireless implementations of communication-efficient DSGD algorithms, leveraging random linear coding (RLC) for compression and over-the-air computation (AirComp) for simultaneous analog transmissions. Next, under the assumptions of convexity and connectivity, we provide convergence bounds for both implementations. The results demonstrate the dependence of the optimality gap on the connectivity and on the signal-to-noise ratio (SNR) levels in the network. The analysis is corroborated by experiments on an image-classification task.
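To make the ingredients of the abstract concrete, the following is a minimal, illustrative sketch (not the paper's exact algorithm) of communication-efficient DSGD over a D2D ring: model innovations are compressed with a random-linear-coding (RLC) style random projection, and additive Gaussian noise stands in for the analog over-the-air channel. The CHOCO-SGD-style public-copy bookkeeping, the damping factor `alpha`, and all hyperparameters are assumptions chosen for the demo, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

d, m = 20, 10          # model dimension and RLC projection size (m < d)
n = 5                  # number of D2D devices on a ring
lr, gamma = 0.05, 0.2  # SGD step size and consensus step size
noise_std = 0.01       # channel-noise level (sets the effective SNR)

# Ring topology: doubly stochastic mixing matrix over a connected graph,
# matching the connectivity assumption in the convergence analysis.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

# Convex local losses f_i(x) = 0.5 * ||x - b_i||^2; the consensus
# optimum is mean(b_i), matching the paper's convexity assumption.
b = rng.normal(size=(n, d))
x = rng.normal(size=(n, d))   # private per-node models
x_hat = np.zeros((n, d))      # public (gossiped) copies, known network-wide

# For A with i.i.d. N(0, 1/m) entries, decoding c -> A.T @ c is unbiased
# with relative variance about (d + 1) / m; this damping keeps the
# public-copy update contractive despite that variance.
alpha = m / (m + d + 1)

for t in range(3000):
    # Fresh shared RLC matrix per round (common-randomness assumption).
    A = rng.normal(size=(m, d)) / np.sqrt(m)

    # Each node RLC-encodes the innovation x_i - x_hat_i and sends it
    # over an additive-noise channel (a stand-in for analog AirComp).
    tx = (x - x_hat) @ A.T
    rx = tx + noise_std * rng.normal(size=tx.shape)
    x_hat = x_hat + alpha * (rx @ A)   # all nodes update public copies

    # Gossip on the public copies, then a local gradient step
    # (the gradient of 0.5 * ||x - b_i||^2 is x - b_i).
    x = x + gamma * (W @ x_hat - x_hat) - lr * (x - b)

opt = b.mean(axis=0)
print("avg-model gap:   ", np.linalg.norm(x.mean(axis=0) - opt))
print("per-node gap:    ", np.linalg.norm(x - opt, axis=1).mean())
```

Rerunning with a larger `noise_std` (lower SNR) or a smaller `m` (heavier compression) visibly widens the per-node gap, which loosely mirrors the abstract's claim that the optimality gap depends on connectivity and SNR.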
Original language | English (US)
---|---
Pages (from-to) | 3723-3741
Number of pages | 19
Journal | IEEE Journal on Selected Areas in Communications
Volume | 39
Issue number | 12
DOIs |
State | Published - Dec 1 2021
All Science Journal Classification (ASJC) codes
- Computer Networks and Communications
- Electrical and Electronic Engineering
Keywords
- D2D networks
- Federated learning
- decentralized stochastic gradient descent
- distributed learning
- over-the-air computation
- random linear coding