Complement Sparsification: Low-Overhead Model Pruning for Federated Learning

Xiaopeng Jiang, Cristian Borcea

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Federated Learning (FL) is a privacy-preserving distributed deep learning paradigm that involves substantial communication and computation effort, which is a problem for resource-constrained mobile and IoT devices. Model pruning/sparsification develops sparse models that could solve this problem, but existing sparsification solutions cannot simultaneously satisfy the requirements for low bidirectional communication overhead between the server and the clients, low computation overhead at the clients, and good model accuracy, under the FL assumption that the server does not have access to raw data to fine-tune the pruned models. We propose Complement Sparsification (CS), a pruning mechanism that satisfies all these requirements through complementary and collaborative pruning done at the server and the clients. At each round, CS creates a global sparse model that contains the weights that capture the general data distribution of all clients, while the clients create local sparse models with the weights pruned from the global model to capture the local trends. For improved model performance, these two types of complementary sparse models are aggregated into a dense model in each round, which is subsequently pruned in an iterative process. CS requires little computation overhead on top of vanilla FL for both the server and the clients. We demonstrate that CS is an approximation of vanilla FL and, thus, its models perform well. We evaluate CS experimentally with two popular FL benchmark datasets. CS achieves a substantial reduction in bidirectional communication, while achieving performance comparable with vanilla FL. In addition, CS outperforms baseline pruning mechanisms for FL.
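The round structure described in the abstract can be sketched in a few lines of numpy. This is a hypothetical illustration, not the authors' implementation: `magnitude_prune`, `complement_round`, and the flat-array model representation are assumptions made for brevity, and the sketch omits local training, sampling of clients, and the iterative sparsity schedule.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with smallest magnitude.

    Hypothetical helper: a simple global magnitude criterion, used here
    only to stand in for the server-side pruning step.
    """
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def complement_round(dense, client_updates, sparsity=0.5):
    """One CS-style round (sketch).

    The server prunes the dense model into a global sparse model;
    each client's update is restricted to the complement (the weight
    positions the server pruned away); the server then merges the
    global sparse weights with the averaged local complements back
    into a dense model for the next round.
    """
    global_sparse = magnitude_prune(dense, sparsity)
    complement_mask = (global_sparse == 0.0)
    # Clients would train locally and send back only complement weights.
    local_sparse = [u * complement_mask for u in client_updates]
    # Aggregate the two complementary sparse models into a dense one.
    return global_sparse + np.mean(local_sparse, axis=0)
```

Because the server and client masks are exact complements, each message carries only the non-zero half of the model, which is where the bidirectional communication savings come from.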

Original language: English (US)
Title of host publication: AAAI-23 Technical Tracks 7
Editors: Brian Williams, Yiling Chen, Jennifer Neville
Publisher: AAAI Press
Number of pages: 9
ISBN (Electronic): 9781577358800
State: Published - Jun 27 2023
Externally published: Yes
Event: 37th AAAI Conference on Artificial Intelligence, AAAI 2023 - Washington, United States
Duration: Feb 7 2023 - Feb 14 2023

Publication series

Name: Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023


Conference: 37th AAAI Conference on Artificial Intelligence, AAAI 2023
Country/Territory: United States

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence

