Straggler-Resilient Differentially-Private Decentralized Learning

Yauhen Yakimenka, Chung-Wei Weng, Hsuan-Yin Lin, Eirik Rosnes, Jörg Kliewer

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


We consider straggler resiliency in decentralized learning using stochastic gradient descent under the notion of network differential privacy (DP). In particular, we extend the recently proposed framework of privacy amplification by decentralization by Cyffers and Bellet to include training latency, comprising both computation and communication latency. Analytical results on both the convergence speed and the DP level are derived for training over a logical ring, for both a skipping scheme (which ignores stragglers after a timeout) and a baseline scheme that waits for each node to finish before training continues. Our results show a trade-off between training latency, accuracy, and privacy, parameterized by the timeout of the skipping scheme. Finally, we present results from training a logistic regression model on a real-world dataset.
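The latency trade-off between the two schemes can be illustrated with a minimal sketch. The following toy simulation is not the authors' implementation: the per-node latency model, the timeout semantics (the ring waits exactly up to the timeout at a skipped node), and the function name `ring_pass` are all assumptions for illustration; the DP noise injection analyzed in the paper is omitted.

```python
def ring_pass(latencies, timeout=None):
    """Simulate one token pass around a logical ring.

    `latencies[i]` is node i's local computation-plus-communication time.
    With a timeout (skipping scheme), a node whose latency exceeds the
    timeout is skipped: it contributes no model update and the pass only
    incurs the timeout at that node. Without a timeout (baseline scheme),
    every node is waited for and every node participates.

    Returns (total_latency, list_of_participating_nodes).
    """
    total = 0.0
    participants = []
    for node, t in enumerate(latencies):
        if timeout is not None and t > timeout:
            total += timeout  # waited until the timeout, then skipped the straggler
        else:
            total += t
            participants.append(node)
    return total, participants
```

For example, with latencies `[1.0, 5.0, 2.0]` and a timeout of `3.0`, the skipping scheme finishes the pass in 6.0 time units with node 1 excluded, while the baseline takes 8.0 with all nodes participating. Raising the timeout recovers more updates (better accuracy) at the cost of latency, which is the trade-off the paper parameterizes.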

Original language: English (US)
Title of host publication: 2022 IEEE Information Theory Workshop, ITW 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 6
ISBN (Electronic): 9781665483414
State: Published - 2022
Event: 2022 IEEE Information Theory Workshop, ITW 2022 - Mumbai, India
Duration: Nov 1, 2022 – Nov 9, 2022

Publication series

Name: 2022 IEEE Information Theory Workshop, ITW 2022


Conference: 2022 IEEE Information Theory Workshop, ITW 2022

All Science Journal Classification (ASJC) codes

  • Information Systems
  • Artificial Intelligence
  • Computational Theory and Mathematics
  • Computer Networks and Communications
