LAGC: Lazily Aggregated Gradient Coding for Straggler-Tolerant and Communication-Efficient Distributed Learning

Jingjing Zhang, Osvaldo Simeone

Research output: Contribution to journal › Article › peer-review

17 Scopus citations

Abstract

Gradient-based distributed learning in parameter server (PS) computing architectures is subject to random delays due to straggling worker nodes and to possible communication bottlenecks between PS and workers. Solutions have been recently proposed to separately address these impairments based on the ideas of gradient coding (GC), worker grouping, and adaptive worker selection. This article provides a unified analysis of these techniques in terms of wall-clock time, communication, and computation complexity measures. Furthermore, in order to combine the benefits of GC and grouping in terms of robustness to stragglers with the communication and computation load gains of adaptive selection, novel strategies, named lazily aggregated GC (LAGC) and grouped-LAG (G-LAG), are introduced. Analysis and results show that G-LAG provides the best wall-clock time and communication performance while maintaining a low computational cost, for two representative distributions of the computing times of the worker nodes.
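The abstract describes combining gradient coding over worker groups (for straggler tolerance) with LAG-style adaptive skipping of gradient uploads (for communication savings). Below is a minimal, hedged sketch of that combination for a least-squares problem; the grouping layout, the worker-side skipping test, and names such as `lag_threshold` are illustrative assumptions, not the paper's exact LAGC/G-LAG schemes.

```python
# Illustrative sketch (not the paper's exact algorithm): fractional-repetition
# gradient coding over worker groups + a simplified worker-side LAG rule.
# Each group's data partition is replicated on its workers, so the server only
# needs the fastest replica per group (straggler tolerance, not simulated here).
# A group uploads a fresh gradient only if it changed enough since its last
# upload; otherwise the server reuses the stale copy, saving communication.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem: minimize 0.5 * ||X w - y||^2 / n
n, d = 1200, 20
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

num_groups = 4          # one data partition per group, replicated within the group
parts = np.array_split(np.arange(n), num_groups)

def group_gradient(w, g):
    """Partition g's contribution to the full average gradient."""
    idx = parts[g]
    Xg, yg = X[idx], y[idx]
    return Xg.T @ (Xg @ w - yg) / n

w = np.zeros(d)
lr = 0.05
lag_threshold = 1e-3    # assumed skipping threshold (illustrative)
stale = [group_gradient(w, g) for g in range(num_groups)]  # last uploaded gradient per group

for it in range(200):
    grad = np.zeros(d)
    polled = 0
    for g in range(num_groups):
        fresh = group_gradient(w, g)  # computed at the worker side
        # Simplified LAG-style test: upload only if the gradient moved enough.
        if np.linalg.norm(fresh - stale[g]) > lag_threshold:
            stale[g] = fresh
            polled += 1
        grad += stale[g]
    w -= lr * grad

print("groups polled at last iteration:", polled, "/", num_groups)
print("parameter error:", np.linalg.norm(w - w_true))
```

In this toy setup the skipping test fires less and less often as the iterates converge, which is the qualitative behavior adaptive selection is meant to exploit; the paper's schemes use more refined conditions and account for wall-clock time under random worker delays.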

Original language: English (US)
Article number: 9056809
Pages (from-to): 962-974
Number of pages: 13
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 32
Issue number: 3
DOIs
State: Published - Mar 2021

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence

Keywords

  • Adaptive selection
  • coding
  • distributed learning
  • gradient descent (GD)
  • grouping
