Abstract
In vertical federated learning (FL), the features of a data sample are distributed across multiple agents. As such, inter-agent collaboration can be beneficial not only during the learning phase, as is the case for standard horizontal FL, but also during the inference phase. A fundamental theoretical question in this setting is how to quantify the cost, or performance loss, of decentralization for learning and/or inference. In this paper, we study general supervised learning problems with any number of agents, and provide a novel information-theoretic quantification of the cost of decentralization in the presence of privacy constraints on inter-agent communication within a Bayesian framework. The cost of decentralization for learning and/or inference is shown to be captured by conditional mutual information terms involving the feature and label variables.
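As a hypothetical illustration (not taken from the paper), the kind of quantity the abstract refers to, a conditional mutual information such as I(X; Y | Z) between a feature X and a label Y given another agent's feature Z, can be computed directly for discrete variables from their joint distribution. The function name and indexing convention below are our own choices for this sketch:

```python
import numpy as np

def conditional_mutual_information(p_xyz):
    """I(X; Y | Z) in bits for a discrete joint distribution.

    p_xyz: 3-D array indexed as [x, y, z], with entries summing to 1.
    """
    p_z = p_xyz.sum(axis=(0, 1))   # marginal p(z)
    p_xz = p_xyz.sum(axis=1)       # marginal p(x, z)
    p_yz = p_xyz.sum(axis=0)       # marginal p(y, z)
    cmi = 0.0
    for x in range(p_xyz.shape[0]):
        for y in range(p_xyz.shape[1]):
            for z in range(p_xyz.shape[2]):
                pxyz = p_xyz[x, y, z]
                if pxyz > 0:
                    # p(x,y,z) * log2( p(x,y,z) p(z) / (p(x,z) p(y,z)) )
                    cmi += pxyz * np.log2(pxyz * p_z[z] / (p_xz[x, z] * p_yz[y, z]))
    return cmi

# Example: X, Y independent fair bits, Z = X XOR Y.
# X and Y are marginally independent, yet I(X; Y | Z) = 1 bit.
p = np.zeros((2, 2, 2))
for x in range(2):
    for y in range(2):
        p[x, y, x ^ y] = 0.25
print(conditional_mutual_information(p))  # → 1.0
```

The XOR example shows why the conditional term matters: conditioning on side information can create dependence between features and labels that is invisible marginally, which is precisely the kind of gap a decentralized agent without access to Z would suffer.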
| Original language | English (US) |
|---|---|
| Article number | 485 |
| Journal | Entropy |
| Volume | 24 |
| Issue number | 4 |
| State | Published - Apr 2022 |
| Externally published | Yes |
All Science Journal Classification (ASJC) codes
- Information Systems
- Mathematical Physics
- Physics and Astronomy (miscellaneous)
- General Physics and Astronomy
- Electrical and Electronic Engineering
Keywords
- Bayesian learning
- information-theoretic analysis
- vertical federated learning
Title
- An Information-Theoretic Analysis of the Cost of Decentralization for Learning and Inference under Privacy Constraints