Abstract
In vertical federated learning (FL), the features of a data sample are distributed across multiple agents. As such, inter-agent collaboration can be beneficial not only during the learning phase, as in standard horizontal FL, but also during the inference phase. A fundamental theoretical question in this setting is how to quantify the cost, or performance loss, of decentralization for learning and/or inference. In this paper, we study general supervised learning problems with any number of agents, and we provide a novel information-theoretic quantification of the cost of decentralization, within a Bayesian framework, in the presence of privacy constraints on inter-agent communication. The cost of decentralization for learning and/or inference is shown to be given by conditional mutual information terms involving the feature and label variables.
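As an illustrative sketch of the type of result stated above (the two-agent setting and notation here are assumptions for illustration, not taken verbatim from the paper): suppose two agents hold features $X_1$ and $X_2$ of a sample with label $Y$. Under log-loss, the optimal Bayesian predictor attains average risk $H(Y \mid X_1, X_2)$ when both features are available, and $H(Y \mid X_1)$ when agent 1 must infer alone. The cost of fully decentralized inference at agent 1 is then the conditional mutual information

$$
H(Y \mid X_1) - H(Y \mid X_1, X_2) = I(X_2; Y \mid X_1),
$$

i.e., a conditional mutual information term involving the feature and label variables, of the kind referenced in the abstract.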
| Original language | English (US) |
| --- | --- |
| Article number | 485 |
| Journal | Entropy |
| Volume | 24 |
| Issue number | 4 |
| DOIs | |
| State | Published - Apr 2022 |
All Science Journal Classification (ASJC) codes
- Information Systems
- Electrical and Electronic Engineering
- General Physics and Astronomy
- Mathematical Physics
- Physics and Astronomy (miscellaneous)
Keywords
- Bayesian learning
- information-theoretic analysis
- vertical federated learning