Keyphrases
100%: Information-Theoretic Bounds, Jensen-Shannon Divergence, Generalization Gap
75%: Data Distribution
50%: Training Data, Source Distribution, Training Loss
25%: Information Theory, Divergence, Two-Parameter, Transfer Learning, Numerical Examples, Mutual Information, Kullback-Leibler Divergence, Target Distribution, Testing Data, Learning Data, Population Loss, Risk Minimization, Cumulant Generating Function, Excess Risk, Unbounded Loss Function, New Upper Bound, Domain Shift
Computer Science
100%: Data Distribution
66%: Training Dataset, Source Distribution
33%: Transfer Learning, Numerical Example, Mutual Information, Risk Minimization, Kullback-Leibler Divergence, Weighted Average, Target Distribution, Population Loss, Testing Data Set, Parameter Family
Mathematics
100%: Jensen-Shannon Divergence
75%: Data Distribution
50%: Upper Bound
25%: Numerical Example, Kullback-Leibler Divergence, Loss Function, Generating Function, Mutual Information, Parameter Family, Transfer Learning, Training Dataset, Weighted Average, Excess Risk, Target Distribution π, Cumulants
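The divergences named above reduce to a short computation on discrete distributions. Below is a minimal illustrative sketch of the Jensen-Shannon divergence built from the Kullback-Leibler divergence, using a weighted mixture of a source and a target distribution; the function names, the `alpha` weight, and the example distributions are chosen here for illustration and are not taken from the paper.

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions with matching support (in nats)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0  # terms with p = 0 contribute nothing to the sum
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def js_divergence(p, q, alpha=0.5):
    """Jensen-Shannon divergence via the mixture m = alpha*p + (1-alpha)*q.

    alpha = 0.5 gives the standard JS divergence; other weights correspond
    to a weighted average of the two distributions, as in the keyphrases
    above. Unlike KL, the result stays finite even under domain shift,
    since the mixture m is positive wherever either input is.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    m = alpha * p + (1.0 - alpha) * q
    return alpha * kl_divergence(p, m) + (1.0 - alpha) * kl_divergence(q, m)

# Hypothetical source and target distributions under domain shift
source = [0.7, 0.2, 0.1]
target = [0.4, 0.4, 0.2]
print(js_divergence(source, target))  # bounded by log(2) when alpha = 0.5
```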