Abstract
Domain generalization (DG) uses multiple source (training) domains to learn a model that generalizes well to unseen domains. Existing approaches to DG warrant more scrutiny over (i) their ability to imagine data beyond the source domains and (ii) their ability to cope with scarce training data. To address these shortcomings, we propose a novel framework, interpolation robustness, in which we view each training domain as a point on a domain manifold and learn class-specific representations that are domain invariant across all interpolations between domains. We use this representation to propose a generic domain generalization approach that can be seamlessly combined with many state-of-the-art DG methods. Through extensive experiments, we show that our approach can enhance the performance of several methods in both the conventional and the limited-training-data settings.
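The core idea above (representations that stay invariant along interpolations between source domains) can be sketched as a simple penalty term. This is a hypothetical illustration, not the paper's actual objective: `encoder_head`, the variance-based penalty, and the linear feature interpolation are all assumptions made for the sketch.

```python
import numpy as np

def interpolation_invariance_penalty(z_a, z_b, encoder_head, n_lambdas=5, rng=None):
    """Hypothetical sketch of an interpolation-robustness penalty.

    z_a, z_b: feature vectors of same-class samples from two source domains,
              viewed as two points between which we interpolate.
    encoder_head: callable mapping a feature vector to a representation.
    Returns the variance of the representations along the interpolation
    path; driving it to zero encourages domain-invariant representations.
    """
    rng = np.random.default_rng() if rng is None else rng
    lams = rng.uniform(0.0, 1.0, size=n_lambdas)  # random interpolation weights
    # Representations at interpolated points between the two domains.
    reps = np.stack([encoder_head((1.0 - lam) * z_a + lam * z_b) for lam in lams])
    # Penalize disagreement among representations along the path.
    return float(np.mean((reps - reps.mean(axis=0)) ** 2))
```

In a full training loop, a penalty of this kind would be added to the usual classification loss, so the encoder is pushed toward representations that do not drift as inputs move between source domains.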
| Field | Value |
|---|---|
| Original language | English (US) |
| Pages (from-to) | 1039-1054 |
| Number of pages | 16 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 222 |
| State | Published - 2023 |
| Externally published | Yes |
| Event | 15th Asian Conference on Machine Learning, ACML 2023, Istanbul, Turkey |
| Duration | Nov 11, 2023 to Nov 14, 2023 |
All Science Journal Classification (ASJC) codes
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability
Keywords
- domain generalization
- invariant representation
- latent interpolation
- limited data
- robustness