Abstract
Typical adversarial-training-based unsupervised domain adaptation (UDA) methods are vulnerable when the source and target datasets are highly complex or exhibit a large discrepancy between their data distributions. Recently, several Lipschitz-constraint-based methods have been explored, in which satisfying Lipschitz continuity yields strong performance on the target domain. However, these methods lack a mathematical analysis of why a Lipschitz constraint benefits UDA and usually perform poorly on large-scale datasets. In this article, we develop the principle of the Lipschitz constraint further by analyzing how it affects the error bound of UDA. A connection between the two is established, and we illustrate how Lipschitzness reduces the error bound. A local smooth discrepancy is defined to measure the Lipschitzness of a target distribution in a pointwise way. When constructing a deep end-to-end model, our proposed optimization strategy accounts for three critical factors to ensure the effectiveness and stability of UDA: the number of target-domain samples, the sample dimension, and the batch size. Experimental results demonstrate that our model performs well on several standard benchmarks, and our ablation study confirms that the number of target-domain samples, the sample dimension, and the batch size indeed have a strong impact on the ability of Lipschitz-constraint-based methods to handle large-scale datasets.
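The pointwise local smooth discrepancy lends itself to a brief illustration. Below is a minimal PyTorch sketch, assuming (in the spirit of virtual-adversarial-style smoothness measures) that the discrepancy compares a model's prediction on a target sample with its prediction on a small perturbation of that sample; the function name `local_smooth_discrepancy`, the random perturbation, and the KL divergence are illustrative assumptions, not the paper's exact definition.

```python
import torch
import torch.nn.functional as F

def local_smooth_discrepancy(model, x_t, epsilon=1e-2):
    """Hypothetical sketch of a pointwise local smooth discrepancy.

    For each target sample, compare the model's prediction with its
    prediction on a small random perturbation of the sample; a large
    divergence indicates poor local Lipschitzness around that point.
    (Illustrative only; the paper's formulation may differ.)
    """
    with torch.no_grad():
        p_clean = F.softmax(model(x_t), dim=1)

    # Random perturbation, normalized per sample and scaled by epsilon.
    delta = torch.randn_like(x_t)
    norms = delta.flatten(1).norm(dim=1).view(-1, *[1] * (x_t.dim() - 1))
    delta = epsilon * delta / (norms + 1e-12)

    p_perturbed = F.softmax(model(x_t + delta), dim=1)

    # KL divergence between clean and perturbed predictions, averaged over the batch.
    return F.kl_div(p_perturbed.log(), p_clean, reduction="batchmean")
```

In such a sketch, minimizing this quantity on unlabeled target data would encourage locally smooth (Lipschitz-like) predictions around each target point, which is the intuition the abstract attributes to the local smooth discrepancy.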
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 4181-4195 |
| Number of pages | 15 |
| Journal | IEEE Transactions on Neural Networks and Learning Systems |
| Volume | 34 |
| Issue number | 8 |
| DOIs | |
| State | Published - Aug 1 2023 |
All Science Journal Classification (ASJC) codes
- Software
- Computer Science Applications
- Computer Networks and Communications
- Artificial Intelligence
Keywords
- Lipschitz constraint
- local smooth discrepancy
- transfer learning
- unsupervised domain adaptation (UDA)