Learning Smooth Representation for Unsupervised Domain Adaptation

Guanyu Cai, Lianghua He, Mengchu Zhou, Hesham Alhumade, Die Hu

Research output: Contribution to journal › Article › peer-review



Typical adversarial-training-based unsupervised domain adaptation (UDA) methods are vulnerable when the source and target datasets are highly complex or exhibit a large discrepancy between their data distributions. Several Lipschitz-constraint-based methods have recently been explored: satisfying Lipschitz continuity guarantees remarkable performance on a target domain, but these methods lack a mathematical analysis of why a Lipschitz constraint is beneficial to UDA and usually perform poorly on large-scale datasets. In this article, we take the principle of utilizing a Lipschitz constraint further by analyzing how it affects the error bound of UDA. We establish a connection between the two and illustrate how Lipschitzness reduces the error bound. We define a local smooth discrepancy to measure the Lipschitzness of a target distribution in a pointwise way. To ensure effective and stable UDA when constructing a deep end-to-end model, our proposed optimization strategy accounts for three critical factors: the number of target-domain samples, the sample dimension, and the batch size. Experimental results demonstrate that our model performs well on several standard benchmarks. Our ablation study shows that these three factors indeed greatly impact the ability of Lipschitz-constraint-based methods to handle large-scale datasets.
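To make the idea of a pointwise smoothness measure concrete, here is a minimal sketch of how Lipschitzness of a classifier's predictions could be probed at individual target samples: perturb each sample within a small radius and record the worst-case shift of the predicted class posterior. The classifier (`predict`), the perturbation radius, and the use of an L1 distance between posteriors are illustrative assumptions, not the paper's exact definition of local smooth discrepancy.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def predict(X, W):
    # Hypothetical classifier: linear logits followed by softmax.
    return softmax(X @ W)

def local_smooth_discrepancy(X, W, radius=0.1, n_dirs=8):
    """Pointwise smoothness proxy (a sketch, not the paper's formula):
    worst-case prediction shift under small random perturbations of
    each unlabeled target sample, averaged over the batch."""
    p = predict(X, W)
    worst = np.zeros(len(X))
    for _ in range(n_dirs):
        # Random direction scaled to lie on a sphere of the given radius.
        eps = rng.normal(size=X.shape)
        eps *= radius / np.linalg.norm(eps, axis=1, keepdims=True)
        q = predict(X + eps, W)
        # L1 distance between class posteriors before/after perturbation.
        worst = np.maximum(worst, np.abs(p - q).sum(axis=1))
    return float(worst.mean())

X_t = rng.normal(size=(64, 10))   # synthetic unlabeled target samples
W = rng.normal(size=(10, 3))      # synthetic classifier weights
d = local_smooth_discrepancy(X_t, W)
print(d)  # small value => predictions are locally smooth around target data
```

Minimizing such a quantity over target samples during training would encourage a locally Lipschitz decision function on the target domain; the batch size and sample dimension enter directly through the shapes of `X_t` and the perturbations, which hints at why the abstract flags them as critical factors.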

Original language: English (US)
Pages (from-to): 4181-4195
Number of pages: 15
Journal: IEEE Transactions on Neural Networks and Learning Systems
Issue number: 8
State: Published - Aug 1 2023

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence


Keywords
  • Lipschitz constraint
  • local smooth discrepancy
  • transfer learning
  • unsupervised domain adaptation (UDA)


