Unsupervised domain adaptation with adversarial residual transform networks

Guanyu Cai, Yuqin Wang, Lianghua He, Mengchu Zhou

Research output: Contribution to journal › Article › peer-review

58 Scopus citations

Abstract

Domain adaptation (DA) is widely used in learning problems that lack labels. Recent studies show that deep adversarial DA models, which come in symmetric and asymmetric architectures, can achieve remarkable performance improvements. However, the former have poor generalization ability, whereas the latter are very hard to train. In this article, we propose a novel adversarial DA method named adversarial residual transform networks (ARTNs) to improve generalization ability; it directly transforms source features into the space of target features. In this model, residual connections are used to share features and the adversarial loss is reconstructed, making the model more general and easier to train. Moreover, a special regularization term is added to the loss function to alleviate the vanishing gradient problem, which keeps the training process stable. A series of experiments on the Amazon review data set, digits data sets, and the Office-31 image data set shows that the proposed ARTN is comparable with state-of-the-art methods.
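The core idea described in the abstract, transforming source features toward the target-feature space through a residual connection so the network only has to model the cross-domain difference, can be illustrated with a minimal sketch. This is not the authors' implementation; the layer shape, the `tanh` nonlinearity, and all variable names are illustrative assumptions, and the adversarial discriminator and regularization term are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_transform(features, weight, bias):
    """Map source features toward the target-feature space.

    The residual (skip) connection adds a learned perturbation to the
    input, so the transform only has to model the *difference* between
    the two domains: output = x + T(x). With a zero-initialized T, the
    transform starts as the identity, which eases training.
    """
    perturbation = np.tanh(features @ weight + bias)  # learned shift T(x)
    return features + perturbation                    # residual connection

# Hypothetical 4-dimensional features for 3 source samples.
source_features = rng.normal(size=(3, 4))
weight = rng.normal(scale=0.1, size=(4, 4))
bias = np.zeros(4)

adapted = residual_transform(source_features, weight, bias)
print(adapted.shape)  # same dimensionality as the input: (3, 4)
```

In the full ARTN, the parameters of such a transform would be trained against a domain discriminator (the adversarial loss), so that adapted source features become indistinguishable from target features.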

Original language: English (US)
Article number: 8833506
Pages (from-to): 3073-3086
Number of pages: 14
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 31
Issue number: 8
DOIs
State: Published - Aug 2020

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence

Keywords

  • Adversarial neural networks
  • residual connections
  • transfer learning
  • unsupervised domain adaptation (DA)

