Deeply Supervised Subspace Learning for Cross-Modal Material Perception of Known and Unknown Objects

Pengwen Xiong, Junjie Liao, Meng Chu Zhou, Aiguo Song, Peter X. Liu

Research output: Contribution to journal › Article › peer-review

24 Scopus citations

Abstract

To help robots understand and perceive an object's properties during noncontact robot-object interaction, this article proposes a deeply supervised subspace learning method. In contrast to previous work, it takes advantage of the low noise and fast response of noncontact sensors and extracts novel contactless feature information to retrieve cross-modal information, so as to estimate and infer the material properties of both known and unknown objects. Specifically, a deeply supervised subspace cross-modal material retrieval model is trained to learn a common low-dimensional feature representation that captures the clustering structure among the different modal features of the same class of objects. Meanwhile, unknown objects are accurately perceived by an energy-based model, which forces an unlabeled novel object's features to be mapped outside the common low-dimensional feature space. The experimental results show that our approach is effective in comparison with other advanced methods.
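The abstract's core idea — projecting features from different sensing modalities into a shared low-dimensional subspace for retrieval, and flagging as unknown any sample whose embedding falls far from every known class — can be illustrated with a minimal sketch. This is not the paper's implementation: the projection matrices `W_visual` and `W_audio`, the dimensions, and the distance-based energy test are all hypothetical stand-ins for the learned deep networks and the energy-based model described in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pretrained projections mapping each modality's features
# into a shared low-dimensional subspace (stand-ins for the trained
# deep networks of the paper).
D_VIS, D_AUD, D_SUB = 8, 6, 3
W_visual = rng.normal(size=(D_VIS, D_SUB))
W_audio = rng.normal(size=(D_AUD, D_SUB))

def embed(x, W):
    """Project a modality-specific feature vector into the common
    subspace and L2-normalize it, so cosine similarity becomes a
    plain dot product."""
    z = x @ W
    return z / np.linalg.norm(z)

def retrieve(query_z, gallery_z):
    """Cross-modal retrieval: return the index of the gallery embedding
    (e.g., from another modality) most similar to the query embedding."""
    sims = gallery_z @ query_z
    return int(np.argmax(sims))

def is_unknown(z, class_centroids, energy_threshold):
    """Energy-style novelty test: an embedding whose distance to every
    known class centroid exceeds the threshold is treated as an
    unknown material."""
    dists = np.linalg.norm(class_centroids - z, axis=1)
    return float(dists.min()) > energy_threshold
```

In this toy setup, retrieval across modalities reduces to nearest-neighbor search in the shared subspace, and the unknown-object decision is a single threshold on distance to the known-class centroids, loosely mirroring the role of the paper's energy-based model.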

Original language: English (US)
Pages (from-to): 2259-2268
Number of pages: 10
Journal: IEEE Transactions on Industrial Informatics
Volume: 19
Issue number: 2
DOIs
State: Published - Feb 1 2023

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Information Systems
  • Computer Science Applications
  • Electrical and Electronic Engineering

Keywords

  • Cross-modal retrieval
  • deep subspace learning
  • machine learning
  • material perception
