Deeply Supervised Subspace Learning for Cross-Modal Material Perception of Known and Unknown Objects

Pengwen Xiong, Junjie Liao, Meng Chu Zhou, Aiguo Song, Peter X. Liu

Research output: Contribution to journal › Article › peer-review


To help robots understand and perceive an object's properties during non-contact robot-object interaction, this work proposes a Deeply Supervised Subspace Learning (DSSL) method. In contrast to previous work, it takes advantage of the low noise and fast response of non-contact sensors and extracts novel contactless feature information to retrieve cross-modal information, so as to estimate and infer the material properties of known as well as unknown objects. Specifically, a deeply supervised subspace cross-modal material retrieval model is trained to learn a common low-dimensional feature representation that captures the clustering structure among the different modal features of the same class of objects. Meanwhile, unknown objects are accurately perceived by an energy-based model, which forces an unlabeled novel object's features to be mapped outside the common low-dimensional feature space. The experimental results show that our approach is effective in comparison with other advanced methods.
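The energy-based rejection of unknown objects mentioned in the abstract can be sketched as follows. This is a minimal illustration of the general energy-score idea (E(x) = -logsumexp of the classifier logits), not the authors' implementation; the threshold value and the example logits are hypothetical:

```python
import numpy as np

def energy_score(logits):
    """Energy score E(x) = -logsumexp(logits).

    Low energy -> the sample fits some known class well;
    high energy -> the sample is likely an unknown (novel) object.
    Computed with the max-shift trick for numerical stability.
    """
    m = np.max(logits)
    return -(m + np.log(np.sum(np.exp(logits - m))))

def flag_unknown(logits, threshold=-5.0):
    """Flag a sample as an unknown object if its energy exceeds the threshold.

    The threshold here is a hypothetical value; in practice it would be
    tuned on validation data of known classes.
    """
    return energy_score(logits) > threshold

# A confident prediction over known material classes yields low energy,
# while flat, uncertain logits yield higher energy and get flagged.
confident = np.array([10.0, 0.0, 0.0])
uncertain = np.array([0.1, 0.0, 0.05])
```

A confident sample's energy is roughly the negative of its largest logit, so it stays well below the threshold, while near-uniform logits push the energy up toward zero and trigger the unknown flag.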

Original language: English (US)
Pages (from-to): 1-10
Number of pages: 10
Journal: IEEE Transactions on Industrial Informatics
State: Accepted/In press - 2022

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Information Systems
  • Computer Science Applications
  • Electrical and Electronic Engineering


Keywords

  • Correlation
  • Cross-modal retrieval
  • Feature extraction
  • Material properties
  • Robot sensing systems
  • Task analysis
  • Training
  • Visualization
  • deep subspace learning
  • material perception


