Abstract
Soft sensing is a promising solution for predicting key quality variables in various industries. One of the major obstacles to building an accurate data-driven soft sensor is the scarcity of labeled data and the difficulty of extracting useful information from unlabeled data. To mitigate this issue, this work proposes a semi-supervised soft sensing method called the dual attention-aided cooperative deep spatiotemporal-feature-extraction network. It leverages an encoder-decoder structure to explicitly exploit the spatial and temporal information in both labeled and unlabeled data, so that the unlabeled data can be used efficiently to improve prediction performance. The encoder captures detailed spatiotemporal dependencies, while a dual attention mechanism is developed for feature extraction. In addition, a gated neuron is placed between the encoder and decoder to boost model accuracy by quantifying the contributions of the extracted features and adaptively fusing them. To optimize our model while incorporating both labeled and unlabeled data, a mixed-form loss is employed in the decoder. Experiments are carried out on a real-life industrial process, and the results demonstrate that our proposed model achieves state-of-the-art performance.
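The abstract does not give implementation details, but the two mechanisms it names, the gated neuron between encoder and decoder and the mixed-form semi-supervised loss, can be illustrated with a minimal PyTorch sketch. Everything below (the `GatedFusion` module name, the sigmoid gating, the reconstruction term, and the weight `alpha`) is an assumption for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the two ideas named in the abstract:
# (1) a gated neuron that adaptively fuses two extracted feature streams, and
# (2) a mixed-form loss combining a supervised prediction term on labeled
#     samples with an unsupervised reconstruction term on all samples.
# Names, dimensions, and loss weighting are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedFusion(nn.Module):
    """Learn a gate in [0, 1] that weighs two feature streams before decoding."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(torch.cat([feat_a, feat_b], dim=-1)))
        return g * feat_a + (1.0 - g) * feat_b  # adaptive, element-wise fusion

def mixed_loss(y_pred, y_true, x_recon, x, labeled_mask, alpha=0.5):
    """Supervised MSE on labeled samples + reconstruction MSE on every sample."""
    supervised = F.mse_loss(y_pred[labeled_mask], y_true[labeled_mask])
    unsupervised = F.mse_loss(x_recon, x)
    return supervised + alpha * unsupervised

# Example usage with random tensors standing in for two encoder feature streams.
fusion = GatedFusion(dim=32)
fused = fusion(torch.randn(8, 32), torch.randn(8, 32))  # -> shape (8, 32)
```

In this sketch the reconstruction target is simply the input itself; the abstract only states that the loss in the decoder mixes labeled and unlabeled terms, so the exact unsupervised objective may differ in the paper.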
Original language | English (US) |
---|---|
Pages (from-to) | 2184-2190 |
Number of pages | 7 |
Journal | IEEE Robotics and Automation Letters |
Volume | 10 |
Issue number | 3 |
DOIs | |
State | Published - 2025 |
All Science Journal Classification (ASJC) codes
- Control and Systems Engineering
- Biomedical Engineering
- Human-Computer Interaction
- Mechanical Engineering
- Computer Vision and Pattern Recognition
- Computer Science Applications
- Control and Optimization
- Artificial Intelligence
Keywords
- deep learning
- dual attention mechanism
- semi-supervised learning
- soft sensing
- spatiotemporal feature