TY - JOUR
T1 - Deep dictionary learning with reconstruction for texture recognition
AU - Xiong, Pengwen
AU - Zhang, Ke
AU - Shi, Zhi
AU - Zhou, Meng Chu
AU - Song, Aiguo
N1 - Publisher Copyright:
© The Author(s) 2025.
PY - 2025/12
Y1 - 2025/12
N2 - Texture recognition underpins critical applications in industrial quality control, robotic manipulation, and biomedical imaging. Traditional deep dictionary learning methods for texture recognition often emphasize deep feature extraction. However, they tend to lose crucial features as model depth increases, which can reduce their overall effectiveness. To address this issue, we propose a dictionary-reconstruction-based deep learning approach that incorporates a novel hybrid fusion method designed to enhance the accuracy of texture recognition. Our approach successively fuses multimodal and multi-level features. By reconstructing dictionaries learned at different levels, we integrate both deep and intuitive features. Additionally, we introduce a grouping optimization technique, based on single-sample learning, to train these reconstructed dictionaries, thereby improving feature learning and training efficiency. The proposed approach fuses feature data from various multimodal sources and constructs dictionaries at different learning levels, which enables effective feature fusion across these levels. We evaluate our approach against recent deep learning methods on the LMT-108 and SpectroVision datasets. It achieves accuracy rates of 97.7% and 89.4%, respectively, outperforming its peers and demonstrating robustness on diverse and challenging data.
AB - Texture recognition underpins critical applications in industrial quality control, robotic manipulation, and biomedical imaging. Traditional deep dictionary learning methods for texture recognition often emphasize deep feature extraction. However, they tend to lose crucial features as model depth increases, which can reduce their overall effectiveness. To address this issue, we propose a dictionary-reconstruction-based deep learning approach that incorporates a novel hybrid fusion method designed to enhance the accuracy of texture recognition. Our approach successively fuses multimodal and multi-level features. By reconstructing dictionaries learned at different levels, we integrate both deep and intuitive features. Additionally, we introduce a grouping optimization technique, based on single-sample learning, to train these reconstructed dictionaries, thereby improving feature learning and training efficiency. The proposed approach fuses feature data from various multimodal sources and constructs dictionaries at different learning levels, which enables effective feature fusion across these levels. We evaluate our approach against recent deep learning methods on the LMT-108 and SpectroVision datasets. It achieves accuracy rates of 97.7% and 89.4%, respectively, outperforming its peers and demonstrating robustness on diverse and challenging data.
KW - Deep dictionary learning
KW - Dictionary reconstruction
KW - Feature fusion
KW - Texture recognition
UR - https://www.scopus.com/pages/publications/105013867944
U2 - 10.1038/s41598-025-16456-w
DO - 10.1038/s41598-025-16456-w
M3 - Article
C2 - 40850961
AN - SCOPUS:105013867944
SN - 2045-2322
VL - 15
JO - Scientific Reports
JF - Scientific Reports
IS - 1
M1 - 31164
ER -