Abstract
In autonomous driving scenarios, traffic sign recognition requires incremental learning methods that continuously expand a model's capabilities under real-world conditions. However, most existing deep learning methods store and replay a small fraction of old data, which is inapplicable in scenarios where storage is forbidden. This work contributes to a deeper understanding of exemplar-free class-incremental learning and its application to traffic sign classification through systematic analysis. Empirically, it reveals that concept drift is the crucial cause of failure for distillation-based methods, and that restricting the feature space yields stable prototypes that serve as anchors in the proposed prototype distillation loss. To achieve a better trade-off between the stability and plasticity of a deep neural network, this work proposes three components that enhance the training strategy: prototype distillation, post-rebalancing, and contrastive loss minimization. Experimental results on benchmark datasets show that the proposed training method outperforms other state-of-the-art exemplar-free methods. In some settings, it even surpasses replay-based algorithms and better mitigates the problem of expanding the model's capacity in traffic application scenarios.
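The prototype-distillation idea described above, keeping a frozen class-mean feature per old class and penalizing new features for drifting away from it, might be sketched as follows. This is a minimal illustration under assumed conventions (mean-feature prototypes, a squared-distance penalty); the function names and exact loss form are not taken from the paper.

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    """Mean feature vector per class; in exemplar-free settings these
    would be computed at the end of a task and then frozen as anchors."""
    return np.stack([features[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def prototype_distillation_loss(features, labels, prototypes):
    """Mean squared distance between each sample's current feature and
    the frozen prototype of its class, discouraging concept drift."""
    anchors = prototypes[labels]              # (B, D) anchor per sample
    return np.mean(np.sum((features - anchors) ** 2, axis=1))
```

In an incremental-learning loop, this term would be added to the classification loss so that features of previously learned classes stay close to their stored anchors while new classes are learned.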
| Original language | English (US) |
|---|---|
| Journal | IEEE Transactions on Vehicular Technology |
| State | Accepted/In press - 2025 |
All Science Journal Classification (ASJC) codes
- Automotive Engineering
- Aerospace Engineering
- Computer Networks and Communications
- Electrical and Electronic Engineering
Keywords
- Class Incremental Learning
- Exemplar-free
- Knowledge Distillation
- Traffic Sign Classification