Abstract
Lane detection, a crucial foundation for the autonomous driving of Rubber-Tired Gantries (RTGs), plays a vital role in automating manual container terminals. Deep-learning-based lane detection methods offer robust, generalizable global feature extraction and therefore handle complex scenarios well, but the high cost of preparing large-scale labeled data has limited their application to RTG lane detection. This paper therefore presents a cost-effective, scalable detection method based on incremental learning. Specifically, lane images are collected online, and reliable segmentation labels are generated for some of them by an image-processing-based lane detection method. Next, a semi-supervised clustering approach constructs a dynamically expanding sample pool, ensuring that the selected samples are representative and diverse. Finally, a lane detection network is self-trained on all labeled and unlabeled samples. Extensive experimental results show that our proposed method outperforms existing methods, achieving a lane detection accuracy of 94.87% and a detection success rate of 99.06%, with the potential for further improvement as the amount of data grows.
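A rough sketch of the workflow summarized above is given below, assuming a toy brightness-threshold labeler in place of the paper's image-processing-based detector, a hand-crafted image descriptor, and plain k-means in place of the semi-supervised clustering; function names such as `classical_lane_segmentation` and `self_train`, along with all parameters, are illustrative placeholders rather than the authors' implementation.

```python
# Illustrative sketch of the incremental pipeline: pseudo-label new frames,
# cluster them to keep the sample pool representative and diverse, then
# self-train on labeled + unlabeled data. All components are stand-ins.
import numpy as np
from sklearn.cluster import KMeans

def classical_lane_segmentation(img):
    # Placeholder image-processing labeler: bright pixels ~ lane markings.
    gray = img.mean(axis=2)
    return (gray > gray.mean() + gray.std()).astype(np.uint8)

def image_descriptor(img):
    # Toy global descriptor (per-channel mean/std); a learned embedding
    # would normally be used here.
    return np.concatenate([img.mean(axis=(0, 1)), img.std(axis=(0, 1))])

def select_representative(images, n_clusters=4):
    # Cluster descriptors and keep the frame nearest each centroid, so the
    # expanding pool covers distinct scene types instead of near-duplicates.
    feats = np.stack([image_descriptor(im) for im in images])
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(feats)
    picked = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        if members.size:
            d = np.linalg.norm(feats[members] - km.cluster_centers_[c], axis=1)
            picked.append(int(members[d.argmin()]))
    return picked

def self_train(model_state, sample_pool, unlabeled):
    # Placeholder for self-training the segmentation network on the pool of
    # pseudo-labeled samples plus the remaining unlabeled frames.
    model_state["seen_labeled"] = len(sample_pool)
    model_state["seen_unlabeled"] += len(unlabeled)
    return model_state

def incremental_round(new_images, sample_pool, model_state):
    # One online round: pick representative frames, pseudo-label them,
    # grow the pool, then retrain.
    for i in select_representative(new_images):
        sample_pool.append((new_images[i], classical_lane_segmentation(new_images[i])))
    return sample_pool, self_train(model_state, sample_pool, new_images)

# Usage with synthetic frames standing in for RTG camera images.
rng = np.random.default_rng(0)
frames = [rng.random((64, 64, 3)) for _ in range(20)]
pool, state = incremental_round(frames, [], {"seen_unlabeled": 0})
print(len(pool), state)
```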
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 3168-3179 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Circuits and Systems for Video Technology |
| Volume | 34 |
| Issue number | 5 |
| DOIs | |
| State | Published - May 1 2024 |
All Science Journal Classification (ASJC) codes
- Media Technology
- Electrical and Electronic Engineering
Keywords
- Container terminal
- contrastive learning
- incremental learning
- lane detection
- rubber-tired gantry