TY - GEN
T1 - Illumination Adaptation for SAM to Achieve Accurate Segmentation of Images Taken in Low-Light Scenes
AU - Mu, Hongmin
AU - Zhou, Mengchu
AU - Cao, Zhengcai
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Achieving accurate segmentation in low-light scenes is challenging due to 1) the severe domain shift encountered when models trained on daylight data are applied to such scenes and 2) the lack of large-scale, fine-grained labels in low-light conditions. A promising approach is to leverage the generalization capabilities of segmentation foundation models such as the Segment Anything Model (SAM) to address the scarcity of annotated data. However, applying SAM to low-light scenes suffers from a severe domain shift, because SAM lacks an inductive bias for effectively transforming low-light features into natural-light ones. To address this issue, we propose to adapt SAM to low-light scenes. To reduce the reliance on labels for low-light data, we develop a self-training method that makes SAM generate source-free predictions. To reduce the domain gap between low-light target data and SAM's natural-light training data, we design a transformation head that enhances low-light features before SAM is applied. We further propose a domain shift compensation loss that trains our model to select an illumination-enhanced feature map that is optimal for domain adaptation. Experimental results demonstrate that our method outperforms the state of the art on the Dark Zurich and Nighttime Driving datasets. Code is available at https://github.com/HongminMu/SALS.
AB - Achieving accurate segmentation in low-light scenes is challenging due to 1) the severe domain shift encountered when models trained on daylight data are applied to such scenes and 2) the lack of large-scale, fine-grained labels in low-light conditions. A promising approach is to leverage the generalization capabilities of segmentation foundation models such as the Segment Anything Model (SAM) to address the scarcity of annotated data. However, applying SAM to low-light scenes suffers from a severe domain shift, because SAM lacks an inductive bias for effectively transforming low-light features into natural-light ones. To address this issue, we propose to adapt SAM to low-light scenes. To reduce the reliance on labels for low-light data, we develop a self-training method that makes SAM generate source-free predictions. To reduce the domain gap between low-light target data and SAM's natural-light training data, we design a transformation head that enhances low-light features before SAM is applied. We further propose a domain shift compensation loss that trains our model to select an illumination-enhanced feature map that is optimal for domain adaptation. Experimental results demonstrate that our method outperforms the state of the art on the Dark Zurich and Nighttime Driving datasets. Code is available at https://github.com/HongminMu/SALS.
UR - https://www.scopus.com/pages/publications/105016681240
UR - https://www.scopus.com/pages/publications/105016681240#tab=citedBy
U2 - 10.1109/ICRA55743.2025.11128183
DO - 10.1109/ICRA55743.2025.11128183
M3 - Conference contribution
AN - SCOPUS:105016681240
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 16977
EP - 16983
BT - 2025 IEEE International Conference on Robotics and Automation, ICRA 2025
A2 - Ott, Christian
A2 - Admoni, Henny
A2 - Behnke, Sven
A2 - Bogdan, Stjepan
A2 - Bolopion, Aude
A2 - Choi, Youngjin
A2 - Ficuciello, Fanny
A2 - Gans, Nicholas
A2 - Gosselin, Clement
A2 - Harada, Kensuke
A2 - Kayacan, Erdal
A2 - Kim, H. Jin
A2 - Leutenegger, Stefan
A2 - Liu, Zhe
A2 - Maiolino, Perla
A2 - Marques, Lino
A2 - Matsubara, Takamitsu
A2 - Mavrommati, Anastasia
A2 - Minor, Mark
A2 - O'Kane, Jason
A2 - Park, Hae Won
A2 - Park, Hae-Won
A2 - Rekleitis, Ioannis
A2 - Renda, Federico
A2 - Ricci, Elisa
A2 - Riek, Laurel D.
A2 - Sabattini, Lorenzo
A2 - Shen, Shaojie
A2 - Sun, Yu
A2 - Wieber, Pierre-Brice
A2 - Yamane, Katsu
A2 - Yu, Jingjin
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2025 IEEE International Conference on Robotics and Automation, ICRA 2025
Y2 - 19 May 2025 through 23 May 2025
ER -