TY - JOUR
T1 - Interpretable machine learning for weather and climate prediction
T2 - A review
AU - Yang, Ruyi
AU - Hu, Jingyu
AU - Li, Zihao
AU - Mu, Jianli
AU - Yu, Tingzhao
AU - Xia, Jiangjiang
AU - Li, Xuhong
AU - Dasgupta, Aritra
AU - Xiong, Haoyi
N1 - Publisher Copyright:
© 2024 Elsevier Ltd
PY - 2024/12/1
Y1 - 2024/12/1
N2 - Advanced machine learning models have recently achieved high predictive accuracy for weather and climate prediction. However, these complex models often lack inherent transparency and interpretability, acting as “black boxes” that impede user trust and hinder further model improvements. As such, interpretable machine learning techniques have become crucial in enhancing the credibility and utility of weather and climate modeling. In this paper, we review current interpretable machine learning approaches applied to meteorological predictions. We categorize methods into two major paradigms: (1) post-hoc interpretability techniques that explain pre-trained models, such as perturbation-based, game-theoretic, and gradient-based attribution methods; and (2) inherently interpretable models designed from scratch, using architectures such as tree ensembles and explainable neural networks. We summarize how each technique provides insights into the predictions, uncovering novel meteorological relationships captured by machine learning. Lastly, we discuss research challenges and provide future perspectives on achieving deeper mechanistic interpretations aligned with physical principles, developing standardized evaluation benchmarks, integrating interpretability into iterative model development workflows, and providing explainability for large foundation models.
AB - Advanced machine learning models have recently achieved high predictive accuracy for weather and climate prediction. However, these complex models often lack inherent transparency and interpretability, acting as “black boxes” that impede user trust and hinder further model improvements. As such, interpretable machine learning techniques have become crucial in enhancing the credibility and utility of weather and climate modeling. In this paper, we review current interpretable machine learning approaches applied to meteorological predictions. We categorize methods into two major paradigms: (1) post-hoc interpretability techniques that explain pre-trained models, such as perturbation-based, game-theoretic, and gradient-based attribution methods; and (2) inherently interpretable models designed from scratch, using architectures such as tree ensembles and explainable neural networks. We summarize how each technique provides insights into the predictions, uncovering novel meteorological relationships captured by machine learning. Lastly, we discuss research challenges and provide future perspectives on achieving deeper mechanistic interpretations aligned with physical principles, developing standardized evaluation benchmarks, integrating interpretability into iterative model development workflows, and providing explainability for large foundation models.
KW - Climate prediction
KW - Interpretability
KW - Machine learning
KW - Post-hoc interpretability
KW - Weather prediction
UR - http://www.scopus.com/inward/record.url?scp=85203821376&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85203821376&partnerID=8YFLogxK
U2 - 10.1016/j.atmosenv.2024.120797
DO - 10.1016/j.atmosenv.2024.120797
M3 - Review article
AN - SCOPUS:85203821376
SN - 1352-2310
VL - 338
JO - Atmospheric Environment
JF - Atmospheric Environment
M1 - 120797
ER -