A Machine Learning Explainability Tutorial for Atmospheric Sciences
2024
Source: Artificial Intelligence for the Earth Systems, 3(1)
Details:
Journal Title: Artificial Intelligence for the Earth Systems
Description: With increasing interest in explaining machine learning (ML) models, this paper synthesizes many topics related to ML explainability. We distinguish explainability from interpretability, local from global explainability, and feature importance from feature relevance. We demonstrate and visualize different explanation methods, explain how to interpret them, and provide a complete Python package (scikit-explain) to allow future researchers and model developers to explore these explainability methods. The explainability methods include Shapley additive explanations (SHAP), Shapley additive global explanation (SAGE), and accumulated local effects (ALE). Our focus is primarily on Shapley-based techniques, which serve as a unifying framework for various existing methods to enhance model explainability. For example, SHAP unifies methods like local interpretable model-agnostic explanations (LIME) and tree interpreter for local explainability, while SAGE unifies the different variations of permutation importance for global explainability. We provide a short tutorial for explaining ML models using three disparate datasets: a convection-allowing model dataset for severe weather prediction, a nowcasting dataset for subfreezing road surface prediction, and satellite-based data for lightning prediction. In addition, we showcase the adverse effects that correlated features can have on the explainability of a model. Finally, we demonstrate the notion of evaluating the model impacts of feature groups instead of individual features. Evaluating feature groups mitigates the impacts of feature correlations and can provide a more holistic understanding of the model. All code, models, and data used in this study are freely available to accelerate the adoption of machine learning explainability in the atmospheric and other environmental sciences.
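The description's point about correlated features and grouped importance can be sketched with plain permutation importance. This is a toy illustration in pure Python, not the scikit-explain API: the dataset, the "trained" model, and its coefficients are all invented. When two features are near-duplicates, permuting one of them alone understates its importance (the model recovers the signal from its twin), while permuting the correlated pair jointly reveals the group's true impact.

```python
import random

random.seed(0)

# Toy dataset: x2 is nearly a copy of x1 (a correlated pair); x3 is independent.
# The true target depends only on x1 and x3.
n = 500
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [v + random.gauss(0, 0.05) for v in x1]
x3 = [random.gauss(0, 1) for _ in range(n)]
y = [a + b for a, b in zip(x1, x3)]
X = list(zip(x1, x2, x3))

def model(row):
    # Hypothetical "trained" model that split credit across the correlated pair.
    return 0.5 * row[0] + 0.5 * row[1] + 1.0 * row[2]

def mse(X, y):
    return sum((model(r) - t) ** 2 for r, t in zip(X, y)) / len(y)

def permute(X, cols):
    """Copy of X with the given columns jointly shuffled (same reordering)."""
    idx = list(range(len(X)))
    random.shuffle(idx)
    out = []
    for i, row in enumerate(X):
        row = list(row)
        for c in cols:
            row[c] = X[idx[i]][c]
        out.append(tuple(row))
    return out

base = mse(X, y)

# Permuting x1 alone barely hurts: most of its signal leaks back in via x2.
imp_x1 = mse(permute(X, [0]), y) - base
# Permuting the {x1, x2} group jointly exposes the pair's full contribution.
imp_group = mse(permute(X, [0, 1]), y) - base

print(f"single-feature importance of x1: {imp_x1:.2f}")
print(f"grouped importance of {{x1, x2}}: {imp_group:.2f}")
```

Here the grouped score comes out several times larger than the single-feature score, which is the mitigation the description refers to: group-level evaluation sidesteps the way correlated features dilute each other's individual importance.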
ISSN: 2769-7525
Rights Information: Other
Compliance: Submitted