Understanding Spatial Context in Convolutional Neural Networks using Explainable Methods: Application to Interpretable GREMLIN


Filetype: PDF, 3.19 MB



Details:

  • Journal Title:
    Artificial Intelligence for the Earth Systems
  • Description:
    Convolutional neural networks (CNNs) are opening new possibilities in the realm of satellite remote sensing. CNNs are especially useful for capturing the information in spatial patterns that is evident to the human eye but has eluded classical pixelwise retrieval algorithms. However, the black-box nature of CNN predictions makes them difficult to interpret, hindering their trustworthiness. This paper explores a new way to simplify CNNs that allows them to be implemented in a fully transparent and interpretable framework. This clarity is accomplished by moving the inner workings of the CNN out into a feature engineering step and replacing the CNN with a regression model. The specific example of GREMLIN (GOES Radar Estimation via Machine Learning to Inform NWP) is used to demonstrate that such simplifications are possible and to show the benefits of the interpretable approach. GREMLIN translates images of GOES radiances and lightning into images of radar reflectivity, and previous research used Explainable AI (XAI) approaches to explain some aspects of how GREMLIN makes predictions. However, the Interpretable GREMLIN model shows that XAI missed several strategies, and XAI does not provide guarantees on how the model will respond when confronted with new scenarios. In contrast, the interpretable model establishes well-defined relationships between inputs and outputs, offering a clear mapping of the spatial context utilized by the CNN to make accurate predictions and providing guarantees on how the model will respond to new inputs. The significance of this work is that it provides a new approach for developing trustworthy AI models.
  • Source:
    Artificial Intelligence for the Earth Systems (2023)
  • ISSN:
    2769-7525
  • Rights Information:
    Other
  • Compliance:
    Library
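
Illustrative sketch (not from the paper): the Description above summarizes Interpretable GREMLIN only conceptually, as moving the CNN's inner workings into a feature engineering step and replacing the CNN with a regression model. The Python sketch below shows one minimal way such a pipeline can look, assuming multi-scale neighborhood averages of synthetic GOES-like radiance and lightning fields as hand-crafted spatial-context features and an ordinary linear regression mapping them to per-pixel radar reflectivity. Every feature choice, window size, and variable name here is an assumption for illustration; the actual GREMLIN inputs and engineered features are defined in the paper, not here.

    # Minimal sketch, assuming NumPy, SciPy, and scikit-learn are available.
    # Names and feature choices are hypothetical, not GREMLIN's actual design.
    import numpy as np
    from scipy.ndimage import uniform_filter
    from sklearn.linear_model import LinearRegression

    def spatial_context_features(field, sizes=(1, 3, 9)):
        """Neighborhood means at several scales stand in for the spatial
        context a CNN would learn; the window sizes are assumed values."""
        return np.stack([uniform_filter(field, size=s) for s in sizes], axis=-1)

    # Synthetic stand-ins for one GOES infrared channel, a lightning density
    # field, and the radar reflectivity target (the real model uses several
    # GOES bands plus GLM lightning).
    rng = np.random.default_rng(0)
    ir = rng.normal(size=(64, 64))
    lightning = rng.poisson(0.1, size=(64, 64)).astype(float)
    reflectivity = (0.5 * uniform_filter(ir, size=9)
                    + 2.0 * lightning
                    + rng.normal(scale=0.1, size=(64, 64)))

    # Per-pixel design matrix: three scales per input field, six features total.
    X = np.concatenate(
        [spatial_context_features(ir), spatial_context_features(lightning)],
        axis=-1,
    ).reshape(-1, 6)
    y = reflectivity.ravel()

    # A transparent regression replaces the CNN; its coefficients spell out how
    # each spatial-context feature contributes to the predicted reflectivity.
    model = LinearRegression().fit(X, y)
    for name, coef in zip(["ir_1", "ir_3", "ir_9", "ltg_1", "ltg_3", "ltg_9"],
                          model.coef_):
        print(f"{name}: {coef:+.3f}")

Because the resulting model is an explicit linear combination of named features, its coefficients state exactly how each spatial scale contributes and fully determine its response to any new input, which is the kind of guarantee the Description contrasts with post hoc XAI explanations.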

Supporting Files

  • No Additional Files
