The NOAA IR serves as an archival repository of NOAA-published products including scientific findings, journal articles, guidelines, recommendations, or other information authored or co-authored by NOAA or funded partners.
As a repository, the NOAA IR retains documents in their original published format to ensure public access to scientific information.
Improving Best Track Verification of Tropical Cyclones: A New Metric to Identify Forecast Consistency
2023
Source: Weather and Forecasting, 38(6), 817-831
Details:
Journal Title: Weather and Forecasting
Description: This paper introduces a new tool for verifying tropical cyclone (TC) forecasts. Tropical cyclone forecasts made by operational centers and by numerical weather prediction (NWP) models have been objectively verified for decades. Typically, the mean absolute error (MAE) and/or MAE skill are calculated relative to values within the operational center's best track. Yet the MAE can be strongly influenced by outliers and yield misleading results. This paper therefore introduces an assessment of consistency between the MAE skill and two other measures of forecast performance. This "consistency metric" objectively evaluates the forecast-error evolution as a function of lead time based on thresholds applied to 1) the MAE skill; 2) the frequency of superior performance (FSP), which indicates how often one forecast outperforms another; and 3) the median absolute error (MDAE) skill. The utility and applicability of the consistency metric are validated by applying it to four research and forecasting applications. Overall, this consistency metric is a helpful tool to guide analysis and increase confidence in results in a straightforward way. By augmenting the commonly used MAE and MAE skill with this consistency metric, and by creating a single scorecard with consistency-metric results for TC track, intensity, and significant-wind-radii forecasts, the impact of observing systems, new modeling systems, or model upgrades on TC-forecast performance can be evaluated both holistically and succinctly. This could in turn help forecasters learn from challenging cases and accelerate and optimize developments and upgrades in NWP models.

Significance Statement: Evaluating the impact of observing systems, new modeling systems, or model upgrades on TC forecasts is vital to ensure more rapid and accurate implementations and optimizations. To do so, errors between model forecasts and observed TC parameters are calculated. Historically, analysis of these errors has relied heavily on one or two metrics: the mean absolute error (MAE) and/or MAE skill. Yet doing so can lead to misleading conclusions if the error distributions are skewed, which often occurs (e.g., for a poorly forecast TC). This paper presents a new, straightforward way to combine useful information from several different metrics to enable a more holistic assessment of forecast errors beyond the MAE and MAE skill alone.
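The three quantities named in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes two forecasts evaluated on the same homogeneous sample of cases, defines skill as percent improvement over a baseline forecast, and uses hypothetical function and variable names.

```python
import numpy as np

def consistency_inputs(err_a, err_b):
    """Sketch of MAE skill, MDAE skill, and FSP for two forecasts.

    err_a, err_b: absolute errors of forecast A and baseline forecast B
    on the same cases (a homogeneous sample is assumed here).
    Returns percentages; positive skill means A improves on B.
    """
    err_a = np.abs(np.asarray(err_a, dtype=float))
    err_b = np.abs(np.asarray(err_b, dtype=float))

    # Mean and median absolute errors for each forecast.
    mae_a, mae_b = err_a.mean(), err_b.mean()
    mdae_a, mdae_b = np.median(err_a), np.median(err_b)

    # Skill expressed as percent improvement of A over baseline B.
    mae_skill = 100.0 * (mae_b - mae_a) / mae_b
    mdae_skill = 100.0 * (mdae_b - mdae_a) / mdae_b

    # Frequency of superior performance: how often A beats B case by case.
    fsp = 100.0 * np.mean(err_a < err_b)

    return mae_skill, mdae_skill, fsp
```

The consistency metric described in the paper then applies thresholds to these three values as a function of forecast lead time; the thresholds themselves are specified in the article, not here.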
ISSN: 0882-8156; 1520-0434
Rights Information: Other
Compliance: Submitted