Good practices, trade-offs, and precautions for model diagnostics in integrated stock assessments

Filetype: PDF (2.70 MB)



Details:

  • Journal Title:
    Fisheries Research
  • Description:
    Carvalho et al. (2021) provided a “cookbook” for implementing contemporary model diagnostics, which included convergence checks, examinations of fits to data, retrospective and hindcasting analyses, likelihood profiling, and model-free validation. However, it remains unclear whether these widely used diagnostics behave consistently in the presence of model misspecification, and whether there are trade-offs in diagnostic performance that the assessment community should consider. This illustrative study uses a statistical catch-at-age simulation framework to compare diagnostic performance across a spectrum of correctly specified and misspecified assessment models that incorporate compositional, survey, and catch data. The results contextualize how reliably common diagnostic tests perform given the degree and nature of known model issues, including parameter misspecification, process misspecification, and combinations thereof, as well as the trade-offs among model fit, prediction skill, and retrospective bias that analysts must weigh when evaluating diagnostic performance (illustrative sketches of two such statistics follow the details list). A surprising number of misspecified models passed certain diagnostic tests, although for most tests failure became more frequent as misspecification increased. Nearly all models that failed multiple tests were misspecified, underscoring the value of examining multiple diagnostics during model evaluation. Diagnostic performance was best (most sensitive) when recruitment variability was low and historical exploitation rates were high, likely because this scenario induces greater contrast in the data, particularly in the indices of abundance. These results suggest caution when using standalone diagnostic results as the basis for selecting a “best” assessment model, choosing a set of models to include in an ensemble, or informing model weights. The discussion advises stock assessors to consider the interplay among these multiple dynamics. Future work should evaluate how the resolution of the production function, the quality and quantity of data time series, and exploitation history can influence diagnostic performance.
  • Source:
    Fisheries Research, 281, 107206
  • ISSN:
    0165-7836
  • Rights Information:
    Accepted Manuscript
  • Compliance:
    Submitted
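
The description above refers to retrospective bias and hindcast prediction skill; in the diagnostics literature these are commonly summarized with Mohn's rho and the mean absolute scaled error (MASE), respectively. Below is a minimal Python sketch of both statistics, assuming assessment estimates are held in plain year-indexed dictionaries and arrays; the function names and data layout are illustrative and are not taken from the study's code.

    import numpy as np

    def mohns_rho(full_est, peel_est):
        """Mohn's rho: mean relative difference between each retrospective
        'peel' estimate and the full-model estimate for the same year.

        full_est : dict of year -> estimate (e.g., SSB) from the full model.
        peel_est : dict of peel p -> (dict of year -> estimate) from the
                   model refit with the most recent p years of data removed.
        """
        terminal = max(full_est)  # terminal year of the full model
        rel = [(est[terminal - p] - full_est[terminal - p]) / full_est[terminal - p]
               for p, est in peel_est.items()]
        return float(np.mean(rel))

    def mase(observed, predicted):
        """Mean absolute scaled error for hindcast prediction skill: the
        model's prediction error scaled by that of a naive random-walk
        forecast (last observation carried forward). MASE < 1 means the
        model out-predicts the naive baseline."""
        observed = np.asarray(observed, dtype=float)
        predicted = np.asarray(predicted, dtype=float)
        mae_model = np.mean(np.abs(observed[1:] - predicted[1:]))
        mae_naive = np.mean(np.abs(observed[1:] - observed[:-1]))
        return float(mae_model / mae_naive)

As a usage sketch, mohns_rho({2019: 100, 2020: 110, 2021: 120}, {1: {2020: 115}, 2: {2019: 95}}) averages the relative errors of the two peels. A Mohn's rho near zero and a MASE below one are the conventional "pass" signals for these two diagnostics.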

Supporting Files

  • No Additional Files