Correcting the corrected AIC




Details:

  • Journal Title:
    Statistics & Probability Letters
  • Description:
    Akaike’s Information Criterion (AIC) has a known tendency to select overfitted models. Hurvich and Tsai (1989) showed that the cause of this overfitting tendency lies in the asymptotic approximations used to derive AIC. To derive a bias-corrected version of AIC, Hurvich and Tsai (1989) evaluated the Kullback–Leibler (KL) divergence exactly for normal distributions, assuming the candidate family of models includes the true model. The resulting criterion, AICc, often outperforms its competitors (McQuarrie and Tsai, 1998) and has become a standard criterion recommended by many investigators (e.g., Burnham and Anderson, 2002, p. 66). However, an assumption that is not always emphasized in the derivation of AICc is that predictor values are the same in the training and validation samples. Rosset and Tibshirani (2020) call this the “Same-X” assumption, and note that many model selection criteria implicitly assume Same-X. In contrast, many applications of model selection fall under the “Random-X” assumption, in which predictor values differ from training to validation. Although the Same-X and Random-X distinction has been known for some time (see Rosset and Tibshirani, 2020 for a review of this literature), the generalization of standard model selection criteria to Random-X is more recent. For instance, the extension of Mallows’ Cp to Random-X has appeared only recently (Rosset and Tibshirani, 2020). In this paper, we derive a new criterion, AICm, which is an exactly unbiased estimate of the Kullback–Leibler-based criterion for regression models containing an arbitrary mix of Same-X and Random-X predictors. Such models include the Analysis of Covariance (ANCOVA) model. The multivariate generalization of AICm is also derived. (A minimal numerical sketch contrasting AIC and AICc follows the details list below.)
  • Source:
    Statistics & Probability Letters, 173, 109064
  • ISSN:
    0167-7152
  • Rights Information:
    CC BY-NC-ND
  • Compliance:
    Library
  • File Type:
    PDF (443.20 KB)
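
The abstract above contrasts AIC with the Hurvich–Tsai small-sample correction AICc. As a minimal, hedged sketch of that correction (it does not implement the paper's new AICm criterion, whose derivation is in the PDF itself), the Python snippet below fits polynomial regressions of increasing degree to a small simulated sample and prints both criteria. The simulated data, seed, and candidate degrees are illustrative assumptions; the formulas AIC = n ln(RSS/n) + 2k and AICc = AIC + 2k(k+1)/(n - k - 1) are the standard Gaussian-regression forms.

    # Minimal sketch: AIC vs. the Hurvich-Tsai correction AICc for
    # Gaussian linear regression.  AIC = n*ln(RSS/n) + 2k (additive
    # constants dropped) and AICc = AIC + 2k(k+1)/(n-k-1), where k
    # counts the regression coefficients plus the error variance.
    import numpy as np

    rng = np.random.default_rng(0)

    n = 25                                   # small sample, where AIC tends to overfit
    x = np.linspace(-1.0, 1.0, n)
    y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)   # true model is linear

    def aic_aicc(y, X):
        """Return (AIC, AICc) for an ordinary least-squares fit of y on X."""
        n, p = X.shape
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        k = p + 1                            # coefficients + error variance
        aic = n * np.log(rss / n) + 2 * k
        aicc = aic + 2 * k * (k + 1) / (n - k - 1)
        return aic, aicc

    for degree in range(1, 8):
        X = np.vander(x, degree + 1, increasing=True)   # columns 1, x, ..., x**degree
        aic, aicc = aic_aicc(y, X)
        print(f"degree {degree}: AIC = {aic:7.2f}  AICc = {aicc:7.2f}")

Because the extra penalty 2k(k+1)/(n - k - 1) grows without bound as the parameter count k approaches the sample size n, AICc discourages the overfitted high-degree fits that plain AIC can favor in small samples.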

Supporting Files

  • No Additional Files
