Correcting the corrected AIC

File Type: PDF [443.20 KB]


  • Journal Title:
    Statistics & Probability Letters
  • Personal Author:
  • NOAA Program & Office:
  • Description:
    Akaike’s Information Criterion (AIC) has a known tendency to select overfitted models. Hurvich and Tsai (1989) showed that the cause of this overfitting tendency lies in the asymptotic approximations used to derive AIC. To derive a bias-corrected version of AIC, Hurvich and Tsai (1989) evaluated the Kullback–Leibler (KL) divergence exactly for normal distributions, assuming the candidate family of models includes the true model. The resulting criterion, AICc, often outperforms its competitors (McQuarrie and Tsai, 1998) and has become a standard criterion recommended by many investigators (e.g., Burnham and Anderson, 2002, p. 66). However, an assumption that is not always emphasized in the derivation of AICc is that predictor values are the same in the training and validation samples. Rosset and Tibshirani (2020) call this the “Same-X” assumption, and note that many model selection criteria implicitly assume Same-X. In contrast, many applications of model selection fall under the “Random-X” assumption, in which predictor values differ from training to validation. Although the Same-X and Random-X distinction has been known for some time (see Rosset and Tibshirani, 2020 for a review of this literature), the generalization of standard model selection criteria to Random-X is more recent. For instance, the extension of Mallows’ Cp to Random-X has appeared only recently (Rosset and Tibshirani, 2020). In this paper, we derive a new criterion, AICm, which is an exactly unbiased estimate of the Kullback–Leibler-based criterion for regression models containing an arbitrary mix of Same-X and Random-X predictors. Such models include the Analysis of Covariance (ANCOVA) model. The multivariate generalization of AICm is also derived.
  • Keywords:
  • Source:
    Statistics & Probability Letters, 173, 109064
  • DOI:
  • ISSN:
  • Format:
  • Publisher:
  • Document Type:
  • Funding:
  • License:
  • Rights Information:
  • Compliance:
  • Main Document Checksum:
  • Download URL:
  • File Type:
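
The abstract's starting point is the familiar small-sample correction of Hurvich and Tsai (1989): for a Gaussian linear model with k estimated parameters fit to n observations, AIC = 2k − 2·ln L, and AICc adds the penalty 2k(k+1)/(n−k−1). As a minimal sketch of that baseline (not the paper's new AICm criterion, which requires the Same-X/Random-X machinery), the following computes AIC and AICc for an OLS fit; the function name and toy data are illustrative assumptions, not from the record:

```python
import numpy as np

def aic_aicc(y, X):
    """AIC and the Hurvich-Tsai corrected AICc for a Gaussian linear
    model fit by ordinary least squares (illustrative sketch)."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / n            # ML estimate of error variance
    k = p + 1                             # regression coefficients plus sigma^2
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    aic = 2 * k - 2 * loglik
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction term
    return aic, aicc

# Toy comparison: the AICc penalty grows much faster than AIC's as the
# model adds parameters relative to the sample size.
rng = np.random.default_rng(0)
n = 20
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)
X_small = np.column_stack([np.ones(n), x])                       # true model
X_big = np.column_stack([np.ones(n)] + [x**j for j in range(1, 8)])  # overfitted
for X in (X_small, X_big):
    aic, aicc = aic_aicc(y, X)
    print(f"k = {X.shape[1] + 1}: AIC = {aic:.2f}, AICc = {aicc:.2f}")
```

The gap between AICc and AIC is small for the 2-predictor model but large for the 7th-degree polynomial, which is exactly the overfitting protection the abstract refers to. The paper's contribution, AICm, modifies this derivation for predictors that are random rather than fixed across training and validation.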

Supporting Files

  • No Additional Files
