COPYRIGHT: TOM LEONARD. EDINBURGH SCOTLAND. JANUARY 2002
THIS IS MY SCRAPBOOK. It contains background information for my 2022 research project with Professor John S. J. Hsu (UCSB)
Gelman, Hwang and Vehtari (2013)
A New Look at the Statistical Model Identification (Akaike, 1974)
The number 2 is chosen arbitrarily.
There is no proper justification for the magical number 2 in the early papers. A silly Bayesian attempt by Akaike (1978). AIC is approximated by DIC in many cases; therefore DIC also lacks theoretical justification.
The best validation of AIC is its asymptotic equivalence with cross-validation (which is itself arbitrary).
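Stone's equivalence can be illustrated numerically: for a Gaussian linear model, AIC and minus twice the leave-one-out log predictive score track each other. A minimal sketch with invented data (the polynomial-degree comparison is my own illustration, not taken from the papers above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
x = np.linspace(-2, 2, n)
y = 1.0 + 0.5 * x + rng.normal(0, 0.4, n)  # true model is linear

def design(x, degree):
    # polynomial design matrix: columns 1, x, x^2, ...
    return np.vander(x, degree + 1, increasing=True)

def fit_gaussian_ols(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = np.mean(resid ** 2)  # MLE of the error variance
    return beta, sigma2

def log_density(y, mu, sigma2):
    return -0.5 * (np.log(2 * np.pi * sigma2) + (y - mu) ** 2 / sigma2)

results = {}
for degree in (1, 2, 5):
    X = design(x, degree)
    beta, sigma2 = fit_gaussian_ols(X, y)
    loglik = log_density(y, X @ beta, sigma2).sum()
    p = degree + 2  # regression coefficients plus the error variance
    aic = -2 * loglik + 2 * p
    # leave-one-out log predictive score, refitting n times
    loo = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        b_i, s2_i = fit_gaussian_ols(X[mask], y[mask])
        loo += log_density(y[i], X[i] @ b_i, s2_i)
    results[degree] = (aic, -2 * loo)

for degree, (aic, loo_dev) in results.items():
    print(f"degree {degree}: AIC = {aic:.1f}, -2 * LOO log score = {loo_dev:.1f}")
```

The two criteria are on the same deviance scale and typically agree on the ranking of the candidate models as n grows, which is the content of the asymptotic equivalence.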
MEMORIES OF MERVYN STONE (1932-2021)
Schwarz's BIC (1978) is unconvincing, as are Bayes factors (unintuitive; sensitivity problems with respect to the prior; Lindley's 1957 paradox: Bayes factors can favour a model when it is refuted at any sensible significance level). The asymptotic derivation is open to criticism.
Schwarz's note has been cited almost 50,000 times in the literature.
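Lindley's paradox is easy to exhibit numerically. A sketch for the standard normal-mean setup (H0: mu = 0 against H1: mu ~ N(0, tau^2), known unit variance; the closed-form Bayes factor below is the textbook expression for this conjugate case, and the numbers are for illustration only):

```python
import math

def bayes_factor_01(z, n, tau2=1.0):
    """BF01 for H0: mu = 0 against H1: mu ~ N(0, tau2),
    with xbar observed, known unit variance, z = sqrt(n) * xbar."""
    shrink = n * tau2 / (1 + n * tau2)
    return math.sqrt(1 + n * tau2) * math.exp(-0.5 * z * z * shrink)

z = 1.96  # hold the data exactly at the 5% two-sided significance boundary
for n in (10, 100, 10_000, 1_000_000):
    print(f"n = {n:>9}: BF01 = {bayes_factor_01(z, n):.2f}")
```

Although every sample is "just significant" at the 5% level, BF01 grows without bound in n, so the Bayes factor increasingly favours the null that the significance test rejects.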
DID TURING INVENT THE BAYES FACTOR?
Jack Good was a disciple of Alan Turing at Bletchley Park, and Good has emphasized Turing’s use of the Bayes factor in several publications. However, the claim that Turing ought to be credited with the invention of the Bayes factor appears to be flat-out wrong, for at least two reasons. As stated in Etz & Wagenmakers (2017):
“When the hypotheses in question are simple point hypotheses, the Bayes factor reduces to a likelihood ratio, a method of measuring evidential strength which dates back as far as Johann Lambert in 1760 (Lambert and DiLaura, 2001) and Daniel Bernoulli in 1777 (Kendall et al., 1961; see Edwards, 1974 for a historical review); C. S. Peirce had specifically called it a
measure of ‘weight of evidence’ as far back as 1878 (Peirce, 1878; see Good, 1979). Alan Turing also independently developed likelihood ratio tests using Bayes’ theorem, deriving decibans to describe the intensity of the evidence, but this approach was again based on
the comparison of simple versus simple hypotheses. For example, Turing used decibans when decrypting the Enigma codes to infer the identity of a given letter in German military communications during World War II (Turing, 1941/2012). As Good (1979) notes, Jeffreys’s Bayes factor approach to testing hypotheses “is especially ‘Bayesian’ [because] either [hypothesis] is composite” (p. 393).
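Turing's deciban is simply 10 times the base-10 logarithm of the likelihood ratio for a simple-versus-simple comparison, so independent pieces of evidence add on this scale. A tiny illustration:

```python
import math

def decibans(likelihood_ratio):
    # Turing's unit: 10 * log10 of the likelihood ratio (simple vs simple)
    return 10.0 * math.log10(likelihood_ratio)

# evidence that multiplies the odds by 2 is worth about 3 decibans
print(round(decibans(2.0), 2))    # 3.01
# sequential independent evidence adds on the deciban scale
print(round(decibans(2.0) + decibans(5.0), 2))  # 10.0, same as decibans(10.0)
```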
Posterior Bayes Factors (Murray Aitkin)
Aitkin, M. (1991). Posterior Bayes factors (with discussion). Journal of the Royal Statistical Society, Series B, 53, 111–142.
Aitkin, M. (1997). The calibration of P-values, posterior Bayes factors and the AIC from the posterior distribution of the likelihood. Statistics and Computing, 7, 253–261.
Aitkin, M. (2010). Statistical Inference: An Integrated Bayesian/Likelihood Approach. Chapman & Hall/CRC, Boca Raton, FL.
Aitkin, M., Boys, R. J. and Chadwick, T. (2005). Bayesian point null hypothesis testing via the posterior likelihood ratio. Statistics and Computing, 15, 217–230.
We propose an inferential approach that compares the entire posterior densities of the log-likelihoods. In many cases these are approximated by

L = maximised log-likelihood − (1/2) × (chi-squared variate with p degrees of freedom),

where p denotes the number of unknown parameters in the model. But the posterior expectation then subtracts 1/2 per parameter, and not 1 (the value corresponding to AIC = −2 log L + 2p), and not the square root of 2 advocated by Murray Aitkin.
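The chi-squared approximation to the posterior of the log-likelihood can be checked by simulation. A minimal sketch for a single-parameter Poisson model (data invented for illustration; under a flat prior the posterior of the rate is Gamma, so the posterior of the deviance can be sampled exactly):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, rate = 200, 3.0
y = rng.poisson(rate, n)      # invented Poisson data
s = y.sum()
theta_hat = s / n             # MLE of the Poisson rate

def loglik(theta):
    # Poisson log-likelihood, up to a constant not involving theta
    return s * np.log(theta) - n * theta

# posterior under a flat prior on theta: Gamma(s + 1, rate = n)
theta = rng.gamma(s + 1, 1.0 / n, size=100_000)
drop = 2.0 * (loglik(theta_hat) - loglik(theta))

print("posterior mean of 2 * (max loglik - loglik):", drop.mean())
print("posterior mean log-likelihood deficit:", drop.mean() / 2)
ks = stats.kstest(drop, stats.chi2(df=1).cdf)
print("KS distance from chi-squared(1):", round(ks.statistic, 3))
```

With p = 1 the deviance drop is approximately chi-squared with one degree of freedom, so its posterior mean is close to 1 and the expected log-likelihood deficit is close to 1/2, exactly the p/2 penalty discussed above.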
We propose consideration of a FRACTILE MATCHING ZETA PROBABILITY.
Our criteria will be compared with AIC and DIC, and their frequency properties investigated. Poisson log-linear models are currently under consideration, following Leonard, Papasouliotis and Main (Journal of Geophysics, 2001) and Streftaris (2000).
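As a placeholder for that setting, here is a minimal sketch (invented data, not the seismology data of Leonard, Papasouliotis and Main) comparing two Poisson log-linear models by AIC:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(2)
n = 150
x = rng.normal(size=n)
y = rng.poisson(np.exp(0.5 + 0.8 * x))   # invented log-linear Poisson data

def neg_loglik(beta, X):
    # Poisson log-linear model: log mean is X @ beta
    eta = X @ beta
    return -np.sum(y * eta - np.exp(eta) - gammaln(y + 1))

def aic(X):
    p = X.shape[1]
    fit = minimize(neg_loglik, np.zeros(p), args=(X,), method="BFGS")
    return 2 * fit.fun + 2 * p            # AIC = -2 logL + 2p

X0 = np.ones((n, 1))                      # intercept-only model
X1 = np.column_stack([np.ones(n), x])     # model with the covariate
a0, a1 = aic(X0), aic(X1)
print("AIC, intercept only:", round(a0, 1))
print("AIC, with covariate:", round(a1, 1))
```

Since the data are generated with a genuine covariate effect, AIC prefers the larger model here; the proposed criteria would be compared against this baseline in the same way.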