
Saturday, 2 July 2016

My statistical debate with an eminent professor of psychological and brain sciences



                                                                       



                           PROFESSOR JOHN K. KRUSCHKE, INDIANA UNIVERSITY




                                           


HERE IS MY STATISTICAL DEBATE WITH PROFESSOR KRUSCHKE

This item was recently published by John Kruschke on the Junior ISBA Facebook page, which has a readership of over 1000 Bayesian statisticians. 'Fixing' seems to be an appropriate word. The 95% HDI later referred to by John may, or may not, have been based upon non-standard assumptions. My understanding, from John's later discussion of 'heavy tails', is that it is much too broad when compared with the standard procedures. The state of the art concerning the Bayesian investigation of hypotheses is described by Baskurt and Evans in their recent paper in Bayesian Analysis. In regression situations with vague prior knowledge and under appropriate normality assumptions, their procedures are consistent with the usual F, chi-squared, and t-tests, but with weights of evidence which supplement the usual p-values. John seems to be unaware of this.

Fixing the intercept at zero in Bayesian linear regression:
In DBDA2E and in workshops, I present an example of simple linear regression: predicting an adult's weight from his/her height. A participant at a recent workshop suggested that maybe the y-intercept should be fixed at zero, because a person of zero height should have zero weight. I replied that the linear trend is really only intended as a description over the range of reasonable adult heights, not to be extrapolated all the way to a height of zero. Nevertheless, in principle it would be easy to make the restriction in the JAGS model specification. But then it was pointed out to me that the JAGS model specification in DBDA2E standardizes the variables -- to make the MCMC more efficient -- and setting the y intercept of the standardized y to zero is (of course) not the same as setting the y intercept of the raw scale to zero. This blog post shows how to set the y intercept on the raw scale to zero.
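For concreteness, here is a minimal sketch of the kind of constraint being discussed, written in the R/JAGS style of DBDA2E. It is not the code from the book or from John's blog post: the variable names (zx, zy, mx, my, sx, sy, zbeta1, and so on) and the priors are illustrative assumptions. With standardized variables, the raw-scale coefficients are beta1 = zbeta1*sy/sx and beta0 = my + sy*zbeta0 - zbeta1*sy*mx/sx, so requiring beta0 = 0 amounts to defining the standardized intercept zbeta0 deterministically from the standardized slope.

# A minimal sketch (not the DBDA2E or blog-post code).  The variables are
# standardized for efficient MCMC, and the raw-scale intercept is forced to
# zero by making the standardized intercept a deterministic function of the
# standardized slope.  All names and priors here are illustrative assumptions.
library(rjags)
modelString <- "
model {
  for ( i in 1:N ) {
    zy[i] ~ dt( zbeta0 + zbeta1 * zx[i] , 1/zsigma^2 , nu )  # heavy-tailed noise
  }
  # Raw scale:  beta1 = zbeta1*sy/sx ,  beta0 = my + sy*zbeta0 - zbeta1*sy*mx/sx .
  # Setting beta0 = 0 and solving for zbeta0 gives a deterministic node:
  zbeta0 <- zbeta1 * mx/sx - my/sy
  zbeta1 ~ dnorm( 0 , 1/10^2 )       # prior on the standardized slope only
  zsigma ~ dunif( 1.0E-3 , 1.0E+3 )
  nu     ~ dexp( 1/30.0 )
  beta1  <- zbeta1 * sy/sx           # slope reported on the raw scale
}
"
# dataList would supply N, the standardized zx and zy, and the raw-scale means
# and SDs (mx, my, sx, sy) used in the standardization, e.g.:
# jagsModel   <- jags.model( textConnection(modelString) , data = dataList , n.chains = 3 )
# codaSamples <- coda.samples( jagsModel , variable.names = "beta1" , n.iter = 10000 )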

Thomas Leonard Maybe a quadratic through zero would be more sensible. With linear regression, the zero would presumably be refuted by the usual t-test.
John K. Kruschke Of course, we wouldn't use a frequentist t test! 
Diego Andrés Pérez Ruiz Do you mean Region of Practical Equivalence (ROPE) ?
Thomas Leonard Maybe we don't need anything as fancy as that; I'm just eyeballing the data.
John K. Kruschke When the intercept is not fixed at zero, Figure 17.3 of DBDA2E shows the estimate of the intercept (see image). The 95% HDI on the estimate includes zero, so we definitely would not want to reject zero. More comments follow...
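(As a side note on how such an HDI statement can be checked from one's own MCMC output, here is a small generic R sketch; it is not code from DBDA2E, and beta0samp is a hypothetical vector of posterior draws of the intercept.)

# Generic 95% HDI from MCMC samples: the narrowest interval containing 95% of them.
hdiOfSamples <- function( samples , credMass = 0.95 ) {
  sorted <- sort( samples )
  n      <- length( sorted )
  m      <- ceiling( credMass * n )              # points inside each candidate interval
  widths <- sorted[ m:n ] - sorted[ 1:(n-m+1) ]  # width of each candidate interval
  lo     <- which.min( widths )                  # index of the narrowest one
  c( lower = sorted[ lo ] , upper = sorted[ lo + m - 1 ] )
}
# hdi <- hdiOfSamples( beta0samp )   # beta0samp: hypothetical posterior draws of the intercept
# Zero is not excluded, in this sense, when hdi["lower"] <= 0 & 0 <= hdi["upper"].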

John K. Kruschke There are two different issues here. One is assessing an intercept of zero. There are two Bayesian approaches to assessing null values; you can read about those in Chapter 12 of DBDA2E, and in this manuscript: http://ssrn.com/abstract=2606016 (continued in next comment).



John K. Kruschke The second issue is what kind of trend curve to use for describing the relation of weight to height for adults, and whether that curve should be constrained to go through the point (0, 0). We might want to use a linear trend to describe the relation for adults, only over the reasonable range of adult heights, without fixing the intercept at zero, because the linear trend is not intended to be extrapolated beyond the reasonable range. In other words, the trend is purely descriptive; the trend is not a process model of growth since the moment of conception. If, on the other hand, one wants a trend to capture growth over time, that is another issue entirely. Perhaps in that case we would want the data to include age, and then the trend would be some curved function of age and height, and in that case we might fix the intercept at zero.
Thomas Leonard I believe, like my former colleague George Box, that Bayesian theory needs to be combined with sensible practical data analysis!!! The conclusions in your first diagram certainly ain't sensible! The state of the art for assessing null hypotheses was developed by Michael Evans and his student and is highlighted in my Personal History of Bayesian Statistics (Wiley 2014). Have you investigated the sensitivity of your own procedures to the clear heteroscedasticity in the data? When the data are heights and weights it would usually be better to take logs.
John K. Kruschke I am indeed trying to emphasize sensible practical data analysis. (!!!) Please be aware of the multiple possible purposes of these examples. The intended purpose was to illustrate how to achieve constraints in JAGS code; that's all. (!) A different purpose entirely is to create some scientifically meaningful model of the relation of weight to height, and that is why I was so careful to distinguish those issues in my comments. The issue of model building (what shape curve, what type of noise distribution, etc.) is way beyond the scope of this simple example. Indeed, the fictitious data were actually generated by a mixture of three bivariate normal distributions, and no simple trend curve will adequately capture the actual generating process. (Notice also that while this example assumes homogeneous variance across the range of x, it uses a heavy-tailed noise distribution to accommodate outliers.) Finally, the issue of testing null hypotheses is a red herring for the purposes of this example -- it's just trying to show how to implement a constraint in JAGS.
Thomas Leonard I guess I've said my bit already. Good luck, Tom.
Thomas Leonard Anything at all sensible would refute the zero, Mr. Kruschke. An unconstrained linear regression through these data points would hit the height axis at about x=58, and this would strongly refute x=0. According to recent work by Michael Stephens, appropriately scaled Bayes factors are effectively equivalent to significance tests. Therefore, the p-value can be reported along with the weight of evidence. The Bayesian-Laplacian approach is extremely subtle indeed, and needs to be subtly applied.
Thomas Leonard Or take logs of both variables and try to fit a linear regression. Attention: Diego Andrés Pérez Ruiz, Juan Carlos Martínez Ovando.
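(As a quick illustration of that suggestion, one might refit on the log scale and inspect the residual spread. This is only a sketch; d is a hypothetical data frame with height and weight columns, not the data behind the figure.)

# Illustrative only: fit weight against height on the log-log scale and check
# whether the residual spread looks more nearly constant.  'd' is hypothetical.
fitLog <- lm( log(weight) ~ log(height) , data = d )
summary( fitLog )                        # the slope is the log-log regression coefficient
plot( fitted(fitLog) , resid(fitLog) ,   # look for roughly constant residual spread
      xlab = "Fitted log(weight)" , ylab = "Residual" )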
John K. Kruschke Mr Leonard, I don't know why you've latched onto being so contentious about a point that wasn't even being made in the original post. And I think you are attributing to me conclusions that I would not abide. I've illustrated how to use a simple model (albeit with a heavy-tailed noise distribution to accommodate outliers) and interpret its posterior distribution. There is no claim whatsoever that this simple illustration is the end of any realistic data analysis. Of course one should check that the model is a reasonable description of the data, and one may want to consider other models, which in turn may lead to other interpretations of the data. May I return the wish of good luck, sincerely.
Thomas Leonard As I say, I've already said my bit on this, John, and I think that my comments should be convincing enough for applied statisticians. I didn't realise that you were an eminent Professor of Psychology and Neuroscience. I'm very glad that you don't actually agree with the conclusions to be drawn from your first graph. I'm sure that you are much more careful when drawing conclusions which might affect the well-being of patients in psychology, psychiatry, and neurology.


                                             FURTHER DISCUSSION

Thomas Leonard
