Have you ever had a brilliant idea for a paper---one with an interesting and important question and a believable identification strategy? After a lot of work, you acquire and manage to clean up the perfect data set. You run that regression. Maybe you even get the expected sign and a reasonable estimate of the coefficient... but the standard errors are just too big. You run a few more regressions, and no matter what you try, you just can't get that p-value below .05, so you end up putting the project in the filing cabinet. There just isn't enough variation in the data to identify anything. <sigh>
Well, it seems this actually isn't happening as often as it should. Too many papers are being published that cannot be reproduced. Daniel Benjamin, a behavioral economist at USC, and 71 coauthors from a variety of fields have just published a paper whose one-sentence summary is:
We propose to change the default P-value threshold for statistical significance for claims of new discoveries from 0.05 to 0.005.
This would certainly require larger sample sizes if we wanted to keep publishing the same number of papers with "significant" results. My personal view: to start, why not do away with the norm of reporting just standard errors with the little stars? Why not instead publish p-values? That way readers could easily and quickly distinguish between a p-value of .049 and one of .0049. I wonder whether this would make a difference in terms of which papers get published in the different journals.
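For concreteness, here is a minimal sketch (in Python, with made-up numbers chosen purely for illustration) of what reporting the exact p-value looks like, rather than a star cutoff: a coefficient that is "significant" at .05 but would not clear the proposed .005 bar.

    # A minimal sketch of reporting an exact p-value instead of star cutoffs.
    # The coefficient, standard error, and degrees of freedom are made-up
    # numbers for illustration only.
    from scipy import stats

    beta_hat = 1.8   # hypothetical coefficient estimate
    se = 0.9         # hypothetical standard error
    df = 250         # hypothetical residual degrees of freedom

    t_stat = beta_hat / se
    p_value = 2 * stats.t.sf(abs(t_stat), df)  # two-sided p-value

    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    print("clears .05 threshold: ", p_value < 0.05)
    print("clears .005 threshold:", p_value < 0.005)

With the p-value printed to a few decimal places, a reader can see at a glance whether a result just sneaks under the conventional cutoff or clears it comfortably.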
And now, because I have no shame, I will share the song that I can't get out of my head as I am writing about p's. Have a listen to this and share with your kids. :)
(h/t David McKenzie, again)