The syllogism that ate social science
Statistical Modeling, Causal Inference, and Social Science 2018-04-28
I’ve been thinking about this one for a while and expressed it most recently in this blog comment:
There’s a piece of reasoning which I’ve not seen explicitly stated but which I think is how many people think. It goes like this:

– The researcher does a study which he or she thinks is well designed.
– The researcher obtains statistical significance. (Forking paths are involved, but the researcher is not aware of this.)
– Therefore, the researcher concludes that the sample size and measurement quality were sufficient. After all, the purpose of a large sample size and good measurements is to get your standard error down, and if you achieved statistical significance, the standard error was by definition low enough. Thus, in retrospect, the study was just fine.
So part of this is self-interest: it takes less work to do a sloppy study, and it can still get published. But part of it is, I think, genuine misunderstanding: an attitude that statistical significance retroactively solves all potential problems of design and data collection.
Type M and S errors are a way of getting at this: the idea that just because an estimate is statistically significant, that doesn’t mean it’s any good. But I think we need to somehow address the above flawed reasoning head-on.
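To make that concrete, here’s a minimal simulation sketch (not from the post; the true effect, standard error, and significance threshold are made-up numbers for illustration). When the true effect is small relative to the noise in the study, the estimates that happen to clear the significance threshold are, on average, large overestimates (Type M error), and a nontrivial fraction even have the wrong sign (Type S error).

```python
# Illustrative sketch, not the post's own code: a small true effect measured
# by a noisy study. Conditioning on statistical significance exaggerates the
# magnitude (Type M error) and sometimes flips the sign (Type S error).
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.1   # hypothetical small true effect
se = 0.5            # hypothetical standard error of the estimate (a noisy study)
n_sims = 100_000    # number of simulated replications of the study

# Each simulated study yields one estimate: the true effect plus sampling noise.
estimates = rng.normal(true_effect, se, n_sims)

# "Statistically significant" at the conventional level: |estimate| > 1.96 * se.
significant = np.abs(estimates) > 1.96 * se
sig_estimates = estimates[significant]

power = significant.mean()
type_s = (sig_estimates < 0).mean()                        # wrong sign, given significance
exaggeration = np.abs(sig_estimates).mean() / true_effect  # Type M: average exaggeration factor

print(f"power: {power:.3f}")
print(f"Type S error rate (significant but wrong sign): {type_s:.3f}")
print(f"Type M exaggeration factor: {exaggeration:.1f}")
```

With these assumed numbers, the filter of statistical significance inflates the typical published estimate several-fold, which is exactly the sense in which a significant result does not retroactively certify the sample size or the measurements.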