Effect size expectations and common method bias
Numbers Rule Your World 2023-05-25
I think researchers in the social sciences often have unrealistic expectations about effect sizes. This has many causes, including publication bias (and selection bias more generally) and forking paths. Old news here.
Will Hobbs pointed me to his (PPNAS!) paper with Anthony Ong that highlights and examines another cause: common method bias.
Common method bias is the well-known (in some corners, at least) phenomenon whereby shared variance among variables measured with the same method can bias estimated relationships. You can come up with many mechanisms for this: variables measured in the same questionnaire can be correlated because of consistency motivations, a shared tendency to give socially desirable responses, similar use of the same response scales, etc.
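To see how this plays out, here is a minimal simulation sketch (my own illustration, not from the Hobbs and Ong paper): two latent traits have a modest true correlation, but both survey items also load on a shared "method factor" (e.g., a response style), which inflates the observed correlation. The particular numbers (true correlation 0.1, method loading 0.6) are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True latent traits with a modest true correlation (assumed 0.1)
rho = 0.1
latent = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)

# Shared method factor, e.g., a response style on the same questionnaire
method = rng.normal(size=n)
loading = 0.6  # assumed strength of the method effect on both items

noise_x = rng.normal(size=n)
noise_y = rng.normal(size=n)

# Items measured WITHOUT the shared method factor (for comparison)
x_clean = latent[:, 0] + noise_x
y_clean = latent[:, 1] + noise_y
r_clean = np.corrcoef(x_clean, y_clean)[0, 1]  # about 0.05 after attenuation

# Items measured WITH the shared method factor
x = x_clean + loading * method
y = y_clean + loading * method
r_observed = np.corrcoef(x, y)[0, 1]  # roughly 0.2, several times r_clean
```

The observed correlation is inflated because the method factor contributes covariance (here, loading squared = 0.36) on top of the true latent correlation. An effect size prior calibrated to correlations like `r_observed` will look unrealistically large next to a study where treatment and outcome are measured by different methods.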
Many of these biases result in inflated correlations. Hobbs writes:
[U]nreasonable effect size priors is one of my main motivations for this line of work.
A lot of researchers seem to consider effect sizes meaningful only if they’re comparable to the huge observational correlations seen among subjective closed-ended survey items.
But often the quantities we really care about — or at least the ones we are planning more ambitious field studies to estimate — are inherently not going to be measured in the same way. We might assign a treatment and measure a survey outcome. We might measure a survey outcome, use this to target an intervention, and then look at outcomes in administrative data (e.g., income, health insurance data).
On this blog, the most-covered causes of inflated effect size expectations are perhaps tiny studies, forking paths, and selection bias. So this is a good reminder that there are plenty of other causes, such as common method bias and confounding more generally, even with big samples or preregistered analysis plans.
This post is by Dean Eckles.