N=43, “a statistically significant 226% improvement,” . . . what could possibly go wrong??

Statistical Modeling, Causal Inference, and Social Science 2024-04-16

Enjoy.

They looked at at least 12 cognitive outcomes, one of which had p = 0.02, but other differences “were just shy of statistical significance.” Also:

The degree of change in the brain measure was not significantly correlated with the degree of change in the behavioral measure (p > 0.05) but this may be due to the reduced power in this analysis which necessarily only included the smaller subset of individuals who completed neuropsychological assessments during in-person visits.

This is one of the researcher degrees of freedom we see all the time: an analysis with p > 0.05 can be labeled as “marginally statistically significant” or even published straight-up as a main result (“P < 0.10”), it can get some sort of honorable mention (“this may be due to the reduced power”), or it can be declared to be a null effect.

The “this may be due to the reduced power” thing is confused, for two reasons. First, of course it’s due to the reduced power! Set n to 1,000,000,000 and all your comparisons will be statistically significant! Second, the whole point of having these measures of sampling and measurement error is to reveal the uncertainty in an estimate’s magnitude and sign. It’s flat-out wrong to take a point estimate and just suppose that it would persist under a larger sample size.

People are trained in bad statistical methods, so they use bad statistical methods; it happens every day. In this one, I’m just bothered that this “226% improvement” didn’t set off any alarms. (The little simulation at the end of this post shows how easily a small study with a dozen outcomes hands you a big, noisy, “statistically significant” estimate.)

To the extent that these experimental results might be useful, the authors should be publishing the raw data rather than trying to fish out statistically significant comparisons. They also include a couple of impressive-looking graphs which wouldn’t look so impressive if they were to graph all the averages in the data rather than just those that randomly exceeded a significance threshold.

Did they publish the raw data? No! Here’s the Data availability statement:

The datasets presented in this article are not readily available because due to reasonable privacy and security concerns, the underlying data are not easily redistributable to researchers other than those engaged in the current project’s Institutional Review Board-approved research. The corresponding author may be contacted for an IRB-approved collaboration. Requests to access the datasets should be directed to …

It seems like it would be pretty trivial to remove names and any other identifying information and then release the raw data. This is a study on “whether older adults retain or improve their cognitive ability over a six-month period after daily olfactory enrichment at night.” What’s someone gonna do, track down participants based on their “daily exposure to essential oil scents”?
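Just as a sketch of what that could look like (every column name here is made up; I haven’t seen their data file, and a real release would still have to satisfy whatever de-identification rules the IRB requires), something like this is most of the work:

```python
import pandas as pd

# Hypothetical raw file: one row per participant per assessment.
raw = pd.read_csv("raw_olfactory_study.csv")

# Drop direct identifiers outright.
public = raw.drop(
    columns=["name", "email", "phone", "address", "date_of_birth"],
    errors="ignore",
)

# Replace the internal participant ID with an arbitrary study code.
codes = {pid: f"P{i:03d}" for i, pid in enumerate(public["participant_id"].unique())}
public["participant_id"] = public["participant_id"].map(codes)

# Coarsen exact age into 5-year bands so nobody can be picked out by age alone.
public["age_band"] = (public["age"] // 5) * 5
public = public.drop(columns=["age"])

public.to_csv("olfactory_study_public.csv", index=False)
```

The point isn’t that this exact script is the right one for their data; it’s that the mechanics are a few lines, not a reason to lock the data away.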

One problem here is that Institutional Review Boards are set up with a default no-approval stance. I think it should be the opposite: no IRB approval unless you commit ahead of time to posting your raw data. (Not that my collaborators and I usually post our raw data either. Posting raw data can be difficult. That’s one reason I think it should be required, because otherwise it’s not likely to be done.)
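P.S. Here’s the quick simulation of the selection problem mentioned above. It’s not the authors’ analysis, and every number in it (the group sizes, the number of outcomes, the assumed true effect) is made up for illustration. The point is just that a small study with a dozen noisy outcomes will routinely hand you at least one “statistically significant” comparison, and the estimate attached to the winner will be much bigger than the true effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_treat, n_ctrl = 22, 21   # assumed split of N = 43
n_outcomes = 12            # "at least 12 cognitive outcomes"
true_effect = 0.2          # assumed small true effect, in within-group sd units
n_sims = 10_000

n_any_significant = 0
winning_estimates = []

for _ in range(n_sims):
    # Independent outcomes for simplicity; correlated outcomes behave similarly.
    treat = rng.normal(true_effect, 1.0, size=(n_treat, n_outcomes))
    ctrl = rng.normal(0.0, 1.0, size=(n_ctrl, n_outcomes))
    _, pvals = stats.ttest_ind(treat, ctrl, axis=0)
    if (pvals < 0.05).any():
        n_any_significant += 1
        # Keep the estimated difference for the "winning" (smallest-p) outcome.
        best = int(np.argmin(pvals))
        winning_estimates.append(treat[:, best].mean() - ctrl[:, best].mean())

print(f"share of studies with at least one p < 0.05: {n_any_significant / n_sims:.2f}")
print(f"average estimate among the winners: {np.mean(winning_estimates):.2f} "
      f"(true effect is {true_effect})")
```

The first number comes out far above 5%, and the second comes out several times larger than the assumed true effect of 0.2. That’s the mechanism that turns a noisy comparison into a headline like “a statistically significant 226% improvement.”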