Likelihood Ratio ≠ 1 Journal
Statistical Modeling, Causal Inference, and Social Science 2013-03-22
Dan Kahan writes:
The basic idea . . . is to promote identification of study designs that scholars who disagree about a proposition would agree would generate evidence relevant to their competing conjectures, regardless of what studies based on such designs actually find. Articles proposing designs of this sort would be selected for publication and only then carried out by the proposing researchers, with funding from the journal, which would publish the results too.
Now I [Kahan] am aware of a set of real journals that have a similar motivation.
One is the Journal of Articles in Support of the Null Hypothesis, which, as its title implies, publishes papers reporting studies that fail to "reject" the null. Like JASNH, LR ≠ 1J would try to offset the "file drawer" bias and similar bad consequences of the convention of publishing only findings that are "significant at p < 0.05."
But it would try to do more. By publishing studies that are deemed to have valid designs and that have not actually been performed yet, LR ≠ 1J would seek to change the odd, sad professional sensibility favoring studies that confirm researchers' hypotheses. . . .
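To see how strong the file-drawer distortion can be, here is a minimal simulation. It is entirely my own sketch, not anything from Kahan or JASNH, and the effect size, sample size, and significance threshold are all arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.1    # true effect in standard-deviation units (arbitrary)
n = 100              # per-study sample size (arbitrary)
n_studies = 10_000   # number of simulated studies

# Each study estimates the effect with standard error 1/sqrt(n).
se = 1 / np.sqrt(n)
estimates = rng.normal(true_effect, se, size=n_studies)

# "Publish" only the studies significant at p < 0.05 (two-sided: |z| > 1.96).
z = estimates / se
published = estimates[np.abs(z) > 1.96]

print(f"true effect:            {true_effect:.3f}")
print(f"mean of all estimates:  {estimates.mean():.3f}")
print(f"mean of published only: {published.mean():.3f}")
print(f"fraction published:     {len(published) / n_studies:.1%}")
```

With these made-up numbers, the published studies average more than twice the true effect, because only the luckiest draws clear the significance bar. A JASNH-style outlet, or publication decided before results exist, removes exactly that filter.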
Some additional journals that likewise try (very sensibly) to promote recognition of studies reporting unexpected, surprising, or controversial findings include Contradicting Results in Science, the Journal of Serendipitous and Unexpected Results, and the Journal of Negative Results in Biomedicine. These journals are very worthwhile too, but they still focus on results, not on identifying designs whose validity would be recognized ex ante by reasonable people who disagree!
I am also aware of the idea of setting up registries for study designs before the studies are carried out. See, e.g., this program. A great idea, certainly. But it doesn't seem realistic: there is little incentive for people to register (even less than the already weak incentive to report "nonfindings"), and no mechanism steers researchers toward designs that disagreeing scholars would agree in advance will yield knowledge no matter what the resulting studies find. . . .
Papers describing the design and papers reporting the results will be published separately, and in sequence, to promote the success of LR ≠ 1J's sister journal, "Put Your Money Where Your Mouth Is, Mr./Ms. 'That's Obvious,'" which will conduct online prediction markets for "experts" and others willing to bet on the outcomes of pending LR ≠ 1J studies. . . .
For comic relief, LR ≠ 1J will also run a feature that publishes reviews of articles submitted to other journals, when LR ≠ 1J's referees agree those reviews suggest the operation of one of the influences identified above.
More details at the link. Dan also follows up here, where he writes:
LR ≠ 1J would (1) publish pre-study designs that (2) reviewers with opposing priors agree would generate evidence, regardless of the actual results, that warrants revising assessments of the relative likelihood of competing hypotheses. The journal would then (3) fund the study and, finally, (4) publish the results.
This procedure would generate the same benefits as “adversary collaboration” but without insisting that adversaries collaborate.
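For anyone puzzling over the journal's name, here is my own gloss (not Kahan's wording): "LR" is the likelihood ratio in the odds form of Bayes' rule,

$$
\frac{P(H_1 \mid D)}{P(H_2 \mid D)} \;=\; \underbrace{\frac{P(D \mid H_1)}{P(D \mid H_2)}}_{\text{likelihood ratio}} \;\times\; \frac{P(H_1)}{P(H_2)}.
$$

If every possible result $D$ of a design has likelihood ratio 1, the study cannot shift anyone's relative assessment of $H_1$ versus $H_2$, whatever their priors. The journal's admission criterion is just that reviewers with opposing priors agree the design has possible results with $LR \neq 1$, that is, results that would move both of them, in whichever direction the data point.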
Rather than adding any new comments, I'll just refer you to my two discussions (here and here) from last year of four other entries (by Brendan Nyhan, Larry Wasserman, Chris Said, and Niko Kriegeskorte) in the ever-popular genre of "Our Peer-Review System Is in Trouble; How Can We Fix It?" I stand by whatever I happened to have written on this when the question came up before.
And, if I could get all Dave Krantz-y for a moment, I’d suggest that this discussion could be improved on all sides (including my own) by starting with goals and going from there, rather than jumping straight into problems and potential solutions.