A thought about the rigor of statistical peer-review

Statistical Epidemiology 2013-03-15

Some scientific research confirms what we think we already know; other research leads to novel or unexpected results. As a peer reviewer, I’ve noticed that papers reporting the latter are reviewed much more rigorously by my co-reviewers. I have no scientific evidence that this actually happens, but it makes sense to me that people would work harder to explain away unexpected results.

Unfortunately, the flip side of this coin is that papers reporting results that confirm our expectations are reviewed much less rigorously. This is clearly a problem, particularly for research using “advanced” statistical methods, whose often heroic assumptions are much less widely understood.

Consequently, it’s important for reviewers to develop some standard for how they assess the quality of research (I’m working on my own right now), and perhaps more importantly, to notify editors when they are not qualified to comment on statistical methods. It might also be a good idea to review the methods section of a paper before reading anything else: even though you can’t fully evaluate the methods without knowing the research question, you can get a very good sense of their quality.

Here are some related papers and links. Please feel free to add your own in the comments below.

Statistical reviewing for medical journals

Quality and value: Statistics in peer-review

Statistical errors in medical research – a review of common pitfalls

Statistical reviewing policies of medical journals

Methods and Biostatistics: a concise guide for peer reviewers

Guidelines for Statistical Reporting in Articles for Medical Journals: Amplifications and Explanations

Detailed guidelines for reporting quantitative research in Health & Social Care in the Community

STROBE Statement (observational studies)

TREND Statement (non-randomized trials)

STARD Statement (diagnostics)

CONSORT Statement (RCTs)