Getting Over TOP: Epidemiology

peter.suber's bookmarks 2021-12-12

Summary:

"In May 2015, the Center for Open Science invited Epidemiology to support the Transparency and Openness Promotion (TOP) Guidelines.1 After consulting our editors and former Editors-in-Chief, I declined this invitation and published an editorial to explain the rationale.2 Nonetheless, the Center for Open Science has assigned a TOP score to the journal and disseminated the score via Clarivate, which also disseminates the Journal Impact Factor. Given that Epidemiology has been scored despite opting not to support the TOP Guidelines, and that our score has been publicized by the Center for Open Science, we here restate and expand our concerns with the TOP Guidelines and emphasize that the guidelines are at odds with Epidemiology’s mission and principles. We declined the invitation to support the TOP Guidelines for three main reasons. First, Epidemiology prefers that authors, reviewers, and editors focus on the quality of the research and the clarity of its presentation over adherence to one-size guidelines. For this reason, among others, the editors of Epidemiology have consistently declined opportunities to endorse or implement endeavors such as the TOP Guidelines.3–5 Second, the TOP Guidelines did not include a concrete plan for program evaluation or revision. Well-meaning guidelines with similar goals sometimes have the opposite of their intended effect.6 Our community would never accept a public health or medical intervention that had little evidence to support its effectiveness (more on that below) and no plan for longitudinal evaluation. We hold publication guidelines to the same standard. Third, we declined the invitation to support the TOP Guidelines because they rest on the untenable premise that each research article’s results are right or wrong, as eventually determined by whether its results are reproducible or not. Too often, and including in the study of reproducibility that was foundational in the promulgation of the TOP Guidelines,7 reproducibility is evaluated by whether results are concordant in terms of statistical significance. This faulty approach has been used frequently, even though the idea that two results—one statistically significant and the other not—are necessarily different from one another is a well-known fallacy.8,9 "

Link:

https://journals.lww.com/epidem/Fulltext/2022/01000/Getting_Over_TOP.1.aspx

From feeds:

Open Access Tracking Project (OATP) » peter.suber's bookmarks

Tags:

oa.new oa.top oa.guidelines oa.objections oa.debates oa.cos oa.reproducibility oa.editorials

Date tagged:

12/12/2021, 14:22

Date published:

12/12/2021, 09:22