“Nightshifts Linked to Increased Risk for Ovarian Cancer”
Statistical Modeling, Causal Inference, and Social Science 2013-03-17
Zosia Chustecka writes:
Much of the previous work on the link between cancer and nightshifts has focused on breast cancer . . . The latest report, focusing on ovarian cancer, was published in the April issue of Occupational and Environmental Medicine.
This increase in the risk for ovarian cancer with nightshift work is consistent with, and of similar magnitude to, the risk for breast cancer, say lead author Parveen Bhatti, PhD, and colleagues from the epidemiology program at the Fred Hutchinson Cancer Research Center in Seattle, Washington.
The researchers examined data from a local population-based cancer registry that is part of the Surveillance Epidemiology and End Results (SEER) Program. They identified 1101 women with advanced epithelial ovarian cancer, 389 with borderline disease, and 1832 without ovarian cancer (control group).
The women, who were 35 to 74 years of age, were asked about the hours they worked, and specifically whether they had ever worked the nightshift.
The researchers found that 26.6% of the women with invasive cancer had worked nights at some point, as had 32.4% of those with borderline disease and 22.5% of those in the control group.
In the entire cohort, the median duration of nightshift work was 2.7 to 3.5 years. The most common types of nightshift jobs were in healthcare, food preparation and service, and office and admin support.
I hadn’t known so many people worked night shifts, but I guess these numbers make sense given that they’re asking people whether they’d ever worked nights. I wonder if I’d count? I taught a night class for a couple of semesters.
Here’s the punchline:
The researchers conclude that working nights is associated with an increased risk for both invasive ovarian cancer (odds ratio [OR], 1.24; 95% confidence interval [CI], 1.04 – 1.49) and borderline disease (OR, 1.48; 95% CI, 1.15 – 1.90).
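As a quick back-of-the-envelope check (mine, not the authors'), the percentages quoted above are enough to reconstruct approximate crude odds ratios. The cell counts below are rebuilt from the rounded percentages and group sizes, so they're approximations and they ignore whatever covariate adjustments the paper made:

```python
from math import log, sqrt, exp

# Approximate cell counts reconstructed from the rounded percentages and
# group sizes quoted above (1101 invasive, 389 borderline, 1832 controls).
n_inv, n_bord, n_ctl = 1101, 389, 1832
exp_inv  = round(0.266 * n_inv)   # ~293 invasive cases who ever worked nights
exp_bord = round(0.324 * n_bord)  # ~126 borderline cases
exp_ctl  = round(0.225 * n_ctl)   # ~412 controls

def crude_or(a, n1, b, n0):
    """Crude odds ratio and 95% Wald CI for exposed counts a of n1 vs b of n0."""
    c, d = n1 - a, n0 - b
    or_ = (a * d) / (b * c)
    se = sqrt(1/a + 1/b + 1/c + 1/d)
    lo, hi = exp(log(or_) - 1.96 * se), exp(log(or_) + 1.96 * se)
    return or_, lo, hi

print("invasive vs controls:   OR %.2f (%.2f-%.2f)" % crude_or(exp_inv, n_inv, exp_ctl, n_ctl))
print("borderline vs controls: OR %.2f (%.2f-%.2f)" % crude_or(exp_bord, n_bord, exp_ctl, n_ctl))
```

The crude invasive odds ratio comes out around 1.25 (1.05 – 1.49), close to the adjusted 1.24, while the crude borderline odds ratio is closer to 1.65, noticeably above the adjusted 1.48, so the adjustments seem to be doing some work there.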
The Bayesian in me suspects the true population odds ratios are on the low end of these ranges, and the Uri Simonsohn in me is suspicious that the low ends of these confidence intervals are so close to 1.0. Indeed, a look at the statistical analysis section of the article suggests the researchers had various researcher degrees of freedom that could induce small changes in the p-values. I also worry about the sensitivity of the results to the choice of adjustments for pre-treatment variables. They say they did their analysis in Stata, which is fine, but it's not clear to me exactly what they did in those adjustments.
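To make that Bayesian hunch concrete, here's a minimal sketch: treat the reported interval for invasive disease as an approximately normal estimate of the log odds ratio and combine it with a skeptical prior centered at zero. The prior scale (0.10 on the log-odds scale) is my assumption, not anything from the paper:

```python
from math import log, sqrt, exp

# Reported result for invasive disease: OR 1.24, 95% CI 1.04-1.49.
est = log(1.24)
se  = (log(1.49) - log(1.04)) / (2 * 1.96)  # roughly 0.09 on the log-odds scale

# Skeptical prior on the log odds ratio: normal(0, 0.10).
# The 0.10 scale is an assumption (most plausible ORs between ~0.8 and ~1.2).
prior_mean, prior_sd = 0.0, 0.10

# Normal-normal conjugate update: precision-weighted average of prior and data.
w_data, w_prior = 1 / se**2, 1 / prior_sd**2
post_mean = (w_data * est + w_prior * prior_mean) / (w_data + w_prior)
post_sd   = sqrt(1 / (w_data + w_prior))

print("posterior OR: %.2f (%.2f-%.2f)" % (
    exp(post_mean),
    exp(post_mean - 1.96 * post_sd),
    exp(post_mean + 1.96 * post_sd)))
```

Under that (arguable) prior, the point estimate gets pulled from 1.24 down to about 1.12 and the interval brushes 1, which is the sense in which I'd expect the true odds ratio to sit toward the low end of the reported range.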
This is not to say I think the published findings are wrong. Any particular study is a brick in the wall: it provides some information, and future studies can give more.
P.S. I found the above news article via Google after reading this summary, which sent me to an unsigned news report that was not so detailed and had no link to the original study. The unsigned report was attributed to HealthDay: “Daily Health News and Medical News for Licensing & Syndication.” Creepy.