“This Device Is ‘Proven’ to Protect Athletes’ Brains. The Science Is Under Fire.”

Statistical Modeling, Causal Inference, and Social Science 2025-01-29

Science reporter Stephanie Lee brings another banger:

For athletes who collide on the field, a neck accessory called the Q-Collar has a reassuring pitch: It’s the only medical device “proven to help protect the brain,” a claim authorized by the Food and Drug Administration.

But some of the studies offering evidence for its efficacy are now coming under scrutiny. Outside researchers have identified apparent discrepancies and errors in at least a half-dozen studies about the Q-Collar . . .

In response, scientists who worked on the studies told The Chronicle that they are planning to fix some of their data. “While these identified errors do not change the overall interpretation of findings,” Gregory D. Myer, a researcher who oversaw many of the papers, said in an email, “we are committed to the highest standards of accuracy in reporting our research findings.”

Yeah, right. We’ll get back to this bit. But first, more from Lee:

The apparent issues are statistical in nature: identical data for different groups of subjects, different data for seemingly identical subjects, improbable data, omitted data. . . .

The FDA’s authorization was based in part on a study of 284 high-school football players. Some wore the Q-Collar, some didn’t, and all of them wore an accelerometer device that measured head impacts. Images generated from brain scans, before and after the season, showed significant changes in certain parts of the non-collar-wearers’ brains versus no significant changes in the collar-wearers. But outside scientists have warned against drawing conclusions from that imaging technology. . . .

In a 2017 paper that reported results from the study of football players, those who wore collars and those who didn’t demonstrated identical rates of accuracy on post-season cognitive tests, down to the second decimal point.

Myer, who was the study’s senior author when he was a professor of pediatrics and orthopedic surgery at Cincinnati Children’s Hospital Medical Center, said by email that he would submit a corrected table. But Myer, who is now at Emory University, did not address another observation of the data sleuths — that about 10 subjects appeared to be missing from one of the groups studied.

[Data sleuths] Smoliga and Yang also raised apparent inconsistencies between the 2017 article and another published in a different journal. The articles appeared to report results from the same group of people: 62 high-school football players with identically numbered subsets of collar-wearers. Yet the average number of hits and amount of gravitational forces reportedly experienced by the athletes differed across the two papers. Myer did not address this matter in his proposed corrections.

Nor did he address a seemingly similar inconsistency in a 2019 paper about SWAT team members who were exposed to blasts during a one-day training. This experiment appeared to be the same one featured in another paper in a different journal, because both reported involving 23 male SWAT team members between the ages of 31 and 68. But the sleuths calculated that the number of blasts experienced by one collar-wearing group was not mathematically compatible with those experienced by the equivalent group in the other paper.

The opposite dynamic emerged when the 2019 SWAT study was compared with a 2018 study of female high-school soccer players. Despite being about two different populations, a table with the same eight rows and 10 columns of data appeared in both. Myer said that his team had provided the wrong table in the SWAT study.

There are a couple more examples beyond that!
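
A quick aside on what "not mathematically compatible" means there. If a paper reports a group size and a mean count of discrete events (hits, blasts) rounded to some number of decimal places, only certain integer totals are arithmetically possible, and two papers describing the same group of people must agree on at least one of them. This is the same flavor of arithmetic as the well-known GRIM test. Here's a minimal sketch in Python; all the numbers are made up for illustration, not figures from the actual papers:

    # A minimal sketch, in the spirit of the GRIM test (Brown & Heathers, 2017),
    # of what "not mathematically compatible" means for count data. Every number
    # below is a hypothetical illustration, not taken from the Q-Collar papers.
    import math

    def feasible_totals(reported_mean: float, n: int, decimals: int = 1) -> range:
        """Integer event totals whose mean over n subjects rounds to reported_mean.

        Ignores floating-point slop and ties at the rounding boundary;
        good enough for a sketch.
        """
        half = 0.5 * 10 ** (-decimals)
        return range(math.ceil((reported_mean - half) * n),
                     math.ceil((reported_mean + half) * n))

    # GRIM-style check within one paper: is any integer total possible at all?
    print(list(feasible_totals(5.3, 12)))               # [63, 64] -- possible
    print(list(feasible_totals(5.29, 12, decimals=2)))  # [] -- impossible

    # Cross-paper check of the kind the sleuths describe: if two papers report
    # the same group, the totals implied by their rounded means must overlap.
    paper_a = set(feasible_totals(5.3, 12))
    paper_b = set(feasible_totals(4.6, 12))
    print(paper_a & paper_b or "not mathematically compatible")

The "identical down to the second decimal point" coincidence is the same genre of red flag: you don't need the raw data to see that the published numbers can't all be right.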

But the leader of the project is on it:

Myer said by email that most of the “questioned anomalies are derived from a misunderstanding of our analyses and data reporting.” He also said that he has “addressed each issue and question” with the Journal of Neurotrauma.

But Yang and Smoliga said that there could be even more issues in these and other papers. . . .

And you’ll not be surprised to see a conflict of interest:

Smith, the Q-Collar’s inventor, is a collaborator on four of the six studies flagged by Yang and Smoliga. . . . All six studies were paid for by Q30 Innovations. . . .

The best (i.e., worst) part

That quote near the beginning from the project leader:

“While these identified errors do not change the overall interpretation of findings” . . .

As I wrote a couple years ago regarding a completely different example: Bad data, but they don't affect the paper's conclusions, huh? We've heard that one before. It kinda makes you wonder why they bother collecting data at all, given that the conclusions never seem to change.

Here’s a solution to the problem

I think what this guy really needs is some fawning NPR interviews, a chapter in the next editions of Freakonomics and Nudge, and a few TED talks. He could then go on the podcast circuit to complain that he's being canceled. Then any money he loses on the Q-Collar he can make up in speaking fees.