Plagiarism searches and post-publication review

Statistical Modeling, Causal Inference, and Social Science 2024-11-22

Jonathan Bailey writes:

Yesterday, Luke Rosiak and Christopher F. Rufo at the conservative publication The Daily Wire published plagiarism allegations against Natalie J. Perry, the head of Cultural North Star, a diversity, equity, and inclusion (DEI) program at the University of California, Los Angeles (UCLA). . . . the evidence is damning, and an investigation is in order, likely by both UCLA and the University of Virginia. Regardless of the reasons for this investigation, it does appear to highlight serious issues with this work that need to be addressed.

To be clear, I have not been able to verify these allegations independently. However, even if only parts of these allegations are accurate, there are significant problems that need to be closely examined. . . .

In their conclusions, the Daily Wire said, “These institutions have dramatically lowered expectations for favored groups and pushed a cohort of ‘scholars’ through the system without enforcing basic standards of academic integrity.”

Regardless of what one thinks about DEI programs, my experience has been that these issues are not unique to DEI, any particular field of study, or any particular institution. Rather, they are pervasive issues in the whole of academia, and fixing them will mean addressing issues larger than any one component. . . .

As we [Bailey] discussed in January, these efforts are part of the weaponization of plagiarism. Critics of DEI programs are aggressively checking the work of DEI allies, administrators and perceived “DEI hires” for plagiarism. The goal isn’t to preserve academic/research integrity but to discredit DEI programs.

Ironically, this results in only certain groups having their work thoroughly checked for plagiarism. This creates a double standard in academia . . .

I’m not a fan of the term “weaponization,” as it reminds me of the charge that is sometimes leveled at critics of bad science: that the critics are inappropriately obsessed in some way. I don’t think this argument is useful: of course, if you go to the trouble of criticizing something, it makes sense that you care about the topic. Selection seems inevitable.

Bailey continues:

As we discussed in February, there is so much plagiarism around right now because it’s relatively easy to check a small number of works for plagiarism. If one doesn’t find anything, they simply move on with little lost, but if they do, it can be fatal to the target’s career.

However, those efforts don’t scale easily. While an initial check of a dissertation for plagiarism may only take a few hours, doing that for hundreds or thousands of students is much more difficult and less rewarding.

The result is that plagiarism often slips through the cracks, even in theses and dissertations, only to come back up when the now-former student is targeted for whatever reason.

This is maybe not such a bad thing! Post-publication review occurs when people bother to check things. For example, had that now-notorious supply-chain researcher remained an obscure professor, his ridiculous publications might never have been noticed, but when he was appointed dean of engineering at a flagship state university, somebody noticed and sent me a heads-up. And from that we have the memorable phrase, “torment executioners.”

More generally, one argument in favor of post-publication review is that it is more efficient than traditional academic peer review, in that post-publication review typically focuses on papers that, for one reason or another, some people actually care about.