Peer Review
Computational Complexity 2023-03-06
I ran into a partner of a computer scientist at a social event who asked me "Is the publication system in CS screwed up or really screwed up?" If you don't know my response you haven't been reading this blog long.
Today let's talk about peer review. Kevin McCurley and Adam Mastroianni have recent, not-so-positive takes on this topic.
Peer review grew out of a system where journals, and in computer science conferences, had limited slots, forcing tough decisions. Journals and conferences gained reputations based partly on how difficult it was to get papers published there.
Now we have basically unlimited space to publish results. And you definitely should do so, posting your papers on your own webpage and on an archive site like arXiv or ECCC. The research community would flourish in a world where everyone posts their papers online for public comment, people promote their favorite papers on social media, and we have a TikTok-style system for recommending papers to you.
So why do we still need peer review? Mostly because we have to evaluate researchers for jobs and grants, and with the value of recommendation letters increasingly questioned, publication quality and quantity have become a stronger proxy for measuring people, for better or for worse.
To be clear, peer review is a cornerstone of science. Would you rather have papers reviewed by faceless bureaucrats who know little about the subject area? Or ranked only by manipulable statistics like citation counts?
But the way we apply peer review, to decide acceptances at conferences, adds too much randomness to the system. CS conferences have multiplied and continue to see more submissions as the field grows. It's just impossible to maintain any uniformity in the quality of acceptances. And too often we find conference committees and funding panels playing it safe rather than taking risks on new research. With so much randomness, it's better to submit many papers than to focus on one stronger result, leading to too much incremental research, especially in academia.
For hiring, promotion, tenure and funding decisions, we too often rely on shortcuts, such as the number of papers accepted to major conferences. Those who don't win the conference lottery get disillusioned and often leave academia for industry, and no one wins.