Randomized Acceptances
Computational Complexity 2023-06-16
NeurIPS recently released their 2021 consistency report, a sequel to the 2014 experiment. While the conference has grown dramatically, the results remain "consistent": about 23% disagreement between two separate program committee groups. As before, I don't find this too surprising--different committee members have different tastes.
Roughly, conference submissions fall into three categories:
- Clearly strong papers
- Clear rejects
- A bunch that could go either way.
What if instead we took a different approach? Accept all the strong papers and reject the weak ones. Choose the rest randomly, either with a uniform distribution or one weighted by the ranking. Maybe reduce the acceptance probability for authors who submit multiple papers.
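To make the idea concrete, here is a minimal sketch of such a lottery. The score cutoffs, paper IDs, and the choice of weighted sampling scheme (Efraimidis-Spirakis keys) are all my own assumptions for illustration--any committee ranking and any weighting would do.

```python
import random

def select_papers(papers, capacity, strong_cut=8.0, weak_cut=4.0, seed=None):
    """Accept clear strongs, reject clear rejects, and fill the
    remaining slots by a weighted lottery over the borderline pool.

    `papers` is a list of (paper_id, score) pairs; the scores and
    cutoffs are hypothetical stand-ins for a committee ranking.
    """
    rng = random.Random(seed)
    strong = [p for p, s in papers if s >= strong_cut]
    borderline = [(p, s) for p, s in papers if weak_cut <= s < strong_cut]
    slots = capacity - len(strong)
    if slots <= 0:
        return strong
    # Weighted sampling without replacement: give each borderline
    # paper the key u**(1/score) for uniform u, and keep the papers
    # with the largest keys (Efraimidis-Spirakis method).
    keyed = sorted(borderline,
                   key=lambda ps: rng.random() ** (1.0 / ps[1]),
                   reverse=True)
    return strong + [p for p, _ in keyed[:slots]]
```

With uniform weights (all borderline scores equal), this reduces to a plain lottery over the middle pool; penalizing multiple submissions would just mean scaling down an author's weights.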
Choosing randomly reduces biases and can increase diversity, if there is diversity in the submissions. Knowing there is randomness in the process lets those with rejected papers blame the randomness, and those whose papers get in claim they were in the first group. Randomness encourages more submissions and is fair over time.
Note we're just acknowledging the randomness in the process instead of pretending there is a perfect linear order to the papers that only a lengthy program committee discussion can suss out.
We should do the same for grant proposals--all worthy proposals should get a chance to be funded.
I doubt any of this will ever happen. People would rather trust human decisions with all their inconsistencies over pure randomness.