The most effective online fact-checkers? Your peers
beSpacific 2025-11-19
Research shows that being called out by peers, not algorithms or experts, makes online authors think twice about spreading misinformation.

When the social media platform X (formerly Twitter) invited users to flag false or misleading posts, critics initially scoffed: how could the same public that spreads misinformation be trusted to correct it? But a recent study by researchers from the University of Rochester, the University of Illinois Urbana–Champaign, and the University of Virginia finds that “crowdchecking” (X’s collaborative fact-checking experiment known as Community Notes) actually works. X posts that received public correction notes were 32 percent more likely to be deleted by their authors than those with only private notes.

The paper, published in the journal Information Systems Research, shows that when a community note about a post’s potential inaccuracy appears beneath a tweet, its author is far more likely to retract that tweet. “Trying to define objectively what is misinformation and then removing that content is controversial and may even backfire,” notes coauthor Huaxia Rui, the Xerox Professor of Information Systems and Technology at the University of Rochester’s Simon Business School. “In the long run, I think a better way for misleading posts to disappear is for the authors themselves to remove those posts.”

Using a causal inference method called regression discontinuity and a vast dataset of X posts (formerly known as tweets), the researchers find that public, peer-generated corrections can achieve something experts and algorithms have struggled to do. Showing notes or corrective content alongside potentially misleading information, Rui says, can indeed “nudge the author to remove that content.”
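To give a flavor of the regression discontinuity method the researchers used, here is a minimal sketch on simulated data. The 0.4 “helpfulness score” cutoff, the bandwidth, the variable names, and the simulated jump are all hypothetical illustrations, not the study’s actual specification or results; the idea is simply that notes just above a publication threshold are compared with notes just below it.

```python
# Illustrative sketch of a regression discontinuity design (RDD).
# All numbers and names here are hypothetical, not from the study.
import numpy as np

rng = np.random.default_rng(0)
CUTOFF = 0.4       # hypothetical score at which a note becomes public
BANDWIDTH = 0.1    # only use observations near the cutoff

# Simulate a running variable (note helpfulness score) and an outcome
# (whether the author deleted the post).
score = rng.uniform(0.2, 0.6, 5000)
public = score >= CUTOFF                 # "treatment": note shown publicly
base = 0.10 + 0.2 * (score - CUTOFF)    # smooth trend in deletion rate
p_delete = base + 0.08 * public         # discontinuous jump at the cutoff
deleted = rng.random(5000) < p_delete

# Local linear RDD estimate: fit a line on each side of the cutoff within
# the bandwidth, then take the difference of their predictions at the cutoff.
def fit_at_cutoff(mask):
    x = score[mask] - CUTOFF
    y = deleted[mask].astype(float)
    slope, intercept = np.polyfit(x, y, 1)
    return intercept  # predicted deletion probability at score == CUTOFF

left = fit_at_cutoff((score < CUTOFF) & (score >= CUTOFF - BANDWIDTH))
right = fit_at_cutoff((score >= CUTOFF) & (score < CUTOFF + BANDWIDTH))
effect = right - left
print(f"Estimated jump in deletion probability at the cutoff: {effect:.3f}")
```

Because posts whose notes score just below the cutoff should be nearly identical to those just above it, the jump in the deletion rate at the threshold can be read as the causal effect of the note becoming public.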