Twitter bots can reduce racist slurs—if people think the bots are white

Ars Technica » Scientific Method | November 16, 2016

An NYU PhD student attached these avatars to the Twitter accounts he created to study the effects of rebukes on the site. (credit: Kevin Munger)

Twitter users, and Twitter as a company, have grappled with how to deal with hateful, bigoted, and harassing speech throughout the platform's lifetime. The service has added a few tools to try to keep things in check, but one PhD student in NYU's politics department set out to see whether social checks and balances could reduce the platform's most abhorrent speech.

The study, published in the November issue of Political Behavior, found that direct, negative responses to racist tweets could have an impact, but that, at least in this experiment, they were far more effective when they appeared to come from white users.

NYU student Kevin Munger began his experiment by identifying 231 Twitter accounts with a propensity for using the n-word in a targeted manner (meaning the message included the "@" symbol and used second-person language). All of these accounts were at least six months old and had used the n-word in at least three percent of their posts during the period Munger monitored them (late summer last year). Munger explains that he chose white men as the study's subjects "because they are the largest and most politically salient demographic engaging in racist online harassment of blacks," and also to control "the in-groups of interest (gender and race)."
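The screening procedure lends itself to a short sketch. The Python below is a hypothetical reconstruction of the three filters described above, not Munger's actual code: the record layout (`created_at`, `tweets`), the `contains_slur` predicate, and the second-person word list are all assumptions made for illustration.

```python
from datetime import datetime, timedelta

# Assumed record layout: each account is a dict with a "created_at" datetime
# and a "tweets" list of tweet texts collected during the monitoring window.

SECOND_PERSON = {"you", "your", "youre", "you're", "ur", "u"}

def is_targeted(text: str) -> bool:
    """Rough stand-in for the paper's criterion: the tweet @-mentions
    someone and uses second-person language."""
    words = {w.strip(".,!?'\"").lower() for w in text.split()}
    return "@" in text and bool(words & SECOND_PERSON)

def qualifies(account: dict, contains_slur, now: datetime) -> bool:
    """Apply the three screens from the article: account at least six
    months old, the slur in at least 3% of monitored posts, and at
    least one targeted use of it."""
    if now - account["created_at"] < timedelta(days=183):  # ~six months
        return False
    tweets = account["tweets"]
    slur_tweets = [t for t in tweets if contains_slur(t)]
    if not tweets or len(slur_tweets) / len(tweets) < 0.03:
        return False
    return any(is_targeted(t) for t in slur_tweets)

# Example usage with a toy corpus and a placeholder matcher:
if __name__ == "__main__":
    sample = {
        "created_at": datetime(2015, 1, 1),
        "tweets": ["@someone you [slur] ...", "ordinary tweet"] + ["ok"] * 10,
    }
    print(qualifies(sample, lambda t: "[slur]" in t, datetime(2015, 9, 1)))
```

Keeping the slur matcher as a passed-in predicate reflects that the article doesn't specify how matches were detected; only the three thresholds it reports are encoded here.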
