Mixed Messages? The Limits of Automated Content Analysis

amarashar's bookmarks 2017-11-30

Summary:

Governments and companies are turning to automated tools to make sense of what people post on social media, for purposes ranging from hate speech detection to law enforcement investigations. Policymakers routinely call for social media companies to identify and take down hate speech, terrorist propaganda, harassment, “fake news” or disinformation, and other forms of problematic speech. Other policy proposals have focused on mining social media to inform law enforcement and immigration decisions. But these proposals wrongly assume that automated technology can accomplish on a large scale the kind of nuanced analysis that humans can accomplish on a small scale. A minimal illustration of that gap is sketched below.
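
The following is a minimal sketch, not taken from the report, of the kind of context-blindness the summary describes: a naive keyword filter flags a term regardless of whether it appears as abuse, as counter-speech quoting the abuse, or in an unrelated sense. The blocklist term and example posts are hypothetical.

```python
# Hypothetical keyword-based filter illustrating why automated flagging
# lacks the nuance of human review: the same word is flagged whether it
# appears as abuse, as a quotation in counter-speech, or in another sense.

BLOCKLIST = {"vermin"}  # hypothetical slur-like term, for illustration only

def naive_flag(post: str) -> bool:
    """Flag a post if it contains any blocklisted term, ignoring context."""
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return bool(words & BLOCKLIST)

posts = [
    "Those people are vermin and should leave.",              # abusive use
    'He called us "vermin" and that rhetoric is dangerous.',  # counter-speech
    "Gardeners, how do you keep vermin out of raised beds?",  # unrelated sense
]

for post in posts:
    print(naive_flag(post), "|", post)

# All three posts are flagged, though only the first is hateful; drawing
# that distinction is the kind of judgment the report argues automated
# tools cannot reliably perform at scale.
```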

Link:

https://cdt.org/files/2017/11/2017-11-13-Mixed-Messages-Paper.pdf

From feeds:

Harmful Speech » kiratebbe's bookmarks
Harmful Speech » amarashar's bookmarks

Tags:

Date tagged:

11/30/2017, 15:03

Date published:

11/30/2017, 10:03