Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse


Type: Journal Article
Author: Anna Lauren Hoffmann
URL: https://www.tandfonline.com/doi/full/10.1080/1369118X.2019.1573912
Volume: 22
Issue: 7
Pages: 900-915
Publication: Information, Communication & Society
ISSN: 1369-118X, 1468-4462
Date: 2019-06-07
Extra: tex.ids: HoffmannWherefairnessfails2019, HoffmannWherefairnessfails2019a
DOI: 10.1080/1369118X.2019.1573912
Accessed: 2020-04-14 21:59:50
Library Catalog: Crossref
Language: en
Abstract: Problems of bias and fairness are central to data justice, as they speak directly to the threat that ‘big data’ and algorithmic decision-making may worsen already existing injustices. In the United States, grappling with these problems has found clearest expression through liberal discourses of rights, due process, and antidiscrimination. Work in this area, however, has tended to overlook certain established limits of antidiscrimination discourses for bringing about the change demanded by social justice. In this paper, I engage three of these limits: 1) an overemphasis on discrete ‘bad actors’, 2) single-axis thinking that centers disadvantage, and 3) an inordinate focus on a limited set of goods. I show that, in mirroring some of antidiscrimination discourse’s most problematic tendencies, efforts to achieve fairness and combat algorithmic discrimination fail to address the very hierarchical logic that produces advantaged and disadvantaged subjects in the first place. Finally, I conclude by sketching three paths for future work to better account for the structural conditions against which we come to understand problems of data and unjust discrimination in the first place.
Short Title: Where fairness fails