Hiring by Algorithm: Predicting and Preventing Disparate Impact
Type
Report
Author
Ifeoma Ajunwa
Author
Sorelle Friedler
Author
Carlos E. Scheidegger
Author
Suresh Venkatasubramanian
URL
http://papers.ssrn.com/abstract=2746078
Place
Rochester, NY
Date
2016-03-10
Accessed
2016-03-23 16:13:01
Institution
Social Science Research Network
Report Type
SSRN Scholarly Paper
Library Catalog
papers.ssrn.com
Abstract
Major advances in machine learning have encouraged corporations to rely on Big Data and algorithmic decision making with the presumption that such decisions are efficient and impartial. In this Essay, we show that protected information encoded in facially neutral data can be predicted with high accuracy by algorithms and employed in the decision-making process, resulting in a disparate impact on protected classes. We then demonstrate how it is possible to repair the data so that any algorithm trained on it will make non-discriminatory decisions. Because this modification is made before decisions are applied to any individuals, the process does not require reversing prior decisions. We make the legal argument that such data modifications should be mandated as an anti-discriminatory measure, and, akin to the Fair Employment Mark proposed by Professors Ayres and Gerarda Brown, such disparate-impact-preventing data repair would be certifiable by teams of lawyers working in tandem with software engineers and data scientists. Finally, we anticipate the business necessity defense: that such data modifications could degrade the accuracy of algorithmic decision-making. While we find evidence of this trade-off, we also find that on one data set it was possible to modify the data so that, although previous decisions had a disparate impact under the four-fifths standard, any subsequent decision-making algorithm was necessarily non-discriminatory while retaining essentially the same accuracy. Such an algorithmic “repair” could be used to refute a business necessity defense by showing that algorithms trained on the modified data can still make decisions consistent with previous outcomes.
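The abstract invokes the EEOC's four-fifths (80%) standard: a selection process exhibits disparate impact when a protected group's selection rate falls below four-fifths of the most-favored group's rate. As a minimal sketch of that threshold test only, not of the authors' data-repair method, the following Python illustrates the computation; the function name and the toy data are hypothetical.

from collections import Counter

def four_fifths_check(groups, selected, threshold=0.8):
    """Apply the EEOC four-fifths (80%) rule: each group's selection
    rate must be at least `threshold` times the selection rate of the
    most-favored group.

    groups:   list of group labels, one per applicant
    selected: list of 0/1 hiring decisions, aligned with `groups`
    """
    totals = Counter(groups)
    hires = Counter(g for g, s in zip(groups, selected) if s)
    rates = {g: hires[g] / totals[g] for g in totals}
    best = max(rates.values())
    # Disparate impact ratio per group, relative to the most-favored group.
    ratios = {g: r / best for g, r in rates.items()}
    return ratios, all(r >= threshold for r in ratios.values())

# Hypothetical data: group "a" is hired at 50%, group "b" at 30%.
groups   = ["a"] * 10 + ["b"] * 10
selected = [1] * 5 + [0] * 5 + [1] * 3 + [0] * 7
ratios, passes = four_fifths_check(groups, selected)
print(ratios, passes)  # {'a': 1.0, 'b': 0.6} False

Here group "b" is selected at 0.6 times the rate of group "a", below the 0.8 threshold, so these decisions would show disparate impact under the standard; the repair the authors describe modifies the training data so that any subsequent model avoids this outcome.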
Report Number
ID 2746078
Short Title
Hiring by Algorithm