Biased Programmers? Or Biased Data? A Field Experiment in Operationalizing AI Ethics by Bo Cowgill, Fabrizio Dell'Acqua, Sam Deng, Daniel Hsu, Nakul Verma, Augustin Chaintreau :: SSRN

amarashar's bookmarks 2020-12-14

Summary:

Why do biased predictions arise about human capital? What interventions can prevent them? We evaluate 8.2 million algorithmic predictions of math skill from ~400 AI engineers, each of whom developed an algorithm under a randomly assigned experimental condition. Our treatment arms modified programmers' incentives, training data, awareness, and/or technical knowledge of AI ethics. We then assess out-of-sample predictions from their algorithms using randomized audit manipulations of algorithm inputs and ground-truth math performance for 20K subjects. We find that biased predictions are mostly caused by biased training data. However, one-third of the benefit of better training data comes through a novel economic mechanism: Engineers exert greater effort and are more responsive to incentives when given better training data. We also assess how performance varies with programmers' demographic characteristics and with their scores on the Implicit Association Test (IAT), a psychological test of implicit bias concerning gender and careers. We find no evidence that female, minority, and low-IAT engineers exhibit lower bias or discrimination in their code. However, we do find that prediction errors are correlated within demographic groups, which creates performance improvements through cross-demographic averaging. Finally, we quantify the benefits and tradeoffs of practical managerial or policy interventions for decreasing algorithmic bias, such as technical advice, simple reminders, and improved incentives.
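The cross-demographic averaging result rests on a standard ensemble argument: averaging predictions cancels error only to the extent that the errors are uncorrelated, so pooling engineers from different groups (whose errors are less correlated with one another) reduces error variance more than pooling engineers from the same group. Below is a minimal simulation sketch of that intuition in Python; the correlation value, ensemble size, and Gaussian error model are illustrative assumptions, not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials = 20_000   # simulated ensembles (illustrative)
n_engineers = 4     # engineers averaged per ensemble
rho = 0.5           # assumed within-group error correlation (hypothetical)
sigma = 1.0         # per-engineer error standard deviation

# Error covariance for engineers from the SAME demographic group:
# equicorrelated, with pairwise correlation rho.
cov_same = sigma**2 * (
    (1 - rho) * np.eye(n_engineers)
    + rho * np.ones((n_engineers, n_engineers))
)

# Error covariance for engineers drawn from DIFFERENT groups:
# independent errors.
cov_cross = sigma**2 * np.eye(n_engineers)

def avg_error_variance(cov):
    """Empirical variance of the ensemble-averaged prediction error."""
    errors = rng.multivariate_normal(np.zeros(n_engineers), cov, size=n_trials)
    return errors.mean(axis=1).var()

print(f"within-group average error variance: {avg_error_variance(cov_same):.3f}")
print(f"cross-group  average error variance: {avg_error_variance(cov_cross):.3f}")

# Theory: for equicorrelated errors, Var(mean) = sigma^2 * (rho + (1 - rho) / n),
# so correlated (within-group) errors average out far less than independent ones.
print(f"theory, within-group: {sigma**2 * (rho + (1 - rho) / n_engineers):.3f}")
print(f"theory, cross-group : {sigma**2 / n_engineers:.3f}")
```

With these assumed parameters, averaging four same-group engineers leaves error variance near 0.625, while averaging four engineers with independent errors drives it down to 0.25, which is the sense in which cross-demographic averaging improves performance.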

Link:

https://privpapers.ssrn.com/sol3/papers.cfm?abstract_id=3615404&dgcid=ejournal_htmlemail_social:personality:psychology:ejournal_abstractlink

From feeds:

Ethics/Gov of AI » amarashar's bookmarks

Tags:

Date tagged:

12/14/2020, 08:21

Date published:

12/14/2020, 03:21