The case for fairer algorithms – DeepMind Ethics & Society – Medium

amarashar's bookmarks 2018-06-04

Summary:

From a technical point of view, we’ve found that even when explicit information about race, gender, age and socioeconomic status is withheld from models, some of the remaining data often still correlates with these categories, serving as a proxy for them. A person’s postal code, for instance, tends to reveal much about their protected characteristics. Simply removing information about protected attributes therefore does little to shield people from discrimination, and may even make things worse. Commenting on this problem, Silvia Chiappa, a research scientist here at DeepMind, observes that ‘information about group membership is often needed to disentangle complex patterns of causation and to protect people from indirect discrimination.’
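The proxy effect described above is easy to reproduce on synthetic data. The sketch below (a hypothetical illustration, not taken from the article: the feature names, coefficients and the 80% correlation strength are all assumptions) builds a score that never sees the protected attribute directly, yet still differs sharply between groups because postcode stands in for group membership.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (withheld from the score below).
group = rng.integers(0, 2, n)

# Assumed 80% of people live in the postcode area associated with
# their group, so postcode is a strong proxy for group membership.
postcode = np.where(rng.random(n) < 0.8, group, 1 - group)

# An outcome-relevant feature that is also group-linked (assumed means).
income = rng.normal(50 + 10 * group, 5, n)

# A score computed without any direct access to `group`.
score = 0.5 * postcode + 0.02 * income

# Despite withholding the protected attribute, the scores still
# differ between groups, because postcode carries the information.
gap = score[group == 1].mean() - score[group == 0].mean()
print(f"mean score gap between groups: {gap:.2f}")
```

Dropping the `group` column from the inputs changes nothing here: to detect or correct the disparity, the analysis needs access to group membership, which is exactly Chiappa’s point.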

Link:

https://medium.com/@Ethics_Society/the-case-for-fairer-algorithms-c008a12126f8

From feeds:

Ethics/Gov of AI » amarashar's bookmarks

Tags:

Date tagged:

06/04/2018, 11:27

Date published:

06/04/2018, 07:27