The case for fairer algorithms – DeepMind Ethics & Society – Medium
amarashar's bookmarks 2018-06-04
Summary:
From a technical point of view, we’ve found that even when explicit information about race, gender, age and socioeconomic status is withheld from models, other features in the remaining data often correlate with these categories and act as proxies for them. A person’s postal code, for instance, tends to reveal much about their protected characteristics. Simply removing information about protected attributes therefore does little to shield people from discrimination, and may even make things worse. Commenting on this problem, Silvia Chiappa, a research scientist at DeepMind, observes that ‘information about group membership is often needed to disentangle complex patterns of causation and to protect people from indirect discrimination.’
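A minimal synthetic sketch of the proxy effect described above (this is not code from the article; the postcode distributions, feature names, and numbers are invented for illustration): a classifier that is never shown the protected attribute can still recover it from a correlated postcode feature, which is why dropping the attribute column does not remove the signal.

```python
# Illustrative sketch: a proxy variable (a fictional "postcode") can leak a
# protected attribute even when that attribute is withheld from the model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Protected attribute: never included in the model's feature matrix.
group = rng.integers(0, 2, size=n)

# Synthetic postcode that correlates strongly with group membership:
# each group is concentrated in a different postcode band.
postcode = np.where(group == 1,
                    rng.normal(70, 10, size=n),
                    rng.normal(30, 10, size=n))

# An unrelated, legitimate feature for contrast.
other_feature = rng.normal(0, 1, size=n)

# Feature matrix deliberately excludes the `group` column.
X = np.column_stack([postcode, other_feature])

# The protected attribute is nonetheless recoverable from the proxy alone.
X_tr, X_te, g_tr, g_te = train_test_split(X, group, random_state=0)
clf = LogisticRegression().fit(X_tr, g_tr)
print(f"Accuracy recovering protected attribute: {clf.score(X_te, g_te):.2f}")
# Typically well above 0.9 with these synthetic parameters: withholding the
# protected column did not remove the information it carries.
```

Under these assumptions, any downstream model trained on `X` can implicitly condition on group membership through the postcode, which is the sense in which blind removal of protected attributes fails to prevent indirect discrimination.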