Formalising trade-offs beyond algorithmic fairness: lessons from ethical philosophy and welfare economics
Zotero / D&S Group / Top-Level Items 2021-08-14
Type
Journal Article
Author
Michelle Seng Ah Lee
Author
Luciano Floridi
Author
Jatinder Singh
URL
https://doi.org/10.1007/s43681-021-00067-y
Publication
AI and Ethics
ISSN
2730-5961
Date
2021-06-12
Journal Abbr
AI Ethics
DOI
10.1007/s43681-021-00067-y
Accessed
2021-08-14 00:45:26
Library Catalog
Springer Link
Language
en
Abstract
There is growing concern that decision-making informed by machine learning (ML) algorithms may unfairly discriminate based on personal demographic attributes, such as race and gender. Scholars have responded by introducing numerous mathematical definitions of fairness to test the algorithm, many of which are in conflict with one another. However, these reductionist representations of fairness often bear little resemblance to real-life fairness considerations, which in practice are highly contextual. Moreover, fairness metrics tend to be implemented within narrow and targeted fairness toolkits for algorithm assessments that are difficult to integrate into an algorithm’s broader ethical assessment. In this paper, we derive lessons from ethical philosophy and welfare economics as they relate to the contextual factors relevant for fairness. In particular, we highlight the debate around the acceptability of particular inequalities and the inextricable links between fairness, welfare and autonomy. We propose Key Ethics Indicators (KEIs) as a way towards providing a more holistic understanding of whether or not an algorithm is aligned with the decision-maker’s ethical values.
Short Title
Formalising trade-offs beyond algorithmic fairness