“The Limits of Ethical AI”
Statistical Modeling, Causal Inference, and Social Science 2025-11-22
Aleks Jakulin writes:
This is such a (didactically) beautiful piece of investigation.
Maybe hunting for imagined “bias” is a folly, and we should be maximizing the bias in favor of better outcomes.
I don’t get why Aleks refers to bias as being “imagined,” but I agree with his general point, which is that the focus should be on outcomes. Most simply, you’d want to assign a positive utility to each good outcome and a negative utility to each bad outcome. Given that this AI system is being implemented at all, the goal has got to be to do better than whatever was the existing procedure, so the net outcome will be positive. I’d think the best approach would be to maximize utility, as defined based on individual and aggregate outcomes, and then use some sort of side payments to compensate people who have been inappropriately classified.
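To make the utility idea concrete, here is a minimal sketch of decision-making by expected utility. The outcome utilities are made-up illustrative numbers, not anything from the report: you assign a utility to each of the four classification outcomes and act whenever acting has higher expected utility than not acting.

```python
# Hypothetical utilities for the four outcomes (illustrative values only):
# acting on a true positive, acting on a false positive,
# correctly not acting, and missing a case that needed action.
U_TP, U_FP, U_TN, U_FN = 1.0, -2.0, 0.5, -1.0

def expected_utility(p, act):
    """Expected utility given P(case is positive) = p and the chosen action."""
    if act:
        return p * U_TP + (1 - p) * U_FP
    return p * U_FN + (1 - p) * U_TN

def decide(p):
    """Act whenever acting beats not acting in expected utility."""
    return expected_utility(p, True) >= expected_utility(p, False)
```

With these particular numbers the break-even probability works out to 1.25/2.25 ≈ 0.56, so `decide(0.6)` acts and `decide(0.5)` does not; changing the utilities moves the threshold, which is the whole point of stating them explicitly.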
That said, there’s nothing wrong with estimating various aggregate measures of disparity as well, although I’d recommend against using evocative terms such as “fairness” which then get associated with various mathematical measures of asymmetry.
To put it another way, “ethical AI” has two limitations here:
1. According to the linked report, it doesn’t work so well at its stated goals.
2. Various definitions of algorithmic ethics, fairness, and bias contradict each other, and they seem to be based on a false intuition that it should be possible for all measures of disparity to be zero.
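One well-known instance of point 2 can be checked by arithmetic. Under Chouldechova’s identity, FPR = p/(1−p) · (1−PPV)/PPV · (1−FNR), so if two groups have different base rates p, a classifier cannot simultaneously equalize PPV, FNR, and FPR across them. A small sketch (the base rates and error rates here are arbitrary illustrative numbers):

```python
def fpr(base_rate, ppv, fnr):
    """False positive rate implied by the base rate, positive predictive
    value, and false negative rate, via FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR)."""
    p = base_rate
    return p / (1 - p) * (1 - ppv) / ppv * (1 - fnr)

# Same PPV and FNR for both groups, but different base rates:
fpr_a = fpr(0.3, 0.8, 0.2)  # group A, base rate 0.3
fpr_b = fpr(0.5, 0.8, 0.2)  # group B, base rate 0.5

# fpr_a != fpr_b: with unequal base rates, predictive parity and
# error-rate balance cannot both be zero, so some measure of
# disparity must remain nonzero.
```

This is why demanding that all disparity measures be zero at once is a false intuition: the identity forces a trade-off whenever base rates differ.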