Algorithms can be more accountable than people
Freedom to Tinker 2014-03-19
At an academic meeting recently, I was surprised to hear some social scientists accept as obviously correct the claim that involving “algorithms” in decision-making, instead of sticking with good old-fashioned human decision-making, necessarily reduces accountability and increases the risk of bias. I tend to believe the opposite, that making processes algorithmic improves our ability to understand why they give the results they do. Let me explain why.
Consider a process to decide who receives some award or benefit; and suppose we want to make sure the process is not biased against some disadvantaged group, which I’ll call Group G. If a person just makes the decision, we can ask them whether they were fair to members of Group G. Or we can ask them why they decided the way they did. Either way, they can simply lie about their true motivation and process, to construct a story that is consistent with non-discrimination; or they might honestly believe their decision was fair even though it reflected unconscious bias. At the risk of massive understatement: history teaches that this kind of bias in human decision-making is difficult to prevent.
An algorithm, by contrast, cannot hide from everyone the details of how it reached its decision. If you want to know that an algorithm didn’t use information about a person’s Group G status, you can verify that the Group G status wasn’t provided to the algorithm. Or, if you prefer, you can re-run the algorithm with the Group G status field changed, to see if the result would have been different. Or you can collect statistics on whether certain parts of the algorithm have a disparate impact on Group G members as compared to the rest of the population.
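To make that concrete, here is a minimal sketch of both checks in Python. The decision rule, the field names, and the applicant records are all hypothetical stand-ins, not anything from a real system; the point is just that, given access to the algorithm, each test is a few lines of code.

```python
# A toy decision rule plus the two checks described above. Everything
# here (the rule, the "group" field, the records) is a made-up example.

def decide(applicant):
    """A stand-in decision rule based on income and tenure.
    Note that it never reads the "group" field."""
    return applicant["income"] > 40000 and applicant["years_employed"] >= 2

def counterfactual_test(decide, applicant, field="group", values=("G", "not-G")):
    """Re-run the decision with the Group G field set each way.
    Returns True when the outcome is the same regardless of that field."""
    outcomes = {decide(dict(applicant, **{field: v})) for v in values}
    return len(outcomes) == 1

def disparate_impact_ratio(decide, applicants, field="group", group="G"):
    """Selection rate for Group G divided by the rate for everyone else.
    A ratio well below 1.0 is a statistical red flag."""
    def rate(pool):
        return sum(1 for a in pool if decide(a)) / len(pool)
    in_group = [a for a in applicants if a[field] == group]
    others = [a for a in applicants if a[field] != group]
    return rate(in_group) / rate(others)

applicants = [
    {"group": "G",     "income": 52000, "years_employed": 3},
    {"group": "G",     "income": 31000, "years_employed": 1},
    {"group": "not-G", "income": 48000, "years_employed": 4},
    {"group": "not-G", "income": 45000, "years_employed": 2},
]

print(all(counterfactual_test(decide, a) for a in applicants))  # True: group ignored
print(disparate_impact_ratio(decide, applicants))               # 0.5: half the rate
```

Note that the two tests answer different questions: the counterfactual test asks whether the Group G field influenced a particular decision at all, while the impact ratio asks whether outcomes differ between groups in aggregate even when the field is never consulted. Neither test can be run against a human decider, who cannot be re-run with one fact changed.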
This is not to say that everything about algorithms is easy. There are plenty of hard problems in understanding algorithms, both in theory and in practice. My point is merely that if you want to understand how a decision was made, or you want to build in protections to make sure the decision process has certain desirable properties, you’re better off working with an algorithm than with a human decision, because the algorithm can tell you how it got from inputs to outputs.
When people complain that algorithms aren’t transparent, the real problem is usually that someone is keeping the algorithm or its input data secret. A process that emits results without explanation is non-transparent no matter what is behind the curtain, a person or a machine.
Of course, a company might be legally justified in keeping its algorithm secret from you, and it might be good business to do so. Regardless, it’s important to recognize that non-transparency is a choice the company is making, not a consequence of the fact that it’s using computation.
If accountability is important to us—and I think it should be—then we should be developing ways to reconcile transparency with partial secrecy, so that a company or government agency can keep some aspects of their process secret when that is justified, while making other aspects transparent. Transparency needn’t be an all-or-nothing choice.
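To sketch what such a middle ground might look like (my illustration, not a mechanism proposed here), a cryptographic commitment lets a company or agency publish a binding fingerprint of its secret decision rule today and reveal the rule to an auditor only later:

```python
import hashlib
import os

def commit(rule_source, nonce):
    """Publish this digest now; it reveals nothing about the rule itself."""
    return hashlib.sha256(nonce + rule_source).hexdigest()

def verify(commitment, rule_source, nonce):
    """Later, an auditor checks the revealed rule against the digest."""
    return commit(rule_source, nonce) == commitment

# A hypothetical secret rule; the nonce is kept alongside it until audit.
secret_rule = b"award if income > 40000 and years_employed >= 2"
nonce = os.urandom(32)

published = commit(secret_rule, nonce)        # made public immediately
print(verify(published, secret_rule, nonce))  # True: the revealed rule matches
print(verify(published, b"a rule edited after the fact", nonce))  # False
```

The rule stays secret until the audit, yet it cannot be quietly rewritten once the outcomes are known: transparency about the integrity of the process, secrecy about its details.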