Implicitly denying the controversy associated with the Implicit Association Test. (Whassup with the American Academy of Arts & Sciences?)
Statistical Modeling, Causal Inference, and Social Science 2024-08-20
One day this issue of Daedalus arrived in the mail. The title was Understanding Implicit Bias: Insights & Innovations, and it had a bunch of articles, 35 authors in total. One of the articles was called The Science of Implicit Race Bias: Evidence from the Implicit Association Test; another was The Implicit Association Test; and there were others on the effects of implicit bias, implicit bias science, etc.
I was curious about this, because it was my impression that the Implicit Association Test didn’t actually work. See here and, for more background, here.
I looked at the articles in this new Daedalus issue and didn't see any dissenting views on the Implicit Association Test. A couple of the articles mentioned and cited dissenting takes, but only in passing and in a context where the IAT was treated as legit.
On the other hand, I guess that not everybody thinks the Implicit Association Test was a mistake. A quick google turns up this site at Harvard, for example. OK, Harvard’s not perfect (just for example, see here, here, here), but, still, the point is that, yeah, there are people out there who still believe in the IAT.
Given the level of controversy on this topic, I was surprised that all 18 of the articles in this journal issue took the same position. Shouldn’t they have had a few saying, “Yeah, sure, implicit bias is a thing, it’s just not a thing that you should try to measure using the Implicit Association Test,” or something like that? Or shouldn’t the introduction have found space to say something like, “The Implicit Association Test is controversial and has been subjected to serious criticisms (refs.); nonetheless we include some papers that use it because . . .”?
It’s their journal, they can do whatever they want; still, it bothered me.
But then this got me thinking about the more general question: When is it appropriate to give a one-sided perspective? Sometimes, right? When I write a book about Bayesian statistics, or Brad Efron writes a book about the bootstrap, nobody's expecting or demanding that we give time to "the other side." Sure, we should talk about the limitations and weaknesses of our methods, but there's no expectation that we include a chapter written by somebody else presenting an opposing view. And if somebody publishes a journal issue on progress on cold fusion, it doesn't make sense to include any articles by the many physicists who think it's bogus. After all, a simple google will reveal the skeptical consensus on that topic.
This Implicit Association Test thing seemed different, somehow. Maybe because I can't see why the American Academy of Arts & Sciences (the publisher of Daedalus) should be full-throatedly endorsing the IAT, any more than I'd expect them to endorse cold fusion or any other idea that's so disputed. I'm a member of the American Academy of Arts & Sciences, so I feel some responsibility to express my discomfort when they do something that seems wrong.
I'm not really sure; I'm no expert on the IAT. Maybe the editors of the volume weren't aware of the controversy? It says that the journal articles come from a workshop that they held on the topic a few years ago.
I guess this is a general problem in policy research, and in science more broadly: on one hand, we make progress by working in a focused way with colleagues who share our general research direction; on the other hand, we also make progress through criticism. I can see why they didn't want to produce a 50/50 volume called "The controversy over implicit bias" or whatever; that said, I'm still uncomfortable that the volume they did release was 100/0.