All the things we have to do that we don’t really need to do: The social cost of junk science
Statistical Modeling, Causal Inference, and Social Science 2017-06-22
I’ve been thinking a lot about junk science lately. Some people have said it’s counterproductive or rude of me to keep talking about the same few examples (actually I think we have about 15 or so examples that come up again and again), so let me just speak generically about the sort of scientific claim that:

– is presented as having an empirical basis;
– but where the empirical support comes from a series of statistical analyses, that is, no clear pattern in any individual case but only in averages;
– where the evidence is a set of p-values that are subject to forking paths;
– and where a design analysis suggests large type M and type S errors;
– where replications are nonexistent, equivocal, or clearly negative;
– where the theory is general enough that it can support empirical claims in any direction;
– and the theory has some real-world implication for how we do or should live our lives;
– and the result is one or more publications in prestigious general-science or field journals;
– with respectful publicity from the likes of NPR and the New York Times;
– and out-and-out hype from the likes of Ted, Gladwell, and Freakonomics.
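To make the design-analysis criterion above concrete, here is a minimal simulation sketch of type M (magnitude) and type S (sign) errors. The numbers are hypothetical, chosen only to illustrate an underpowered study; they are not taken from any of the examples in the post.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical design, not from the post: a small true effect of 0.1
# measured with standard error 0.5, i.e. a badly underpowered study.
true_effect = 0.1
se = 0.5
n_sims = 100_000

# Simulate the estimates such a study would produce.
estimates = rng.normal(true_effect, se, n_sims)

# Keep only the "statistically significant" results (|z| > 1.96),
# mimicking a literature that publishes on significance.
significant = np.abs(estimates) > 1.96 * se

power = significant.mean()
# Type S error: among significant results, fraction with the wrong sign.
type_s_rate = np.mean(estimates[significant] < 0)
# Type M error: average exaggeration of the effect among significant results.
exaggeration = np.mean(np.abs(estimates[significant])) / true_effect

print(f"Power: {power:.2f}")
print(f"Type S error rate among significant results: {type_s_rate:.2f}")
print(f"Exaggeration ratio (type M): {exaggeration:.1f}x")
```

With these assumed numbers, the significant estimates are all at least ten times the true effect and a substantial fraction point in the wrong direction, which is the pattern the design-analysis criterion is flagging.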
Before going on, let me emphasize that science is not a linear process, and mistakes will slip in, even under best practices. There can always be problems with data collection and analysis, and conditions can change, so that an effect can be present today but not next year. And even a claim that is supported by poor research can still happen to be correct. Also there’s no clean distinction between good science and junk science.
So here I’m talking about junk science that, however it was viewed by its practitioners when it was being done, can in retrospect be viewed as flawed, with those flaws being inherent in the original design and analysis. I’m not just talking about good ideas that happened not to work out, and I recognize that even bad ideas can stimulate later work of higher quality.
In our earlier discussions of the consequences of junk science, we’ve talked about the waste of resources as researchers pursue blind alleys, going in random directions based on their overinterpretation of chance patterns in data; we’ve talked about the waste of effort of peer reviewers, replicators, etc.; we’ve talked about the harm done by people trying therapies that don’t work, and also the opportunity cost of various good ideas that don’t get tried out because they’re lost in the noise, crowded out by the latest miracle claim.
On the other side, there’s the idea that bad science could still have some positive effects: fake miracle cures can still give people hope; advice on topics such as “mindfulness” could still motivate people to focus on their goals, even if the particular treatments being tried are no better than placebo; and, more generally, the openness to possible bad ideas also allows openness to unproven but good new ideas.
So it’s hard to say.
But more recently I was thinking about a different cost of junk science, which is as a drag on the economy.
The example I was thinking about in particular was an argument by Ujjval Vyas, which seemed plausible to me, that there’s this thing called “evidence-based design” in architecture which, despite its name, doesn’t seem to have much to do with evidence. Vyas writes:
The field is at such a low level that it is not worth mentioning in many ways except that it is deeply embedded in a $1T industry for building and construction as well as codes and regulations based on this junk. . . .
And here’s an excerpt from the Wikipedia page on evidence-based design:
The Evidence Based Design Accreditation and Certification (EDAC) program was introduced in 2009 by The Center for Health Design to provide internationally recognized accreditation and promote the use of EBD in healthcare building projects, making EBD an accepted and credible approach to improving healthcare outcomes. The EDAC identifies those experienced in EBD and teaches about the research process: identifying, hypothesizing, implementing, gathering and reporting data associated with a healthcare project.
So this is a different cost from those discussed before. It’s a sort of tax that goes into every hospital that gets built. Somebody, somewhere, has to pay for these Evidence Based Design consultants, and somebody has to pay extra to build the buildings just so.
For any particular case, the advice probably seems innocuous. For example, “People recover from surgery faster if they have a view of nature out the window.” Sure, why not? Nature’s a good thing. But add up this and various other rules and recommendations and requirements, and it sounds like you’re driving up the cost of the building, not to mention the money and effort that gets spent filling out forms, paying the salaries and hotel bills of consultants, etc. It’s a kind of rent-seeking bureaucracy, even if all or many of the people involved are completely sincere.
OK, I don’t know anything about evidence-based design in architecture so maybe I’m missing a few tricks in this particular example. I still think the general point holds, that one hidden cost of junk science is that it fills the world with a bureaucracy of consultants and payoffs and requirements.