8 arguments against polling (some are good arguments, some are bad)
Statistical Modeling, Causal Inference, and Social Science 2026-01-09
1. Background
A few months ago we had a post explaining what pre-election polls can and cannot do. I’ll repeat the first part because I think it’s important, and it’s a story that can confuse people:
I think there’s too much political polling. If you want to forecast the election, you get most of the way using the economic and political “fundamentals.” Polls, when analyzed carefully, provide some additional information, but not enough to justify the saturation coverage they receive in the news media and on social media. Even the most dramatic polls, when interpreted in a careful Bayesian way, don’t tell us much.
Despite that, I think I understand why there’s so much reporting of polls on news and social media. Survey organizations make money doing polls for commercial sponsors, and if you’re doing a national or state poll anyway, you might as well throw in some political questions at the beginning so you can get some press for your org. Also, when the election is coming up, a lot of avid newsreaders want the latest information, so news media will commission their own polls to get some traffic. It all makes sense. But, in a world with a zillion polls, you don’t get much more from the zillion-and-first poll; indeed you don’t get much from the next zillion polls; indeed you don’t get much from the first zillion polls.
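To make the “careful Bayesian way” remark above concrete, here is a minimal sketch in Python. All the numbers (the fundamentals-based prior, the poll result, the assumed nonsampling-error floor) are invented for illustration; this is not anyone’s actual forecasting model.

```python
import numpy as np

# Invented numbers for illustration only.
# Prior from a fundamentals-based forecast of the two-party vote share:
prior_mean, prior_sd = 0.52, 0.02        # fundamentals say 52%, +/- 2 points

# A "dramatic" poll: 48% from n = 1000 respondents.
poll_mean, n = 0.48, 1000
sampling_sd = np.sqrt(poll_mean * (1 - poll_mean) / n)   # about 1.6 points
nonsampling_sd = 0.025                                    # assumed survey-error floor
poll_sd = np.sqrt(sampling_sd**2 + nonsampling_sd**2)

# Conjugate normal-normal update: precision-weighted average of prior and poll.
post_prec = 1 / prior_sd**2 + 1 / poll_sd**2
post_mean = (prior_mean / prior_sd**2 + poll_mean / poll_sd**2) / post_prec
post_sd = np.sqrt(1 / post_prec)

print(f"poll total sd: {poll_sd:.3f}")
print(f"posterior: {post_mean:.3f} +/- {post_sd:.3f}")
```

With these made-up numbers, a poll that comes in four points below the fundamentals forecast moves the estimate only about a third of the way toward the poll, and the posterior uncertainty remains close to two points. That is what “don’t tell us much” means in practice: once nonsampling error is accounted for, even a dramatic poll is just one noisy data point.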
From our 1993 paper, Why are American Presidential election campaign polls so variable when votes are so predictable? (incidentally, polls aren’t so variable anymore, but that’s another story), the quick summary is that the general-election outcome is largely predictable from political and economic fundamentals, even though polls during the campaign swing around quite a bit.
Our claim is not that fundamentals-based forecasts will always be within 0.3% of the national vote—for one thing, there are many different fundamentals-based forecasts out there; for another, they were off by a couple percentage points in 2000—but rather that, as I said above, they get you most of the way there. I think that a world in which the news media focused on fundamentals-based forecasts and then reported on the occasional poll (recognizing the general level of nonsampling error) would be better than the pre-election world we have now. It wouldn’t change who wins the election; it would just provide a saner basis for reporting during the campaign period.
Poll aggregation got huge in 2008, then it got overhyped, and that in turn has led to annoyance at polls and their aggregators. In 2024 there was annoyance that the forecasts were so close to 50-50, and then, after the votes were counted, annoyance that the point forecasts were off.
In the run-up to the election, forecasters were very open about their uncertainty. For example, Elliott Morris of 538 wrote, “Trump and Harris are both a normal polling error away from a blowout. The race is uncertain, but that doesn’t mean the outcome will be close,” and Nate Silver wrote, “One thing that might be counterintuitive is that even a normal-sized polling error — polls are typically off by around 3 points in one direction or the other — could lead to one candidate sweeping all 7 key battleground states. . . . the baseline assumption of the Silver Bulletin model is that while the polls could be wrong again — and in fact, they probably will be wrong to some degree — it’s extremely hard to predict the direction of the error.” But people didn’t always want to hear this.
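Silver’s point about a sweep is easy to check with a toy simulation. The setup below (seven battleground states polling at exactly 50/50, a shared national polling error of about 3 points plus small independent state-level errors) is invented for illustration and is not any forecaster’s actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims, n_states = 100_000, 7

# Invented setup: every battleground state polls at exactly 50/50,
# so the poll margin (candidate A minus candidate B) is zero in each.
poll_margin = np.zeros(n_states)

# Polling errors are strongly correlated across states: one shared
# national error of about 3 points plus smaller independent state errors.
national_error = rng.normal(0.0, 0.03, size=(n_sims, 1))
state_error = rng.normal(0.0, 0.01, size=(n_sims, n_states))
actual_margin = poll_margin + national_error + state_error

sweep = np.all(actual_margin > 0, axis=1) | np.all(actual_margin < 0, axis=1)
print(f"P(one candidate sweeps all {n_states} states) ~ {sweep.mean():.2f}")
```

With these numbers, one candidate sweeps all seven states in well over half the simulations, compared to the roughly 1.6 percent you would get if the seven outcomes were independent coin flips. Correlated polling error is what makes “close race, lopsided map” entirely plausible.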
2. Update
I’m writing this current post in response to a recent post by economist Dan Davies, who thinks opinion polls are really bad: he likens them to dubious financial products, states that “it might be the case that this means that survey research is no longer a viable way to find things out,” and recommends “prohibition” (but “not legal prohibition”) of polls. He also wrote, “I really don’t see how you could see this as communicating that a Trump landslide was a significant probability.” But Trump did not win in a landslide. He won by 2% of the vote. When Reagan won 59% of the two-party vote in 1984, that was a landslide. When Obama won 54% in 2008, that was a decisive victory, but not a landslide. Trump won less than 51% of the two-party vote. It was a close election, and that’s why the Economist, Fivethirtyeight, and Nate Silver repeatedly emphasized that the election could go either way.
OK, that’s fine. People make mistakes. Davies is an expert on finance and I’m an expert on polling. I can have strong opinions on finance (for example, “Bitcoin is a scam”) and I might even be right, but mostly I’m outsourcing my views on such topics to third parties. Similarly, Davies makes strong statements about polling, but he realizes these are just his opinions.
What was more interesting to me was the discussion in the comments to that blog post. When I came across the post, I went to the trouble of responding to many of the comments, and I was struck by how much fury there was at the polls. Lots of people seemed to believe that polls were useless, or destructive, or both, and there were various uninformed sideswipes (for example, Davies referring to “efforts to reweight nonrandom samples by subjective brute force” or a commenter suggesting that pollsters “thought that people were like lab mice”) which indicated a general animus toward survey research. Which, yeah, I kind of understand; see my comment at the very top of this post that I think there’s too much political polling, and I’ve been saying that for a long time.
3. Eight different arguments, all tangled up
I feel like these commenters who are so mad at polls have several different arguments which get mixed up:
1. Forecasters (including us) communicated uncertainty poorly.
2. Forecasters (including us) did this on purpose because we benefited from people thinking our forecasts are more certain than they are.
3. Forecasting and poll aggregation have been oversold.
4. There’s too much polling.
5. If a forecaster gives 50/50 odds, that’s equivalent to giving up.
6. For the purpose of national election forecasting, a forecast that can be off by 2 percentage points is useless.
7. For the purpose of national election forecasting, a forecast that can be off by 2 percentage points is worse than useless if users interpret it deterministically.
8. All of polling is useless because response rates are low and it’s not random sampling.
Some of these arguments make sense and some don’t:
1. I hate to admit it, but maybe they’re right on this, that we should’ve done more to emphasize uncertainty. But see item 7 below.
2. That’s ridiculous in light of the many open statements that we made, emphasizing uncertainty. We certainly weren’t hiding it!
3. Agreed. After 2008 in particular, poll-based forecasting got too good a reputation because nonsampling errors happened to be close to zero that year. None of us tried to oversell what we were doing, but the news media and the public got sucked in by the hype.
4. Agreed. I’ve written this in public many times.
5. False. Not all elections are close. It is informative to learn that an election could go either way.
6. False. Not all national elections are so close (recall 2008), and even in a close one it’s informative to learn that the result is likely to be close.
7. This is possible . . . but I actually don’t think that many readers interpreted the forecasts deterministically! What I see is a lot of commenters saying that other people interpreted the forecasts deterministically. It seems to me that news coverage was pretty clear that the election was a toss-up and that either candidate could sweep the swing states. So I kinda feel that lots of the discourse about polls being wrong etc. is meta. For example, did Dan Davies think, a week before the election, that there was no plausible chance that Harris could lose all the swing states? I’m guessing no; he’s just concerned that other people could make that mistake. But I don’t recall seeing any pundits, or any normies, making that mistake at the time. I guess there must be some cases, and there are some particular cases where people overreacted to polls (as in the pre-election Iowa poll), but in those cases the forecasts were actually a voice of reason.
8. Again, being off by a couple percentage points is not bad. Also, polls have never been random samples, and polling accuracy is as good now as it was decades ago when response rates were higher. To me, saying that polling is useless (or, as some commenters said, that it should be “prohibited” or “banned”) makes about as much sense as shutting down the Bureau of Labor Statistics because their measurements are imperfect and their estimates need to be adjusted, or shutting down the National Weather Service because somebody somewhere might not take an umbrella to work one day when the forecast probability of rain is only 46%.
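For readers wondering what “adjusting” or “reweighting” a nonrandom sample actually involves, here is a minimal poststratification sketch. The age groups, sample counts, and population shares are invented for illustration; real survey adjustment uses many more variables and often model-based methods such as multilevel regression and poststratification.

```python
import numpy as np

# Invented example: a poll that over-represents older respondents.
groups    = ["18-34", "35-64", "65+"]
n_sample  = np.array([150, 450, 400])     # raw sample skews old
support_a = np.array([0.42, 0.50, 0.56])  # candidate A's share within each group
pop_share = np.array([0.30, 0.50, 0.20])  # each group's share of the electorate

for g, share, pop in zip(groups, n_sample / n_sample.sum(), pop_share):
    print(f"{g}: {share:.0%} of sample vs {pop:.0%} of population")

# Unweighted estimate: just average over respondents as collected.
raw_est = np.sum(n_sample * support_a) / np.sum(n_sample)

# Poststratified estimate: reweight each group to its population share.
adj_est = np.sum(pop_share * support_a)

print(f"unweighted: {raw_est:.3f}, poststratified: {adj_est:.3f}")
```

With these invented numbers, reweighting the sample to match the population’s age distribution shifts the estimate by a couple of points. The adjustment is systematic and checkable, which is rather different from “subjective brute force.”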
Again, all this is tricky. It’s easy for me to write about polling uncertainty because it’s a problem I’ve been thinking about for a long time. Ask me to write about finance, and all I can offer you is at worst my hot take and at best some social science analogies.
4. Summary
If you frame this as a debate between one side (Elliott Morris, Nate Silver, me, etc.) who are pro-polling and another side who are polling skeptics, then it makes sense to take the side of the polling skeptics. After all, polls aren’t all that. Poll aggregation has been hyped like evolutionary psychology has been hyped, like bitcoin has been hyped, etc., and it’s natural to want to take the under on polling at this point.
But when you get to the specifics, it’s another story. Items 3 and 4, and arguably item 1, above are legitimate, serious criticisms. The others, not so much. It should be possible to be bothered by hyping of polls, to think they’ve been oversold and even to think they have been a net minus for modern society, without grabbing on to flat-out false arguments such as 2, 5, and 6, and without being so sure about questionable arguments such as 7 and 8.
To put it another way, instead of throwing out 8 anti-poll arguments and then saying how polling is so horrible, how about starting with saying that polls make you uncomfortable, you think they’ve been overhyped, and here are some concerns. Get your position out there first, and then you can evaluate each argument in turn without feeling the need to build a case.
5. Jessica Hullman’s thoughts
I sent the above to my computer science colleague Jessica Hullman, who added:
Fwiw, as someone who has watched the evolution of uncertainty communication carefully over the last few election cycles, I would say there has been marked improvement between 2016 and 2024 on how forecasters communicated uncertainty. Not to mention, anyone who was old enough to be shocked in 2016 had that experience behind them, and inevitably brought it to the following couple elections. People hate being surprised when it goes against their preferences. So while there may be more room to improve uncertainty communication further, I also think it’s a much smaller gap than it used to be, and much of the audience who was very surprised in 2016 was less naive this time.
And so, I agree with your response to point 7, that these concerns often come up about other people taking polls too seriously, but it’s hard to imagine that those complaining so loudly now were truly that shocked.
Overall I don’t think that the fact that people can misinterpret uncertainty easily is a good reason to withhold information. Lots of people are interested in learning from the forecasts and all the info about the process that comes with them. People who don’t find them helpful can ignore them.
Beyond the knee-jerk reaction to suppress uncertainty, I wonder how ambiguity about the prediction target contributes to people getting confused, or to people believing that everyone else is confused: are we predicting what would happen if the election were held today, or what will happen on the day it’s actually held? How differently should the reader interpret the forecast in each case? Maybe there should be more clarity around how to think about the target of the forecast and what uncertainty gets baked in because of it.
I like your point that it’s hard to separate this from arguing that the BLS should stop producing estimates, because they aren’t perfect.
People vehemently turning on polling and forecasting does not make a lot of sense to me. But it does seem to be a thing.
