Prediction markets in 2024 and poll aggregation in 2008
Statistical Modeling, Causal Inference, and Social Science 2024-11-10
With news items such as “How the Trump Whale Correctly Called the Election” and “Prediction markets got Trump’s victory right; Betting markets predicted a Trump victory, while traditional polls were showing a tossup,” prediction markets are having their coming-out party.
Before going on, let me emphasize that the headline, “Prediction markets got Trump’s victory right; Betting markets predicted a Trump victory, while traditional polls were showing a tossup,” is not actually correct. From Rajiv Sethi, here are the election-eve prediction-market prices:
The markets were giving Trump a 50% to 56% chance of winning the electoral college, very close to the polling-based forecasts, which were pretty much right at 50-50. Given the outcome, yes, 56% is a better prediction than 50%, but it’s not very different: to say that the markets “got Trump’s victory right” is incorrect, just as it would be incorrect to say that, had Harris won, they would’ve “got it wrong.” In either case, what the markets were conveying was high uncertainty.

As Elliott Morris of the prediction site FiveThirtyEight wrote, “you should not expect polls in presidential races to be perfectly accurate. You should expect them to be as imperfect as they have been historically. And in a race with very narrow advantages for the leader in each key state, that means there’s a wide range of potential outcomes in the election. . . . Trump and Harris, our model says, are both a normal polling error away from an Electoral College blowout. If we shift the polls by 4 points toward Harris, she would win the election with 319 Electoral College votes . . . Meanwhile, Trump would win with 312 electoral votes if the polls underestimate him by the same amount,” which is what happened. From a historical perspective, 312 or 319 electoral votes is not a “blowout”; it’s actually a narrow victory compared to the distribution of past elections, and in retrospect we can say that it was a mistake to consider errors in both directions to be equally likely.
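To put numbers on “better but not very different,” here’s a quick scoring-rule calculation (a back-of-the-envelope sketch of my own, using the 56% and 50% figures above, not anything from Sethi or FiveThirtyEight). Under either the Brier score or the log score, the gap between the two forecasts, given the observed outcome, is small:

```python
import math

outcome = 1  # Trump won the electoral college

for label, p in [("prediction market", 0.56), ("poll aggregate", 0.50)]:
    brier = (p - outcome) ** 2   # squared error of the stated probability
    log_score = math.log(p)      # log probability assigned to what happened
    print(f"{label}: Brier = {brier:.3f}, log score = {log_score:.3f}")

# prediction market: Brier = 0.194, log score = -0.580
# poll aggregate:    Brier = 0.250, log score = -0.693
```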
In 2024, the prediction markets were just marginally closer than the forecasts based on poll aggregates. As has been discussed elsewhere, one election—or even several national elections—does not supply enough data to distinguish the performance of probabilistic forecasts that are so similar to each other.
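Here’s a toy simulation making that concrete (again, an illustrative sketch of my own, not from any forecasting group). Suppose, generously, that the market’s 56% really is the correct probability in every election, and compare cumulative log scores against a flat 50-50 forecaster. Even under that assumption, the better forecaster doesn’t pull ahead reliably until you have on the order of a thousand elections:

```python
import math
import random

random.seed(2024)

P_TRUE = 0.56  # generously assume the market's 56% is the true probability
# Per-election log-score difference (0.56 forecaster minus 0.50 forecaster):
WIN_GAIN  = math.log(0.56) - math.log(0.50)  # favorite wins: market gains ground
LOSS_COST = math.log(0.44) - math.log(0.50)  # favorite loses: market falls behind
N_SIMS = 10_000

def prob_market_ahead(n_elections: int) -> float:
    """Fraction of simulated histories in which the 0.56 forecaster ends up
    with the higher total log score after n_elections elections."""
    ahead = 0
    for _ in range(N_SIMS):
        total = sum(WIN_GAIN if random.random() < P_TRUE else LOSS_COST
                    for _ in range(n_elections))
        ahead += total > 0
    return ahead / N_SIMS

for n in (1, 100, 1000):
    print(n, prob_market_ahead(n))
# Approximately: n=1 -> 0.56, n=100 -> ~0.75, n=1000 -> ~0.97
```

At one presidential election every four years, a thousand elections is several millennia of data, which is why one cycle tells us essentially nothing about which forecast was better calibrated.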
And it’s no surprise that prediction markets were close to the poll-based forecasts. The forecasts were public, and bettors were well aware of them. The poll-based forecasts served as a sort of anchor or stabilizing force constraining the markets.
Poll aggregation in 2008
Poll aggregation became a big deal in 2008, thanks in large part to the work of Nate Silver, first in the primaries and then in the general election campaign. Once the election was over, it turned out that Nate and other poll aggregators had forecast something like 49 out of 50 states correctly. Now, most of these states are freebies (nobody’s gonna give you credit for predicting that California will go for the Democrats), but still, it was a strong performance, as good as, if not better than, the various news media pundits whose job was to handicap the elections.
In retrospect, what made the poll aggregations work was not so much the aggregation as the polls. Polling errors were low in 2008. In 2012, the polls overstated Republican support, but not enough to change the predicted election outcome. Then in 2016, 2020, and 2024 (but not in the off-year elections) the polls overestimated support for the Democrats.
The point is that the success in 2008 led to poll aggregation being a regular part of campaign reporting, and rightly so. There were poll aggregates before 2008, but 2008 was their coming-out party. And, despite the problems with polling, aggregation isn’t going away; there’s just more pressure to improve our aggregation models and appropriately account for uncertainty.
Prediction markets in 2024 and going forward
Similarly, prediction markets have been around for a while, but 2024 is the year they broke through, because they outperformed the polls and the fundamentals-based forecasts. This was an N=1 performance and could be considered a lucky bounce in the same way as the polls’ accuracy in 2008. It’s not pure luck, though: when you execute a plan and it comes out as you’d like, you get credit for the good luck, just as you’d get blamed when things go wrong. The pollsters really were doing things right in 2008, at least for that election year, and, similarly, bettors who put their money on the Republicans in 2024 were making the right call.
In the reporting of future elections, prediction markets will be part of the mix, along with poll aggregates, fundamentals-based models, and other forecasts, not to mention the usual array of pundits. And that’s how it should be.
In the elections after 2008, many people became overconfident about poll-aggregation models, but that didn’t make those methods useless; they just needed to be interpreted with an appropriate level of uncertainty. I anticipate a similar trajectory with prediction markets: the increased attention will lead to improvements, smoothing out some of the overreactions (as with the Iowa prices shown here), and some missteps will occur, leading to a better understanding of the information contained in these markets beyond what’s available in model-based forecasts. As with poll aggregation, I expect there to be some overconfidence in what markets can do (as in that erroneous news headline quoted near the beginning of this post), eventually settling into a more realistic appraisal.