THE US PRE-ELECTION POLLS

3 December

Here we go again. Commentary on the 2024 pre-election polls reveals some truths about American politics and polls, but it also exposes persistent misunderstandings of polling – as well as some very real issues with American elections.

8 min read

If you read or saw commentary in the immediate aftermath of the election, you were greeted with stories from reputable media asking “Why were the polls wrong?”, including these articles in The Economist and The Times:

https://www.economist.com/united-states/2024/11/07/opinion-polls-underestimated-donald-trump-again

https://www.thetimes.com/world/us-world/article/why-were-polls-wrong-about-donald-trump-once-again-9kls6fqv7

Those immediate reactions (and many others) were based on very incomplete results.  The day or so after the election, we knew that Donald Trump had achieved an Electoral College majority, but his apparently sizable lead in the vote count was far from the last word on what had happened.

Were the pollsters’ assessments of a close national vote wrong? 

The U.S. electoral system, with its 51 separate elections, has shown its weaknesses in the last three elections – and, with them, its polling weaknesses. When elections are close, the national vote estimate will not always predict the outcome. A close national vote can produce a sizable Electoral College win – and even a victory for the popular vote loser, which happened in 2000, when George W. Bush defeated Al Gore, and in 2016, when Trump ran against Hillary Clinton.

Those 51 electoral units (whose results determine the Electoral College vote) are spread over five time zones, and each sets its own rules for poll closing, vote casting and vote counting. Partisan distributions vary by location and by voting method, so early tallies from states that close early and count quickly usually over-represent Republican support and under-represent Democratic votes. Think of California, the largest state (and one of the most Democratic). Along with Oregon and Washington State, California can take days and even weeks to count votes; some votes may still not be counted. Of course, that’s embarrassingly slow, but it is the way it is.

A lot of the early criticism of polling came as the total national vote suggested a massive Trump lead of at least four points.  We now see a margin of 1.6 percentage points, which could continue to narrow. The final results are not yet in.

SO HOW DID THE POLLS DO?

Trump did get more votes than Harris, and while the polls showed a close race, most results were within the margin of error. But when final pre-election polls showed a candidate ahead, it was more likely to be Harris than Trump, by a point or two. See: https://projects.fivethirtyeight.com/polls/president-general/2024/national/
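To put “margin of error” in concrete terms, here is a minimal sketch (my illustration, not drawn from any of the polls above) of the standard 95% margin-of-error formula for a proportion from a simple random sample. Real polls use weighting and design effects that widen this figure:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion from a
    simple random sample of size n. Illustrative only: actual
    polls apply weighting and design effects that widen it."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of about 1,000 respondents:
moe = margin_of_error(1000)
print(f"+/- {moe * 100:.1f} points on each candidate's share")  # about 3.1
```

The gap between two candidates carries roughly twice this uncertainty, so a one- or two-point “lead” in a 1,000-person poll is well within statistical noise.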

But most of those “final” polls did not extend into the final days. We know from the exit polls (https://www.cnn.com/election/2024/exit-polls/national-results/general/president/0) that late deciders gravitated towards Trump.  4% of voters decided how to vote in the last three days before November 5, and they broke 47%-41% towards Trump; 3% decided earlier in the final week, and they split 54%-42% in favour of Trump.  
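As a back-of-the-envelope check (my arithmetic, using only the exit-poll figures quoted above and ignoring turnout weighting and exit-poll rounding), those two groups of late deciders alone were enough to move the national margin:

```python
# Exit-poll figures quoted above: each group's share of the
# electorate, and how that group split between Trump and Harris.
late_groups = [
    (0.04, 0.47, 0.41),  # decided in the last three days
    (0.03, 0.54, 0.42),  # decided earlier in the final week
]

# Net points each group added to Trump's national margin.
shift = sum(share * (trump - harris) for share, trump, harris in late_groups)
print(f"Late deciders added about {shift * 100:.2f} points to Trump's margin")
```

That works out to roughly 0.6 points toward Trump – enough on its own to turn a dead-even final poll into a narrow miss.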

So did the occasional and first-time voters, who would have been missed by some polls that model “likely voters.” Those polls sometimes leave out or downweight people who have not voted recently. Trump carried first-time voters 56% to 43%, as well as those who may have voted before but did not vote in 2020 (Trump 49%-45%). That group is probably more likely to include Trump’s base: less educated and rural voters, as well as young men, among whom he turned out to be surprisingly strong.

Claire Durand, a former WAPOR President and a Professor at the University of Montreal, has evaluated pre-election polls in multiple countries. Here is her abbreviated analysis of this year’s U.S. polls:

https://ahlessondages.blogspot.com/2024/11/so-did-polls-win-finally.html

And here is an article from The Washington Post:

https://www.washingtonpost.com/politics/2024/11/14/elections-polls-trump-harris/

These reports, written several days after the first blast of negative reviews of polls, may not be enough to satisfy the critics, but they do reflect a more accurate sense of what happened in the election.

Does that mean the polls were right? Sort of, but there are truths contained in the immediate post-election criticisms, even though many of those criticisms were misguided.

A thread on WAPORnet (a listserv for members of the World Association for Public Opinion Research) in the days after the election raised serious concerns about polls. Researchers worried about American polling – again. Their concerns, while based on incomplete results, suggest serious questions. What are our expectations of accuracy? Is a one-point lead a lead? Is it necessary to mention all the issues with a country’s election rules – a critical factor in the last three U.S. presidential elections? What about vote-counting timetables?

Some of the concerns that follow suggest unrealistic expectations about how accurate polls can be – expectations that current reporting practices may encourage. Other writers focused on how the polls were reported and the expectations they created. Must a one-point lead be right?

Here is one point of view, from a Central American researcher, written before the final votes were in and the margin shrank. The researcher proclaimed this:  

“The surveys that had Harris winning have no way out.  They were simply wrong.  Being close ‘does not win the cigar’ as the old saying in Montreal goes. Polls must get the winner right…being close is meaningful only if you get the right person as the winner.  There is no way out of this – sloppy or ill-designed work.  Gives us all a bad name.”

It’s worth noting that at the end of the campaign, those pollsters and media outlets that provided probabilities for the final outcome (integrating national and state polling) often gave Trump a greater than 50% probability of winning. The final YouGov projection of the Electoral College, based on close national and state polling, offered two scenarios: one model favouring Harris (using 2022 results as a guide) and one favouring Trump (using 2020). One of those estimates was very close to the final Electoral College vote: https://business.yougov.com/content/50836-election-2024-yougov-final-projection-event-recording  [NOTE: This is a recording of the livestream, which begins with some discussion of other election polling.]

The popularity of non-probability polling was again considered a potential culprit, in the words of the thread’s first poster:

“The consensus is that the polls were wrong. I know it is not that simple, but that is the story being cemented. The popular vote in the US MAY be a method experiment, but it really is a bad defence showing that the polls should be trusted. On the contrary, it sounds probably to most journalists/politicians and voters like an excuse.

“I always have to defend good polls, and after every US election, I have to start again once more. And make sure I differentiate my work from the published political polls in US. Or for that matter several of the polls commissioned by US news media. There are so many horrifying examples being based on non-prob polls, that really is not either true or should be considered newsworthy.”

There was also an attack on the news media.  According to one post, a 2024 study found Trump’s name was mentioned twice as often on Fox News and CNN as Harris’ name was:

“News media did set the tone on ‘the polls,’ and as I have stated before, they do not care about anything else than having something to speculate about….we have to work even harder to protect the integrity of the science behind what we do. The independence of polls, and also making sure that the distrust in “polls” do not harm other countries….It is not “only” political polling that is in danger. It is not only commercial polls that are in danger. It is also possible to get grants to do good academic research. Every time, bad polls are the foundation of a paper that estimates population, and they serve as “proof” that there is no need to pay for good data. Every four years, companies appear and do national surveys which by their design are methodologically wrong.  Yes, they give polling a bad name.”

WHAT CAN WE DO? WHAT SHOULD WE DO?

ESOMAR, along with other associations, has done a lot to educate journalists and set rules about how to report polls. But that formula is no longer enough – not when pre-election analyses are dominated by polling aggregators and not when the outcome is complicated by vote counting delays.

Nearly every time I was interviewed on the BBC and other non-U.S. outlets this year, I had to explain the mechanism of US elections.  The Electoral College and our voting and vote-counting rules may make little intuitive sense today, but they are the rules.  These centuries-old mechanisms have become more visible and more often criticised as American elections have become closer: the last three presidential elections highlighted the impact of the Electoral College (Hillary Clinton’s popular vote win but Electoral College loss, Biden’s election win dependent on a fraction of the vote in three states despite his larger popular vote margin, and now a small national vote lead transformed into what some term an Electoral College landslide).

There is a great deal of misinformation about polls and higher than reasonable expectations for what they can do. AAPOR (the American Association for Public Opinion Research) will conduct a methodological study of the American election just as it did in 2016 (https://aapor.org/wp-content/uploads/2022/11/AAPOR-2016-Election-Polling-Report.pdf) and 2020 (https://aapor.org/wp-content/uploads/2022/11/AAPOR-Task-Force-on-2020-Pre-Election-Polling_Report-FNL.pdf). That analysis will take a long time.

Meanwhile, what about ESOMAR? 

It is not clear that ESOMAR can solve the problem.  But I have some suggestions which could be incorporated into this year’s ESOMAR/WAPOR Polling Guide revision:

■ A recommendation that, when reporting polls, any factor that could create a difference between a pre-election poll and the outcome be mentioned: election rules, the likelihood of slow counting, late decisions, missing voters, etc.

■ For countries like the United States, that would mean noting the Electoral College specifically. 

■ We need a better way of talking about close election polls.  When even other professionals believe polls must predict the winner, we need to figure out how to respond to that expectation.

■ We should proactively remind people of polling’s limits.

I welcome any other thoughts.

Kathleen Frankovic
Consultant at YouGov
