Mr. Bui is the deputy graphics director for Opinion. From 2015 to 2022, he was a graphics editor for the Upshot.
Pollsters are holding their breath. Their time-tested method of randomly dialing up people isn’t working like it used to. Voter turnout in the last two national elections far outstripped that of years past. And Donald Trump’s most enthusiastic supporters seem to be shunning calls to participate in polls.
But what’s really troubling pollsters going into this election is that it’s unclear how much more error these problems will add during this cycle. In fact, many think it’s unknowable.
To be fair to pollsters, many Americans demand more certainty and precision from political polls than they do from other disciplines of social science. Just a couple of percentage points can make all the difference in an election.
I talked to 10 of the country’s leading pollsters to discuss the midterm elections and what worries them the most about polling. Most understand the public’s frustrations. Some are experimenting with new approaches. Others are concerned that the problems are deeper than what their current toolkit can fix. Spend several hours talking to them, and there’s only one conclusion you can reach: the same cross-currents of mistrust, misinformation and polarization that divide our nation are also weakening our ability to see it for what it is. The stronger those forces grow, the worse our polling gets.
And like you, pollsters are anxiously waiting for Nov. 8. Not necessarily to see if a specific candidate wins, but to see if election polling will live another day.
What follows is analysis and edited excerpts from our interviews.
Turnout troubles
Changes in voter turnout drive one major source of error in polls. To accurately survey the electorate, most pollsters have to make an educated guess about who is going to show up on Election Day. Some use voter lists, others use algorithms, and still others rate people on their likelihood to vote.
But if voter turnout is much higher — like the record-breaking years of 2018 and 2020 — then there’s a good chance their polls will be off, particularly if the surge in new voters comes from demographic groups that pollsters didn’t expect to vote.
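As a toy illustration of that failure mode (the numbers below are hypothetical, not from any actual poll): a likely-voter estimate weights each respondent’s stated preference by an estimated probability of voting, so a surge among low-propensity voters shifts the topline away from what the pollster published.

```python
# Hypothetical respondents: (stated preference, estimated turnout probability).
respondents = [
    ("D", 0.9), ("D", 0.5), ("R", 0.8),
    ("R", 0.4), ("D", 0.3), ("R", 0.9),
]

def likely_voter_share(sample, party):
    """Share of the weighted 'likely electorate' preferring `party`."""
    total = sum(prob for _, prob in sample)
    return sum(prob for pref, prob in sample if pref == party) / total

print(f"D share among likely voters: {likely_voter_share(respondents, 'D'):.0%}")

# A turnout surge concentrated among low-propensity voters invalidates
# the original probabilities (here, everyone gets more likely to vote):
surged = [(pref, min(1.0, prob + 0.4)) for pref, prob in respondents]
print(f"D share after a turnout surge: {likely_voter_share(surged, 'D'):.0%}")
```

The point is not the specific numbers but that the topline moves even though no respondent changed their mind — only the guess about who votes was wrong.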
Call waiting
Another problem pollsters face is actually reaching voters. The irony, of course, is that while we live in a world where our personal data seems to be online everywhere, actually getting in touch with us has gotten much more difficult.
The gold-standard approach has historically been random digit dialing, in which a pollster’s staff calls randomly generated phone numbers. But the explosion of unused numbers, the portability of cellphone numbers (I live in New York but frequently receive calls from pollsters in Wisconsin) and people’s greater reluctance to answer their phones have driven response rates off a cliff. Nate Cohn mentioned that in a recent poll for The New York Times, it took two hours of dialing just to complete one interview.
As a result, pollsters are looking at other ways to get in touch. Some are sending surveys to people through a link via text message. Others rely on an online stable of people who are paid to take surveys. Some survey people through an automated voice messaging system.
But most admit that they are still in the early stages of figuring out how well these new approaches will work for elections. Some pollsters, like Ann Selzer, aren’t convinced that any of these methods are the right way forward.
Methods that Emerson College Polling Institute is experimenting with
| Method | What is it? | Trade-offs |
|---|---|---|
| Text-to-web | Sending people links to an online survey through a text message. | Might be the most promising method because it seems able to capture both younger and older people. |
| Interactive voice response | People respond to an automated message with their keypad. Limited to landlines because of F.C.C. rules. | Better at reaching older people, people with a high school degree or less and people living in rural places. |
| Online panels | A preselected group of people who take online surveys. | Skews toward urban, more educated and younger people, so not necessarily representative of the electorate as a whole. And respondents are paid or given “incentives” to participate. |
The Trump effect
One of the trickiest problems pollsters have had to reckon with over the last few elections is that Donald Trump’s most dedicated supporters won’t respond to their surveys, an error pollsters call non-response bias. The key issue here is that demographically similar Republicans who are talking to pollsters haven’t voted the same way as the ones who aren’t.
This effect was particularly acute in 2020 in states like Wisconsin, Ohio and Pennsylvania, where the polls were far more biased toward the Democrats than elsewhere.
A blurry snapshot
These are the kinds of issues on pollsters’ minds when they release their polls, leading them to read polling results far less literally than the average news reader does: They see polls less like a studio portrait and more like a blurry Polaroid.
It’s true that polls include a margin of error, which is essentially the mathematical error inherent in using a small, randomly selected group of people to describe a much larger one. But that is just one source of error. This and the problems mentioned earlier all add up to a broader category known as total survey error. According to an analysis of polls from 1998 to 2014, it’s roughly twice as large as the published margin of error.
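To make that concrete, the published margin of error for a sample proportion follows a standard formula; here is a minimal sketch, with the rough “twice as large” rule of thumb from the analysis above applied on top (the 1.96 multiplier corresponds to a 95 percent confidence level):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sample proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical poll: 1,000 respondents, a candidate at 50 percent.
moe = margin_of_error(0.5, 1000)
print(f"published margin of error: +/-{moe * 100:.1f} points")
# The 1998-2014 analysis suggests total survey error runs roughly double:
print(f"rough total survey error:  +/-{2 * moe * 100:.1f} points")
```

For a 1,000-person poll that means a published margin of about plus or minus three points, but a plausible real-world error band closer to six — wider than the lead in most competitive races.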
That was before the rise of Donald Trump, the further erosion of response rates and recent record turnout. It’s probably higher now. Which is why some pollsters, like Patrick Ruffini, are more circumspect about how their work is read and shared.
Pollsters that work for political campaigns, like Robert Blizzard, aren’t as worried. Their job is to help their candidate win. For them, it’s less important for their polling to be right on Election Day than it is to be right about messaging, policy and spending. They claim to have much more data than what’s available to the public, but of course we’ll never know.
What “true” error could look like
Total survey error is much larger than people may assume, and there’s no good way to quantify exactly how large it might be.
How to read the polls
What should we make of all this? Is there a good way to read polls without having spent decades in the field? The pollsters shared a few rules of thumb: Look at the context around the poll results. How large is the share of independent voters? Who is the incumbent? What kinds of people is the pollster surveying?
Source: Elections - nytimes.com