Polling error is not inevitable, and there are signs that 2022 may be different.
Last Monday, I wrote about the early “warning signs” in this year’s Senate polling. Then three days later, I helped write up the results of a new national survey: a New York Times/Siena College poll showing Democrats up by two percentage points in the generic ballot among registered voters.
If you thought that was a touch contradictory, you’re not alone. One person tweeted: “Didn’t you just write that the polls are an illusion? Why should we believe this one?”
It’s a perfectly fair question, and it’s one I encounter regularly: Should I believe these polls — or not? It’s also a surprisingly complicated question to answer.
This may sound obvious, but I’ll say it anyway: We believe our polls provide valuable information about voters and the state of the election. These polls are time-consuming, expensive and stressful, with considerable mental health and reputational costs when they wind up “wrong.” We would not be making this effort if we believed they were doomed to be useless.
There are warning signs, yes — and we’ll track them in this newsletter. I do not think you should unquestioningly “believe” polls, at least not in the sense that you “believe” figures from an encyclopedia. Polls are not exact measurements, like the diameter of the Earth or the speed of light. They are imprecise estimates — and even the classic margin of error seriously understates the actual degree of uncertainty.
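For readers who want to see just how small that classic margin of error is, here is a minimal sketch of the textbook calculation. It uses the simple random-sampling formula, not the Times/Siena methodology, and the sample size of 850 is an assumption chosen purely for illustration.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Textbook 95% margin of error for a simple random sample.

    p: estimated proportion (0.50 for a 50 percent result)
    n: number of respondents
    z: critical value for the confidence level (1.96 for roughly 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical sample size, used only for illustration.
n = 850
moe = margin_of_error(0.50, n)
print(f"+/- {moe:.1%} on a single candidate's share")  # about +/- 3.4%

# The margin on the gap between two candidates is roughly double that,
# and the formula assumes away nonresponse bias, the kind of error
# behind the 2020 misses, so the true uncertainty is larger still.
```

That last point is the key one: the formula captures only random sampling error, which is why the stated margin understates the real-world uncertainty.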
But despite all the limitations, a repeat of the 2020 polling error is not inevitable, either. In fact, our most recent survey showed some small, encouraging signs that this year's polls may not be like those of 2020.
Polling errors at sea
I spent a week on a lake in Ontario last month, so forgive me for analogizing polling failures to a different kind of catastrophe: a boat capsizing in a violent storm.
Like a rickety boat in a storm, the polling error of the last cycle required both difficult conditions — whether rough waves or the lower likelihood that Trump voters would participate in surveys — and a polling vessel that simply couldn't handle the adversity.
Nonetheless, we’re setting sail yet again this year.
Is it a little scary? Yes. We pollsters are stuck with the same boats that flipped last time. We would like to buy something sturdier, but there isn’t anything better on the market. That’s not to say that we aren’t making changes, but many of our best ideas are the equivalent of tightening the screws and patching holes. Even with such changes, we can’t be confident that our ships today will withstand the storm we experienced in 2020.
But are we doomed? No. For example, there’s a credible theory that the pandemic contributed to polling error, as safety-conscious liberals were more likely to be home during lockdowns (and answered telephone calls) while conservatives went out and lived their lives. With the lockdowns over, those tendencies may be, too.
There’s also a credible theory that Donald J. Trump himself is an important factor. If so, polling could be more “normal” with him off the ballot.
What you’ve read to this point is the nautical version of an analysis written by Nate Silver of FiveThirtyEight, who argued that the polls may not be wrong this fall. Although the piece is sort of framed as a rebuttal to mine, I don’t really disagree with any of it. Consider it recommended reading.
A favorable data point from our most recent poll: response rates by party
There’s one big difference between polling after 2020 and setting sail the day after a storm: The sailor can probably find a weather forecast.
We don’t get poll error warnings, but last week’s newsletter pointed to something about as close as it gets: surprising Democratic strength in exactly the same places where the polls overestimated Democrats last time. If you’ll permit this metaphor to continue, it’s an ominous cloud on the horizon. Dark clouds don’t necessarily mean there will be a fierce storm, but if there were going to be a storm, we’d see some dark clouds first. Similarly, this pattern in the state polling is exactly what we would expect to see if the polls were going to err in the same way they did two years ago.
This week, I can report a new measurement of the conditions facing pollsters: whether Democrats or Republicans were likelier to respond to our latest Times/Siena survey.
I wasn’t systematically tracking this in fall 2020 — the data is not always easy to collect and process, especially with everything else going on. But if I had been tracking response rates by party, they would have been another warning sign.
Looking back at our data from September and October 2020, white Democrats were 20 percent likelier to respond to Times/Siena polls than white Republicans. This disparity most likely betrayed a deeper problem: Trump voters, regardless of party, were less likely to respond to our polls.
On this front, I have good news: The response rate by party is more balanced so far this cycle. In the national poll we released last week, white Democrats were only 5 percent likelier to complete the survey than white Republicans. That’s back down near the level of our October 2019 polling, when our early survey of a projected Biden-Trump contest came eerily close to the final result among likely voters. In those battleground state polls, white Democrats were 6 percent likelier to respond than white Republicans, compared with a 23 percent gap in the fall 2020 polling of those same states.
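For the curious, a figure like "5 percent likelier to respond" is essentially a ratio of response rates. Here is a minimal sketch of that arithmetic; the call counts are invented for illustration, are not Times/Siena data, and the real pipeline involves far more than this.

```python
# Minimal sketch: relative response rate by party from a call log.
# These counts are invented for illustration; they are not Times/Siena data.
dialed = {"white_dem": 20_000, "white_rep": 20_000}
completed = {"white_dem": 420, "white_rep": 400}

response_rate = {group: completed[group] / dialed[group] for group in dialed}
relative = response_rate["white_dem"] / response_rate["white_rep"] - 1

# With these made-up inputs, white Democrats were 5% likelier to respond.
print(f"White Democrats were {relative:.0%} likelier to respond")
```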
This is a good sign. Maybe — just maybe — our poll last week was closer to the mark than polls have been in recent cycles.
But what if it’s wrong anyway?
Still, it’s worth imagining what might happen if the polls are off again by the same margin as in 2020.
Republicans, not Democrats, would have led our poll last week by two points among registered voters. It would certainly be a different race, but the story might not read very differently. We’d probably still characterize the contest as fairly competitive at the outset of a general campaign. That’s a little different from 2020, when a four-point error made the difference between a Biden landslide and a fairly competitive race.
If you think about it, there are a lot of cases in which a 2020-like polling error can be quite tolerable. Does it make a huge difference whether 46 percent or 50 percent of Americans think the economy is good or excellent rather than poor or bad? What about whether Mr. Trump is at 54 percent or 58 percent in an early test of the 2024 primary? In contrast, polls of close partisan elections can be extremely sensitive to error: whether Mr. Biden has 46 percent or 50 percent of the vote could be the difference between a decisive eight-point victory and a clear victory for Mr. Trump, given the recent skew of the Electoral College.
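To put some toy numbers on that sensitivity, here is a small sketch that applies a uniform, 2020-size error to two kinds of results: a close horse race and a lopsided question. The specific vote shares are illustrative assumptions, not figures from our polling.

```python
def adjusted_margin(dem: float, rep: float, error_on_margin: float) -> float:
    """Dem-minus-Rep margin after a uniform polling error of `error_on_margin`
    points against the Democrats, a crude stand-in for a 2020-style miss."""
    return (dem - rep) - error_on_margin

# A close horse race: a 2-point Democratic lead (shares are illustrative)
# becomes a 2-point deficit, so the apparent winner flips.
print(adjusted_margin(dem=46, rep=44, error_on_margin=4))   # -2

# A lopsided question: a 16-point gap shrinks to 12, and the story
# barely changes.
print(adjusted_margin(dem=58, rep=42, error_on_margin=4))   # 12
```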
And it’s also worth noting that the polls contain valuable information, even when they miss on the horse race. In 2016, for instance, the pre-election polls showed Mr. Trump’s huge gains among white voters without a college degree.
And last cycle, they showed Mr. Trump’s gains among Hispanic voters. These trends, uncovered by polls, continue to matter. If you missed it over the weekend, my colleagues analyzed the results of our poll focusing on Hispanic voters in an article based on the full cross-tabs.