Should polling change or stay the same? It doesn’t seem a hard call.
It’s been nearly a decade since I first attended the annual conference of pollsters, known as AAPOR.
Back then, it was a very different place. It was dominated by traditional pollsters who knew change was inevitable but who appeared uncomfortable with the sacrifices required to accommodate new people, methods and ideas.
At the time, that gathering reminded me of the Republican Party, which was then grappling with how to respond to demographic change and Hispanic voters in the wake of Barack Obama’s re-election. There are obvious differences, but the AAPOR crowd’s talk of reaching out to new groups and ideas was animated by the same sense of threat the Republicans felt then: concern about long-term trends, the status anxiety stirred by newcomers, and the fear that accommodating new ideas would compromise traditional values.
But if Donald J. Trump showed that Republicans didn’t have to support immigration reform to win, he most certainly showed pollsters they would have to innovate. A decade and two historically poor polling cycles later, AAPOR is a very different place. The old guard is still around, but presentation after presentation employs methods that would have been scorned a decade ago. This year’s Innovators Award went to someone who referred to AAPOR as an association of “Buggy-Whip Manufacturers” back in 2014, the year I first attended.
The innovative turn in the polling community is very real, including in public political polling. Today, virtually no pollsters are using the methods they did a decade ago. The ABC/Post poll is perhaps the only major exception, with its live-interview, random-digit-dialing telephone surveys. But to this point, innovation and change haven’t been enough to solve the problems facing the industry. They have been enough only to keep it afloat, even as it struggles to keep its head above water.
Heading into 2024, pollsters still don’t know if they can successfully reach Trump voters. They still struggle with rising costs. And they really did lose something they had a decade ago: the belief that a well-designed survey would yield a representative sample. Today, a well-designed survey isn’t enough: The most theoretically sound surveys tended to produce the worst results of 2020.
To this point, innovation in polling has occurred on two parallel tracks: one to find new ways of sampling voters in an era of low response rates; another to improve unrepresentative samples through statistical adjustment. If there’s an underlying theory of the Times/Siena poll, it’s to try to get the best of both worlds: high-quality sampling with sophisticated statistical adjustment. Surprisingly few public polls can make a similar case: there’s plenty of bad sampling paired with fancy statistical modeling, and some good sampling with only simple demographic adjustment, but not much of both.
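To make the statistical-adjustment track a bit more concrete, here is a minimal, illustrative sketch of raking (iterative proportional fitting), one common way pollsters re-weight an unrepresentative sample toward known population margins. The column names, targets and helper function are hypothetical, and this is not the Times/Siena weighting procedure; it is only meant to show the basic mechanism.

```python
# Illustrative sketch only: bare-bones raking (iterative proportional fitting),
# one common form of the statistical adjustment described above. The column
# names, targets and function here are hypothetical, not any pollster's spec.
import pandas as pd


def rake(df, targets, weight_col="weight", n_iter=50, tol=1e-6):
    """Adjust survey weights so weighted sample margins match population targets.

    targets maps a column name to {category: population share}.
    """
    df = df.copy()
    if weight_col not in df.columns:
        df[weight_col] = 1.0
    for _ in range(n_iter):
        max_shift = 0.0
        for col, shares in targets.items():
            # Weighted margins for this variable before adjusting it.
            margins = df.groupby(col)[weight_col].sum()
            margins = margins / margins.sum()
            for category, target_share in shares.items():
                current = margins.get(category, 0.0)
                if current <= 0:
                    continue  # category absent from the sample; nothing to adjust
                factor = target_share / current
                df.loc[df[col] == category, weight_col] *= factor
                max_shift = max(max_shift, abs(factor - 1.0))
        if max_shift < tol:
            break
    # Rescale so weights average to 1.
    df[weight_col] *= len(df) / df[weight_col].sum()
    return df


# Hypothetical usage: re-weight a toy sample toward education and age benchmarks.
sample = pd.DataFrame({
    "educ": ["college", "college", "no_college", "college", "no_college"],
    "age": ["18-49", "50+", "50+", "18-49", "18-49"],
})
targets = {
    "educ": {"college": 0.35, "no_college": 0.65},
    "age": {"18-49": 0.50, "50+": 0.50},
}
weighted = rake(sample, targets)
print(weighted.groupby("educ")["weight"].sum() / weighted["weight"].sum())
```

In practice, pollsters typically layer much more onto this basic idea, such as additional variables and safeguards against extreme weights, but the underlying logic is the same: tilt the weights until the sample’s margins line up with outside benchmarks.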
Because of the pandemic, it has been a few years since I’ve attended AAPOR in person. But from my vantage point, this was the first time that these two parallel tracks looked as if they were getting closer to merging. They haven’t merged yet: the old guard remains reluctant to make some of the sacrifices needed to improve its methods of adjustment, and costs will prevent the upstarts from matching the old guard’s expensive sampling. But they’re getting closer, as researchers on each track realize their own efforts are insufficient and dabble a bit more in the other side’s ideas.
One early theme, for instance, was a recognition that even the most sophisticated survey designs still struggle to reach less engaged voters, who tend to be less educated and perhaps likelier to back Mr. Trump as well. This problem may never be perfectly solved, and addressing it benefits from the best of both traditional and nontraditional methods.
For our part, I promise we’ll have more on our Wisconsin experiment — which had parallel telephone and high-incentive mail surveys ahead of the 2022 election — in the weeks ahead. In the last week or so, we received the final data necessary to begin this analysis, and I’ve started to dig in over the last two days. It’s early in the analysis, but there’s some interesting stuff. Stay tuned.