The Skeptical Pollster site is run by David Moore, a former Gallup Poll editor. Moore asserts that Gallup’s current definition of “swing” voters grossly underestimates the level of indecision in the electorate. Or, more precisely, that the poll overstates the number of voters who are truly committed to a candidate in this year’s presidential contest.
Moore points to an experiment Gallup ran 12 years ago that directly asked voters whether they had made up their minds about whom to support. This question was asked before the typical horse race question (e.g. “If the election were held today, who would you vote for…”). Using this format in a September 1996 poll, Gallup found that 40% of voters said they were not settled on a candidate.
Moore uses this finding as evidence that the typical horse race question forces people to make a choice they haven’t yet considered making. I’m not so sure.
After February’s New Jersey primary, the Monmouth University Polling Institute re-contacted voters from our last two pre-election polls (conducted Jan. 9-13 and Jan. 30-Feb. 1) to see what these voters actually did on primary day. In those polls, the number of likely voters who told us they were undecided was 17% for Republicans and 21% for Democrats the week before the primary, and 23% for Republicans and 26% for Democrats the month before. However, about one-third of these undecided voters admitted to “leaning” toward a candidate when prompted. So the “pure” undecided number reported in these polls ranged between 11% and 17%.
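If you want to check that arithmetic, here is a quick sketch in Python. The poll figures come from the paragraph above; applying the one-third “lean” rate uniformly to every group is my simplifying assumption.

```python
# Undecided shares from the two pre-election polls (see above).
undecided = {
    "GOP, week before": 0.17,
    "Dem, week before": 0.21,
    "GOP, month before": 0.23,
    "Dem, month before": 0.26,
}
LEAN_RATE = 1 / 3  # rough share of undecideds who lean toward a candidate

# Removing the leaners leaves the "pure" undecided share for each group.
for group, share in undecided.items():
    pure = share * (1 - LEAN_RATE)
    print(f"{group}: {share:.0%} undecided -> ~{pure:.0%} pure undecided")
# Output runs from roughly 11% to 17%, matching the range reported above.
```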
On the other hand, our post-election interviews found that, among those who actually voted on February 5th, 23% said they made up their minds in the final three days and another 37% decided within the month prior to primary day. This seems to support Moore’s assertion that many, if not most, voters in pre-election polls “have not yet even decided whom to support” rather than being nominally decided voters who “could” change their minds.
But there’s more to this story. Let’s take a closer look at the nearly 1-in-4 New Jersey primary voters who told us they made up their minds in the final three days. Among those voters, 77% had actually expressed a candidate preference in our pre-election polls. And of those who had expressed a preference, only one-third changed their minds in the end. In other words, most of those voters who supposedly did not decide until the final days were able to predict their voting behavior a week or even a month prior to the primary. Moreover, among all voters who named a candidate preference in our pre-election polls, 5-in-6 stuck by that choice on primary day.
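To put that claim in numbers, here is a back-of-the-envelope check using the shares just quoted:

```python
# Shares quoted above for the "decided in the final three days" group.
named_preference = 0.77  # had already named a candidate in our polls
switched = 1 / 3         # of those, changed their minds by primary day

# Late deciders whose pre-election answer matched their actual vote.
matched = named_preference * (1 - switched)
print(f"~{matched:.0%} of late deciders had already named their eventual choice")
# ~51% -- a majority, despite saying they decided only at the end.
```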
Of course, many of the “likely” voters a poll measures end up sitting out the election – and this group could arguably be counted as part of the voter indecision phenomenon. So if we start with our total base of likely voters and add together vote switchers (12%), undecided voters who would not state a preference until election day (6%), and voters who stayed home (20%), we find that 38% of our pre-election voter sample was volatile. That’s pretty close to the 40% number cited by Moore.
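The tally is simple addition, but here it is spelled out:

```python
# The three "volatile" components from our re-contact study (see above).
switched    = 0.12  # named a candidate, then voted for someone else
no_pref     = 0.06  # would not state a preference until election day
stayed_home = 0.20  # likely voters who did not turn out

volatile = switched + no_pref + stayed_home
print(f"Volatile share of the likely voter sample: {volatile:.0%}")  # 38%
```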
Do pollsters err when they press initially undecided voters to name a preference? To answer that, you need to consider the purpose of pre-election polls. These polls, by design, give us a picture of the electorate in the aggregate at a specific point in time. They are not designed to track individual-level changes; that requires what survey researchers call a panel survey.
The evidence suggests that, for the most part, individual voter volatility is statistically random – in other words, these vacillations tend to cancel each other out. That’s what we found among the vote switchers in our primary polling. In addition, about half of the likely voters in our pre-election polls who did not turn out on primary day were pure undecided voters anyway. So their decision not to participate had little impact on the accuracy of our pre-election horse race numbers.
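A toy simulation makes the point. This is my illustration, not the Institute’s analysis: the 52-48 split, the 12% switch rate, and the assumption that switchers flip in either direction at the same rate are hypotheticals chosen to mimic the situation above.

```python
import random

random.seed(42)
n = 100_000
votes = ["A"] * 52_000 + ["B"] * 48_000  # hypothetical stated preferences

# Each voter switches to the other candidate with the same 12% probability,
# i.e. the switching is not systematically in one direction.
final = [("B" if v == "A" else "A") if random.random() < 0.12 else v
         for v in votes]

gross_churn = sum(v != f for v, f in zip(votes, final)) / n
net_change = abs(final.count("A") - votes.count("A")) / n
print(f"Gross individual-level churn: ~{gross_churn:.0%}")      # ~12%
print(f"Net change in candidate A's share: ~{net_change:.1%}")  # well under 1 point
```

The aggregate horse race barely moves even though roughly one voter in eight changed sides, which is the sense in which individual churn washes out of the topline numbers.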
Certainly many voters are not entirely committed to a candidate at any point in an election, and we should remain aware of that volatility. Pre-election polls may not fully tap the amount of individual-level churning that goes on within the electorate, but they do give us a decent picture of the aggregate effects of that churning. In the end, pollsters probably learn more by measuring voters’ leanings than by letting all undecided voters simply stay on the fence.
Track those aggregate changes over time (e.g. see Pollster.com), and you have a more complete story about how the candidates and issues are resonating with the electorate over the course of a campaign. It’s certainly a more interesting story than having poll after poll tell us that 4-in-10 voters are not completely certain of what they’ll do on election day. Heck, I’m not totally sure what I’ll be doing tomorrow.