Thursday, January 17, 2008

Polls on Tolls

I wasn't sure whether to call this entry "Taking a Poll on Tolls" or "Taking a Toll on Polls".

A Monmouth University/Gannett New Jersey Poll on the toll road plan proposed by Governor Corzine was released today.

Our findings paint a slightly different picture from the poll released by the Bergen Record on Monday (see story and poll results). This follows the presidential primary polls, where our results differed as well. Herb Jackson did a pretty good job summing up the differences in methodology on his election blog, so I’ll just focus on the toll road polls here.

The Record poll, which is conducted by Research 2000 in Maryland, found 42% in favor of the plan, 52% opposed, and 6% with no opinion. The Monmouth/Gannett poll found only 15% in favor, 56% opposed, and 29% with no opinion. So, what’s up with that?

One of the key differences in assessing public reaction to the toll road plan is the way the questions were worded in the two polls. The Record’s poll described the plan as “reining in New Jersey’s public debt load by imposing a series of 50 percent toll hikes…Proceeds of bonds backed by the future revenue increases would be used to retire debt and fund new road improvements.” Our poll described it as a “plan to raise tolls about 50 percent every four years over the next 14 years in order to reduce state debt and fund transportation projects.”

The Record wording started off by emphasizing “reining in debt” while ours started with the cost issue. We also spelled out a time frame for the toll hikes. The Record’s poll asked respondents whether they “strongly favor, favor, oppose, or strongly oppose.” Our poll asked “do you favor or oppose this plan, or do you have no opinion?”

This may all seem a bit esoteric to the casual poll watcher, but in the short time frame after the toll plan was released (although it had been talked about for months), question wording and response option choices can matter. Each poll started interviewing the day after Corzine’s State of the State address, but the Record completed its interviewing in two days, while the Monmouth/Gannett poll interviewed for five days.

Differences in question wording, as in this case, are valid choices pollsters make to tap what the public is really thinking. It is incumbent upon us not to “create” opinion by phrasing questions in ways that are far removed from the experiences and discussions of the typical resident.

And we have to be fair in the way we word the questions. Most “favor or oppose” questions are just that. The pollster will only record a “no opinion” response if the respondent insists. On this issue, I heard from a number of plan supporters who believed many members of the public wouldn’t care about this plan since they don’t drive the toll roads. So, it made sense to explicitly include the “no opinion” option in the question we used.

But that alone doesn’t explain the differences between the two polls’ results. There are some interesting demographic differences in the poll breakdowns. In the Record poll, Democrats support the plan by a 62%-32% margin while Republicans reject it by 81%-17%. Independents reject it by a slimmer 55%-39% margin. In our poll, residents of all partisan stripes rejected the plan, including Democrats (48%-19%), Republicans (68%-9%) and independents (57%-16%).

There appears to be some serious difference between the Democrats interviewed by the Record and those interviewed by us. The Record poll’s sample consisted of 600 New Jerseyans who reported they generally vote in state elections and are likely to vote this November. Our poll consisted of 804 New Jersey adult residents. [Side note: As a matter of standard procedure, the Monmouth/Gannett poll prefers full population samples when polling about issues that affect all residents, whether they vote or not.]

Does this mean that Governor Corzine does better among people who will go out to vote when (and if) he runs for re-election? Well, not necessarily. When we drilled down our sample to the most likely group of voters, we found that opinion of the plan stood at 16% favor, 58% oppose, and 26% no opinion – nearly identical to our results for the general population.
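
To make that “drill-down” concrete, here is a minimal sketch of the kind of re-tabulation involved. The respondent records and the likely-voter flag below are invented for illustration; this is not our actual data or tabulation code.

```python
# Minimal sketch of re-tabulating a survey question within a subgroup.
# The records here are made up for illustration.

from collections import Counter

sample = [
    {"opinion": "oppose", "likely_voter": True},
    {"opinion": "favor", "likely_voter": False},
    {"opinion": "no opinion", "likely_voter": True},
    {"opinion": "oppose", "likely_voter": True},
    {"opinion": "oppose", "likely_voter": False},
    {"opinion": "favor", "likely_voter": True},
]

def tabulate(records):
    """Return each response option's share of the group, in percent."""
    counts = Counter(r["opinion"] for r in records)
    total = sum(counts.values())
    return {k: round(100 * v / total) for k, v in counts.items()}

print(tabulate(sample))                                    # full sample
print(tabulate([r for r in sample if r["likely_voter"]]))  # likely voters only
```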

Aside from the type of sample used by each poll (likely voter vs. adult population), there are some key differences in how the surveys are weighted. We use weighting techniques to make sure our surveys are representative of the population by region of the state, gender, age, education, and race.
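
For readers curious about the mechanics, here is a minimal sketch of “raking” (iterative proportional fitting), one common way pollsters weight a sample to known population figures. The respondents and population targets below are made up for illustration; I’m not describing our exact procedure here.

```python
# Minimal sketch of raking (iterative proportional fitting): adjust each
# respondent's weight, one trait at a time, until the weighted sample
# margins match known population targets. Data here are hypothetical.

from collections import defaultdict

respondents = [
    {"gender": "F", "age": "18-34"},
    {"gender": "F", "age": "35-54"},
    {"gender": "M", "age": "35-54"},
    {"gender": "M", "age": "55+"},
    {"gender": "F", "age": "55+"},
    {"gender": "M", "age": "18-34"},
]

# Known population distributions (e.g., from the Census), as proportions.
targets = {
    "gender": {"F": 0.52, "M": 0.48},
    "age": {"18-34": 0.30, "35-54": 0.40, "55+": 0.30},
}

weights = [1.0] * len(respondents)

for _ in range(50):  # iterate until the margins converge
    for trait, dist in targets.items():
        # Current weighted total in each category of this trait.
        totals = defaultdict(float)
        for r, w in zip(respondents, weights):
            totals[r[trait]] += w
        grand = sum(totals.values())
        # Rescale so this trait's weighted margin matches the target.
        for i, r in enumerate(respondents):
            weights[i] *= dist[r[trait]] * grand / totals[r[trait]]

print([round(w, 3) for w in weights])
```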

One thing we don’t use in our weighting is party identification. Party ID is an attitude that is subject to change, unlike hard demographic data (unless of course you’re planning on a sex change operation). Most media pollsters ask something along the lines of: “In politics today, do you consider yourself a Democrat, Republican, or independent?”

A major difference between the two polls is the party breakdowns. Our full population sample in this poll identified itself as 37% Dem, 22% Rep – we are a pretty blue state – and 41% independent. The Bergen Record poll’s party self-identification was 28% Dem, 14% Rep, and 58% independent.

While New Jersey’s electorate is pretty fickle, it’s not that independent. Interestingly, the Bergen Record poll’s party numbers roughly correspond to the party registration figures on the state’s official election rolls. However, as anyone who has run a campaign in New Jersey knows, a good number of those “unaffiliated” voters consistently vote either Dem or Rep in general elections. You unaffiliateds who are party-line voters know who you are! They are only unaffiliated because they haven’t bothered to vote in one of New Jersey’s typically non-competitive primaries (which makes turnout projections for this year’s presidential primary that much more interesting).

The problem is that if you weight the party preference question (“what do you consider yourself today?”) to the party split in the voter registration books for New Jersey – and I’m not saying this is what Research 2000 did – you’re mixing apples and oranges.

I have some more thoughts (and data) on weighting poll results by party ID. But it’s been a long week, so I’ll leave that for another post.

Drive safely.

Monday, January 14, 2008

What Happened in New Hampshire?

So what went on with those Democratic pre-election primary polls in New Hampshire? My take is that we don’t yet know what happened. But the fact that it involved all the polls (including, by all reports, both the Clinton and Obama campaigns’ internal polls), and that those same pollsters tabbed the Republican outcome correctly, points to something occurring on the ground on Tuesday.

Maybe the polling methodology was universally off-base, or perhaps pollsters simply stopped polling too soon to catch an amazing Clinton surge in the final day (both are plausible given the turnout and the vagaries of the New Hampshire primary electorate; see this report from the networks’ chief exit pollster).

The post-mortems have begun (for example, here from ABC News and Gallup) and at least one pollster appears to be revising history for his New Hampshire tracking poll. I’m not sure how someone can claim that his data showed Clinton behind by only 2 points on the last day of polling while his rolling average actually increased Obama’s lead from 10 to 13 points. (And I’m pretty sure my New Jersey high school had a decent math program.)
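
To see why those two claims are hard to square, consider the arithmetic of a rolling average. The sketch below assumes a three-day tracking window purely for illustration; whatever the actual window, the logic is the same.

```python
# Back-of-the-envelope check on a rolling-average claim, assuming a
# three-day tracking window (the window length is an assumption).
# If a 3-day average moves from Obama +10 to Obama +13, the day entering
# the window must exceed the day dropping out by 3 * (13 - 10) = 9 points.

window = 3
old_avg, new_avg = 10.0, 13.0   # Obama's lead in the published averages

# new_avg - old_avg = (newest_day - dropped_day) / window
required_gain = window * (new_avg - old_avg)
print(required_gain)  # 9.0

# If the newest day really showed Clinton behind by only 2 (Obama +2),
# the dropped day would have had to be 2 - 9 = -7, i.e. Clinton +7,
# which contradicts a series that showed Obama ahead all along.
claimed_final_day = 2.0
implied_dropped_day = claimed_final_day - required_gain
print(implied_dropped_day)  # -7.0
```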

A number of observers have focused on the potential race factor (specifically, white respondents telling a pollster they would vote for a black candidate but casting their ballots otherwise). That may be partially the case, but at present there is little supporting evidence. First of all, in past instances where a candidate’s race has been a factor in polling miscalls, it was in general elections, where white Democrats OVERstated their propensity to support their party’s black nominee. In New Hampshire, we are only considering Democrats or Democratic-leaning independents voting in a primary. Furthermore, Obama’s support was not overstated in the polls. Clinton’s was understated.

So for race to play a significant factor in the New Hampshire polls’ universal failure, you would have to accept the premise that white, less-educated, likely Democratic primary voters who chose NOT to speak to pollsters were significantly more racist than white, less-educated, likely Democratic primary voters who did answer the pre-election polls. And moreover, that this same group was significantly LESS sexist, because they overwhelmingly voted for Hillary Clinton.

Unlike some pundits who are currently hitting the airwaves and print media with the certainty of their speculations, I’ll withhold judgment on what really happened until we have time to sift through the empirical evidence and get some REAL NUMBERS.

Regardless, the New Hampshire experience is another opportunity to remind poll watchers that pre-election preference polls are just that – polls which measure voters’ preferences prior to an election. The fact that they are generally good predictors of the eventual outcome is in part a testament to the fact that change is usually gradual. Or at least slow enough to be caught the day or two before an election … fickle electorates like New Jersey’s notwithstanding.

Wednesday, January 2, 2008

Welcome

Welcome to the first installment of the “Real Numbers” blog.

Numbers are knowledge. Numbers are power. They influence everything from media coverage to policy decisions. Unfortunately, not all numbers are created equal. This venture will cast a critical eye on the use of numbers in the public domain – to sort out how “real” those numbers are. This blog will generally focus on New Jersey issues, and occasionally venture into the national arena.

The objective is to foster a keener awareness of how numbers are used and misused in the pursuit of political gain or notoriety. As creators of some of those numbers, we at the Monmouth University Polling Institute understand the impact “statistics” can have. Polling numbers will certainly be fodder for many of the entries on this blog, but it will also examine other “real numbers” that gain public currency (e.g. funding formulas, health care stats, voting turnout, etc.).

In some cases, the blog will turn its attention to numbers built on faulty methodology. But in many more cases, readers can expect a discussion of how justifiable variations in methodology can be used to reach different conclusions.

For example, what does the New Jersey public consider to be a “significant” property tax cut? In a poll we conducted in July 2006, most homeowners selected a dollar amount that averaged about 15% of their property tax bill. But we also found that their perception of “significant” depended on how the question was asked. If we started the questioning at $250 and worked up to $2,000, most respondents settled for a lower amount. But if we started the suggested level at $2,000, many respondents would not accept the lower amounts as being significant.
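
Here is a toy sketch of those two question “ladders.” The dollar steps and the anchoring rule, in which a respondent’s sense of what counts as significant drifts toward the first amount offered, are my illustrative assumptions, not the actual questionnaire.

```python
# Toy model of ascending vs. descending dollar "ladders." The steps and
# the anchoring rule are illustrative assumptions, not the real survey.

def first_endorsed(ladder, private_threshold, anchor_pull=0.4):
    """Walk the ladder and return the first amount the respondent
    endorses as a 'significant' cut. The effective threshold is pulled
    partway toward the first amount heard (the anchor)."""
    anchor = ladder[0]
    effective = private_threshold + anchor_pull * (anchor - private_threshold)
    for amount in ladder:
        if amount >= effective:
            return amount
    return None  # nothing on the ladder felt significant

ascending = [250, 500, 1000, 1500, 2000]
descending = list(reversed(ascending))

# The same respondent, who privately thinks ~$1,000 is significant:
print(first_endorsed(ascending, 1000))   # 1000 -- settles low going up
print(first_endorsed(descending, 1000))  # 2000 -- anchored high going down
```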

In any event, after the final “caps and credit” deal was signed last year providing a property tax credit of 20% for most homeowners, our February 2007 poll found that only 1-in-10 who had heard of the plan believed it would deliver significant, long-term relief. That opinion seemed to have more to do with the plan’s lack of systemic reform than with the dollar amount saved in the first year. Indeed, a Quinnipiac Poll released around the same time found that a large majority of New Jersey voters approved of the intention to lower property taxes by 20%, but disapproved of how the governor and legislature had handled the issue.

In the end, the main objective of this blog is to increase accountability for the dissemination of numbers in the public domain, including any numbers that appear on this site. So when you see a post that is off base, let me know by posting a comment.

In the meantime, for those of you, especially journalists, who would like to learn a little more about interpreting polls for the non-pollster, spend a little time with this free News University online training, developed by the American Association for Public Opinion Research.