How Reliable are Political Polls?

 

 

Pollsters are reluctant to call even their final polls before an election a “prediction” of voting results. Most of these surveys concluded their interviews 7-10 days before election day. Pollsters quite properly point out that a disturbing revelation about a candidate during the last week of a campaign could render even the most careful survey moot. They also argue that a last-minute gust of advertising sometimes blows the numbers around. And trends from previous polls need to be taken into account, to show whether one side appears to be rising or falling.

 

But as the venerable Mervin Field Poll asserts in a self-assessment of its accuracy, “no polling organization can stay in operation long if there are frequent or wide variations between poll findings and election results.”

 

How the analysis was done

 

Grade the News conducted a Lexis-Nexis search through the courtesy of Santa Clara University’s Markkula Center for Applied Ethics. We looked at press reports in Bay Area newspapers contained in the Lexis-Nexis online library. We compared actual voting results with the final polls taken before the San Francisco city elections in November, the December run-off and the spring California primary.

 

We focused only on races the Bay Area press reported on in papers indexed by Lexis-Nexis. These races presumably were most important to the local public. Note that Lexis-Nexis does not catalogue the Contra Costa Times or other smaller newspapers, nor does it contain all San Jose Mercury News stories. Likewise, local TV news transcripts were not available.

 

To examine the predictive power of these polls, we relied primarily on a method proposed by Warren Mitofsky, one of the nation’s most respected and veteran pollsters. Director of elections and surveys for CBS News for 27 years, Mitofsky now runs his own international polling organization.

 

We chose his method because of his reputation both among commercial and academic pollsters, because the method is the same for propositions and candidates, and because he was not involved in any of the surveys we analyzed.

 

Mitofsky suggested we assess “accuracy” by comparing the margin between the two leading candidates, or the two sides of a proposition, in the final poll with the actual vote margin. If the actual margin falls within two times the sample’s margin of error of the margin predicted in the poll, then the poll is accurate.

 

For example, the KTVU (Channel 2)/San Francisco Examiner Poll showed the Yes side ahead of the No side on Prop 22 (limiting legal marriage to heterosexual couples) by a margin of 55 to 38, with 7% undecided. The poll reached 634 likely California voters, so the margin of error was calculated at +/- 4 percentage points.

 

Mitofsky argues that we can’t know what the undecided voters will do, so we can only work with those who express an opinion. The margin, or gap, between the Yes and No sides is 17 points (55 minus 38). Given the margin of error, however, the gap really could be as wide as 59 Yes (55+4) to 34 No (38-4) -- a range of 25 points. On election day, Prop 22 won 61.4 to 38.6. The actual margin, 22.8 percentage points, lay within the range predicted. So the poll predicted this race accurately.
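
For readers who want to apply the same test to other polls, here is a minimal sketch in Python of the check described above. The function name and the figures are ours, taken from the Prop 22 example; it illustrates the method, and is not the pollsters’ own software.

```python
def within_mitofsky_margin(poll_leader_pct, poll_trailer_pct, actual_margin, margin_of_error):
    """Mitofsky-style accuracy check, as described above (a sketch).

    The poll's predicted gap between the two leading sides counts as accurate
    if the actual vote margin falls within two times the sampling margin of
    error of that gap (each side could be off by the margin of error).
    """
    predicted_gap = poll_leader_pct - poll_trailer_pct   # e.g. 55 - 38 = 17
    tolerance = 2 * margin_of_error                      # e.g. 2 * 4 = 8
    return abs(actual_margin - predicted_gap) <= tolerance

# Prop 22: poll 55-38 with a +/- 4 margin of error; actual result 61.4 to 38.6
print(within_mitofsky_margin(55, 38, 61.4 - 38.6, 4))    # True -> counted as accurate
```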

 

The results

 

In the California open primary, held March 7, the side leading in the final poll before the election won about 8 times in 10. However, the actual vote fell within the margins of error only about 4 times in 10, even though all but one of the races the press focused on were landslides. (Prop 26 was decided by a 2.4-point margin; the next closest was decided by 23 points.) The more public opinion falls on one side of a contest, the more likely even a haphazard survey is to predict the winner.

 

 

March 2000 California Primary

| Pollster | Race | Leader wins? | Lead in poll | Lead in election | Margin of error | Within margin? |
| -------- | ---- | ------------ | ------------ | ---------------- | --------------- | -------------- |
| Field | Pres.-R | Yes | Bush by 20 | 25.8 | +/- 5 | Yes |
| R-2000 | Pres.-R | Yes | Bush by 18 | 25.8 | +/- 6 | Yes |
| Field | Pres.-D | Yes | Gore by 38 | 63.2 | +/- 4.5 | No |
| R-2000 | Pres.-D | Yes | Gore by 38 | 63.2 | +/- 5.6 | No |
| Field | Senate | Yes | Feinstein by 33 | 28.3 | +/- 3.2 | Yes |
| R-2000 | Senate | Yes | Feinstein by 38 | 28.3 | +/- 4 | No |
| Field | Prop 12 | Yes | Y ahead by 41 | 26.2 | +/- 3.2 | No |
| Field | Prop 21 | Yes | Y ahead by 23 | 24 | +/- 3.2 | Yes |
| Field | Prop 22 | Yes | Y ahead by 13 | 22.8 | +/- 3.2 | No |
| R-2000 | Prop 22 | Yes | Y ahead by 17 | 22.8 | +/- 4 | Yes |
| Field | Prop 23 | Yes | Y behind by 32 | 26.8 | +/- 3.2 | Yes |
| Field | Prop 25 | Yes | Y behind by 10 | Fails by 29 | +/- 3.2 | No |
| R-2000 | Prop 25 | No | Y ahead by 5 | Fails by 29 | +/- 4 | No |
| Field | Prop 26 | No | Y ahead by 10 | Fails by 2.4 | +/- 3.2 | No |
| R-2000 | Prop 26 | No | Y ahead by 2 | Fails by 2.4 | +/- 4 | Yes |
| Field | Prop 28 | Yes | Behind by 27 | Fails by 42.4 | +/- 3.2 | No |
| R-2000 | Prop 28 | Yes | Behind by 20 | Fails by 42.4 | +/- 4 | No |
| Total |  | 82% |  |  |  | 41% |

Field Poll based on 1,048 likely California voters; survey completed on Feb. 27 and sponsored by major state newspapers, including the Chronicle and Mercury News.

 

Research 2000 (R-2000) Poll based on 634 likely California voters; survey completed on Feb. 27 and sponsored by KTVU and the Examiner.

 

The San Francisco elections, held Nov. 2 with a run-off Dec. 14, were closer than the spring primary contests. With fewer one-sided voter preferences, the final polls local media relied on predicted the winner in 5 of 9 races. The actual vote fell within the margins of error in only 4 of the 9 races.

 

November and December San Francisco City Elections

| Pollster | Race | Leader wins? | Lead in poll | Lead in election | Margin of error | Within margin? |
| -------- | ---- | ------------ | ------------ | ---------------- | --------------- | -------------- |
| R-2000 | Mayor 1 | Yes | Brown by 23 | 13.5 | +/- 4 | No |
| Baldassare | Mayor 1 | Yes | Brown by 21 | 13.5 | +/- 4 | No |
| R-2000 | DA 1 | No | Fazio by 10 | Hallinan by 0.7 | +/- 4 | No |
| Baldassare | DA 1 | No | Fazio by 9 | Hallinan by 0.7 | +/- 4 | No |
| R-2000 | Mayor 2 | Yes | Brown by 14 | 19.3 | +/- 4 | Yes |
| Baldassare | Mayor 2 | Yes | Brown by 20 | 19.3 | +/- 4 | Yes |
| R-2000 | DA 2 | No | Fazio by 7 | Hallinan by 0.9 | +/- 4 | Yes |
| Baldassare | DA 2 | No | Fazio by 18 | Hallinan by 0.9 | +/- 4 | No |
| Baldassare | Prop A | Yes | Y ahead by 41 | Wins by 46.5 | +/- 4 | Yes |
| Total |  | 56% |  |  |  | 44% |

 

Research 2000 polls based on 630 likely San Francisco voters; surveys completed on Oct. 27 and Dec. 8, 1999 and sponsored by KTVU and the Examiner.

 

Baldassare Associates polls based on 600 likely San Francisco voters; surveys completed on Oct. 23 and Dec. 4 and sponsored by the Chronicle.

Margin of error conveys false sense of precision

 

In any given race, last-minute shifts of public opinion could explain a result outside the margin of error of a scientifically conducted final poll. For aggregated results such as these, however, it’s possible but unlikely that fickle voters were the cause in every case where results exceeded the margins of error. Regardless of the method used to estimate accuracy*, voting percentages fell within the margin of error only about half the time. In fact, even doubling the margin of error still wouldn’t have produced an accurate prediction in about a third of all the races analyzed.

 

“The industry has an interest in giving a sense of over-precision and underestimating imprecision,” says academic pollster Steven Chaffee, the Rupe chair in the social effects of mass communication at the University of California, Santa Barbara.

 

Not the only source of error

 

In each poll, the margin of error was the only indicator of the poll’s accuracy reported by newspapers (but not necessarily by the pollster). The margin of error, however, may not mean what most people think. It’s not the total deviation one might expect from the true public opinion. It’s only a measure of sampling error. And sampling error may be the least of a pollster’s worries. 

 

Sampling error measures only the chance that if you repeated the poll again and again with the same size sample -- and did everything perfectly -- you’d get a result within a known number of percentage points of the true opinion in the population at large. To be technical, only in 5 surveys in 100 would you expect a poll result outside the margin of error. The margin of error is taken from a table; the bigger the random sample, the smaller the margin of error.
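
For readers curious where those figures come from, here is a minimal sketch of the standard 95%-confidence approximation, assuming a simple random sample and the conservative 50/50 split; the pollsters’ own published tables may differ slightly.

```python
import math

def sampling_margin_of_error(sample_size, z_95=1.96):
    """Approximate 95% sampling margin of error, in percentage points,
    for a simple random sample, using the conservative p = 0.5 assumption.
    (A textbook approximation; actual polling tables may vary slightly.)"""
    return z_95 * math.sqrt(0.25 / sample_size) * 100

print(round(sampling_margin_of_error(634), 1))    # ~3.9, reported as +/- 4
print(round(sampling_margin_of_error(1048), 1))   # ~3.0; Field reported +/- 3.2
```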

 

Many assumptions violated

 

In fact, pollsters concede, the margin of error sits atop a stack of assumptions that are never met in the real world. Academic pollsters say the biggest assumption that’s violated is the response rate. The margins of error in polls assume that 100% of the individuals randomly picked for the sample cooperate fully.

 

That means that everyone in the population of interest -- those who will cast a ballot in these surveys -- has an equal chance of being included. For that to be true every voter, at the least, would have to: 1) speak the same languages as the interviewer; 2) be at home and not using the phone when the pollster calls; 3) agree to do the interview; 4) give an honest, considered  opinion (rather than a politically correct one).

 

Pollsters would also have to be able to identify actual voters. They can only estimate how to separate those who really will cast a ballot from those who won’t vote or merely say they intend to.

 

Through experience, pollsters have learned that if the response rate is 60% or better, surveys are usually relatively accurate. But these days pollsters say they are lucky to get a 40% response rate, even when they call respondents back six times.

 

Field and Baldassare Associates use a variation of a response rate called a “cooperation rate.” It’s generally defined as the number of completed interviews divided by everyone in the appropriate group whom the interviewers reached, whether or not they completed the questionnaire.
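
A minimal sketch of that arithmetic, assuming the rough definition above; the counts below are invented for illustration and are not from any of the polls analyzed here.

```python
def cooperation_rate(completed, reached_but_not_completed):
    """Cooperation rate, roughly as defined above (a sketch): completed
    interviews divided by everyone in the eligible group whom interviewers
    actually reached, whether or not they finished the questionnaire."""
    reached = completed + reached_but_not_completed
    return completed / reached

# Hypothetical example: 600 completed interviews, 400 eligible people reached
# who refused or broke off -> a 60% cooperation rate
print(cooperation_rate(600, 400))  # 0.6
```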

 

Del Ali, director of Research 2000, a Maryland firm which conducted the Examiner/KTVU polls, discounts the importance of the response rate: “As long as you get the [agreed upon] number of registered voters in your poll, it’s still going to be valid.”

 

Former CBS pollster Mitofsky, however, questions whether a pollster should even publish a margin of error when response rates are very low. “All the statistical theory assumes there’s no error from response rate. It assumes everybody responds.”

 

Philip Meyer, the Knight professor at the University of North Carolina and author of the book Precision Journalism, notes that a “cooperation rate” still leaves out those who aren’t home to answer the phone, those who screen out pollsters’ calls with answering machines, those whose phones are busy every time the pollsters call, and those with caller ID who don’t respond to unfamiliar numbers. Depending on how the cooperation rate is figured, it might also exclude those who don’t speak the same language.

 

Polling is getting more difficult

 

“Polling is a lot harder than it used to be,” Meyer says. “Polls are having a harder time with response rates.” The seven academic experts and commercial pollsters we interviewed listed a number of reasons why conducting accurate polls is becoming more difficult:

 

·        Language. Interviewing only in English eliminates from consideration those who don’t speak it or are uncomfortable with it. The Bay Area is home to many who speak only Spanish, Vietnamese or a dialect of Chinese. Mark Baldassare, president of Baldassare Associates and a senior fellow at the Public Policy Institute of California, says “you have to weigh whether the size of the group is large enough to justify the additional cost.” Fortunately, he adds, polls of likely voters are more likely to encounter English speakers than surveys of the wider population.

 

Mark DiCamillo, director of the Field Poll, says Field’s statewide polls are now always conducted in both English and Spanish. Baldassare said his statewide polls also include Spanish. The San Francisco poll, however, was in English only, according to his wife, Cheryl Katz.

 

·        More calls per completed interview. Most pollsters say they used to stop after four additional calls failed to elicit a response. Now many have moved to six “call-backs”. “People’s lives are too busy,” Baldassare explains. “They’re on the Internet, or shopping, or in traffic, or watching television.”

 

People are also working more. When both husband and wife are working long hours, they are home less to receive calls. The same problem often exists for single-parent households.

 

“We don’t treat people calling us on the phone with the same respect as we used to,” says Professor Meyer. Pollsters asking for political opinions are competing with “people raising funds or selling things under the guise of polling.” The result: people are less willing to be surveyed. Of all types of surveys, however, people cooperate most when the subject is politics, he adds. 

 

Pollsters use random-digit dialing to be sure to include people not listed in phone books or other records. But they are encountering many more empty numbers as phone companies grab big blocks of numbers they hope to assign to people seeking additional lines for cell phones and Internet connections. “You might need 8,000 telephone numbers to get 1,000 usable responses,” says the Field Poll’s DiCamillo.

 

The additional effort needed to complete polls has raised the price, DiCamillo adds. Conducting a 15-20 minute statewide interview with 1,000 Californians costs about $50,000, up 25 to 40% from 10 years earlier.

 

·        Political apathy. “People often don’t have opinions,” says Professor Chaffee. Commercial polls, which often hire companies called “field services” to actually make the calls, may not take the time to get a respondent thinking enough about an issue to give more than a top-of-the-head response.

 

“I feel antsy when we’re making the phone calls and catching people on the fly,” he explains. “I try to get them thinking and conversing. But commercial polls are more quick-hit. It would be too expensive [to do otherwise].”

 

Opinions vary on the impact of these trends. DiCamillo says the ’90s were the most accurate decade in the Field Poll’s 52-year history. Mitofsky analyzed poll accuracy in an academic article published in 1996 and doesn’t see it getting worse. But Meyer says, “The whole polling industry is in a state of anxiety. This [November] could be the election that it messes up.”

 

* Alternate calculation: Repercentage the poll’s estimate of the eventual winner’s support based just on those expressing an opinion and compare it with the winner’s actual election support. If it falls within the margin of error, the poll accurately predicted the outcome.
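
A minimal sketch of that alternate calculation, again using the Prop 22 numbers from the example above; the function name is ours, not the researchers’ own code.

```python
def repercentaged_check(winner_poll_pct, loser_poll_pct, winner_actual_pct, margin_of_error):
    """Alternate accuracy check (a sketch): re-percentage the eventual
    winner's poll support over only those expressing an opinion, then see
    whether the winner's actual vote share falls within the poll's margin
    of error of that repercentaged figure."""
    decided = winner_poll_pct + loser_poll_pct            # undecideds dropped
    repercentaged = 100 * winner_poll_pct / decided       # e.g. 55 / 93 = 59.1%
    return abs(winner_actual_pct - repercentaged) <= margin_of_error

# Prop 22: poll 55 Yes, 38 No (+/- 4); Yes actually won with 61.4% of the vote
print(repercentaged_check(55, 38, 61.4, 4))  # True -> within 4 points of 59.1%
```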

 

-- John McManus

 
