Polls’ precision is exaggerated

Speaking for the people

In a democracy, perhaps the most powerful claim anyone can make is to say what the public, or the voters, think about an issue. A great deal rides on poll results, even those released months before an election.

Surveys have become central to the political process, according to Steven Chaffee, the Rupe chair in the social effects of mass communication at the University of California, Santa Barbara. “A lot of people want that news [of poll results] to make their best bet on endorsements, or to tie their campaign to a particular proposition, [to decide] whom you will support with a campaign contribution. The powerful want to know which way the wind is blowing. If they think a particular candidate can’t win, they won’t put money on them.”

When a claim about public opinion is presented as a percentage of public support -- say 55 percent -- and then qualified with a margin of error of “plus or minus 3.2 percentage points,” it conveys great precision.
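That precision rests on a textbook sampling formula. As a rough sketch (assuming the customary 95 percent confidence level, a simple random sample and the worst-case 50-50 split; actual pollsters' methods vary), a margin of plus or minus 3.2 points implies a sample of roughly 940 respondents:

```python
from math import sqrt

def sampling_margin_of_error(n, p=0.5, z=1.96):
    """Margin of error from sampling alone, at 95 percent confidence.

    Assumes a simple random sample of size n and the worst-case split
    p = 0.5. It says nothing about question wording, interviewer effects,
    nonresponse or any of the other error sources discussed below.
    """
    return z * sqrt(p * (1 - p) / n)

# A reported margin of "plus or minus 3.2 percentage points"
# corresponds to roughly 940 completed interviews:
print(round(100 * sampling_margin_of_error(940), 1))  # about 3.2
```

The key point is that this figure measures only the chance error that comes from interviewing a sample rather than everyone; the other sources of distortion described in this article are not in it.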

But in a previous analysis, Grade The News found that actual voting results failed to fall within the margins of error of polls conducted only a week or 10 days before the election in about half the races measured. In some of those contests, last-minute opinion shifts may have pushed voting percentages beyond pollsters’ estimates and error margins. But when so many races defy pollsters, it’s likely their margins of error are seriously understated.

Polling is art as well as science

Even at its best, polling provides an imperfect lens through which to determine public opinion. As Philip Meyer, the Knight Professor of Journalism at the University of North Carolina and the author of Precision Journalism, points out, polling may be based in science, “but it’s still part art.”

Consider how even a small change in question wording can affect results. Field Poll interviewers used short and long forms of a question about Proposition 21 (Juvenile Crime) just before the March primary.

When interviewers said “Proposition 21 provides changes for juvenile felonies -- increasing penalties, changing trial procedures and required reporting,” likely voters said they opposed it by a 47-30 margin.

But when told “the Juvenile Crime Initiative increases punishment for gang-related felonies, home invasion robbery, carjacking, witness intimidation and drive-by shootings, and creates a crime of gang-recruitment activities,” likely voters favored it by a 55-32 margin.

A simple expansion of the wording made a 40-point difference, a swing from 17 points against to 23 points in favor. The first version was taken from ballots used in seven counties; the second from ballots used in the rest of the state.

Many sources of error

In addition to wording, the order of questions can make a difference. So can tone of voice, even the interviewer’s accent. In theory, every interview should be conducted exactly the same way, says Mark DiCamillo, director of the Field Poll. But that’s not possible when several thousand calls must be made.

Prof. Meyer says the whole context of polling can lead to distorted views of what the public really wants. “Poll choices rarely force a choice among alternative uses for [public] money,” he explains. “So people tend to look like they’re for more things than they will actually support.”

In addition, some political decisions aren’t made by individuals, says Prof. Chaffee, but by couples going through the ballot together just before the election. If a wife is well-informed on one race, her husband may bow to her expertise and conform his opinion to hers. On another race, it may go the other way. Because polls target individuals’ opinions, they may not reflect the actual vote.

There are also problems in identifying the people you want to poll. When the San Jose Mercury News recently surveyed Latinos, it relied on whether residents’ last names sounded Hispanic. Nadine Selden, the Mercury’s senior research analyst, readily concedes that approach left out Latino marriage partners who took non-Hispanic last names. It also misses people of Latino heritage whose last names do not sound Hispanic.

These errors are possible even if the poll has met other requirements, such as giving every person who will cast a vote an equal chance of being included in the survey, catching everyone at home, speaking their language, and gaining their cooperation. As we saw in the first analysis, those conditions are never fully met, and sometimes not even approached.

Most error sources are ignored

Only when all of those conditions are met does the margin of error account for all the variation between true public opinion and a poll’s measure of it. Yet of the scores of articles reviewed in this analysis, only one mentioned any source of error other than sampling error.

“The only thing they [news organizations] are really concerned with is the margin of error,” says Del Ali, director of Research 2000, the Maryland firm that conducts polls for Channel 2 and the San Francisco Examiner.

“I think it’s a journalistic problem,” Prof. Chaffee says. “I do feel journalists by and large feel at a loss when it comes to numbers. They don’t know much about … random sampling. They really don’t understand margins of error. The journalists have tended not to take charge of their stories.”

Error margin balloons for small groups in sample

Survey stories often report interesting differences between the opinions of ethnic, religious, age and income groups. For example, the San Francisco Chronicle reported on Feb. 29 that “Slightly more Latinos than whites favor the proposition [22, on limiting marriage to heterosexual couples]: 55 percent to 53 percent.”

The original Field Poll press release upon which the story was based carried that comparison in its tables. But only 13 percent of the 1,048 likely voters Field reported interviewing were Latinos. If 136 Latinos were interviewed, the margin of error for a random sample that small is approximately plus or minus 9 percentage points.

That means the true percentage of Latinos in favor of Prop. 22 lay between 46 and 64. In other words, the newspaper had no idea whether the true percentage of Latinos favoring Prop. 22 was larger or smaller than the percentage of whites backing it. The Chronicle reporter who wrote the article did not respond to e-mailed questions.

Reporters do not always take into account that the margin of error for the entire sample does not apply to groups within the sample. Sampling error rises as the size of the sample falls, yet rarely was more than one margin of error reported -- that for the whole sample.
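The effect is easy to check with the same textbook formula. Again a rough sketch, assuming a simple random sample at 95 percent confidence and no design effect for weighting, which is why the subgroup figure comes out slightly under the roughly 9 points cited above for the Field Poll’s Latino respondents:

```python
from math import sqrt

def sampling_margin_of_error(n, p=0.5, z=1.96):
    """95 percent margin of error for a simple random sample of size n."""
    return z * sqrt(p * (1 - p) / n)

full_sample = 1048                             # likely voters in the Field Poll
latino_subgroup = round(0.13 * full_sample)    # about 136 respondents

print(round(100 * sampling_margin_of_error(full_sample), 1))      # ~3.0 points
print(round(100 * sampling_margin_of_error(latino_subgroup), 1))  # ~8.4 points
```

Quoting the whole-sample figure for a subgroup roughly one-eighth its size understates the uncertainty by nearly a factor of three.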

“A lot of reporters don’t understand the subgroups have larger margins of error,” says Research 2000’s Ali. Cheryl Katz, director of Baldassare Associates, which conducts polling for the Chronicle, agrees: “Reporters often don’t adjust margins of error for sub-samples.”

John Wildermuth, who has reported on polls for the Chronicle, says the paper limits details of polling to avoid turning off readers. “Field generally provides a subgroup breakdown,” he explains, “but we typically don't use it in the stories. Again, we're not hiding anything, but polling internals and mechanics aren't what we're concentrating on in those stories. It's the results, not how we got them.”

Taking others’ polls at face value

In June, a front-page article in the Mercury reported on a survey conducted by a local trade association. It concerned the willingness of South Bay voters to extend a sales tax to fund transportation measures, including an extension of BART from Fremont to San Jose estimated to cost about $4 billion.

Readers learned that 62 percent of “likely voters” said they would be willing to extend the sales tax for two decades; 86 percent said they supported spending that money to extend BART to San Jose.

The article told us that the poll reached 800 likely voters and that it was conducted for the Silicon Valley Manufacturing Group. But no response rate, margin of error, or even the number of undecideds (assuming there were some) was reported. The reporter said the omission was an oversight, but noted that past Manufacturing Group polls had been accurate.

When the Mercury could be sure professional standards were met in its own poll a few weeks later, however, it included the wording of questions, pie charts showing percentages for, against and undecided, and a margin of error. And when the Mercury joined with several other newspapers to poll Latino voters in 12 states, and even invited several academic researchers to oversee the process, it provided readers no fewer than 10 separate margins of error, one for each sub-group (June 30).

Subsequent references to the Mercury poll on the editorial page dropped the margin of error entirely. A single precise number was substituted for the range of support the poll indicated.

Responding to the inconsistency, the Mercury’s Selden said: “We’re just in the process of formulating policies now [for poll reporting].”

-- John McManus
