How we did the study


            With the help of Greg McWilliams, an undergraduate at St. Mary’s College of California, I taped the four most popular Bay Area newscasts and collected editions of the two most popular newspapers for 13 days prior to the June 2 California Primary.


            We chose the hour-long evening newscast at each station: Channel 2 and Channel 5 at 10 p.m., Channels 4 and 7 at 6 p.m. We used the Peninsula edition of the Mercury News and the Contra Costa edition of the Chronicle, because that’s where our coders live. Technical glitches and unannounced changes in TV schedules (largely due to basketball playoffs) prevented us from taping 100% of the newscasts. We were able to tape 95% of Channel 2’s newscasts, 81% of Channel 4’s, 79% of Channel 5’s and 86% of Channel 7’s.


            These sample-size differences would affect our analysis only if a station produced a very different newscast on the few days we missed than on the majority we coded. We have no evidence that this was so. McWilliams and I shared coding duties. A random sample we both coded showed 80% agreement on the qualities of news in this analysis, an acceptable level for social science publication.
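
            For readers who want the arithmetic behind that figure, here is a minimal sketch of simple percent agreement between two coders; the sampled stories and judgments below are invented for illustration.

```python
# Sketch: simple percent agreement between two coders on the same
# randomly sampled stories (judgments here are hypothetical).
coder_a = ["expert", "no_expert", "expert", "expert", "no_expert"]
coder_b = ["expert", "no_expert", "no_expert", "expert", "no_expert"]

matches = sum(a == b for a, b in zip(coder_a, coder_b))
agreement = matches / len(coder_a)
print(f"Percent agreement: {agreement:.0%}")  # 80% for this sample
```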


            To compensate for the difference in volume between the two media, we coded the entire premier evening newscast (which on weekends was sometimes only 30 minutes) but only the front page and local/metro front of the newspapers. We did not analyze extremely brief print (under 5 sq. inches) or broadcast (under 25 seconds) stories, focusing only on those the journalists thought worthy of greater display.
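
            As a rough picture of that cutoff, here is a short sketch; the story records and field names are invented for illustration.

```python
# Sketch of the brevity cutoff, using hypothetical story records and
# invented field names: very short items were excluded from coding.
stories = [
    {"medium": "print", "size_sq_in": 4.0},   # too brief: excluded
    {"medium": "print", "size_sq_in": 12.5},
    {"medium": "tv", "length_sec": 20},       # too brief: excluded
    {"medium": "tv", "length_sec": 95},
]

def substantial(story):
    """Keep only stories the journalists gave meaningful display."""
    if story["medium"] == "print":
        return story["size_sq_in"] >= 5
    return story["length_sec"] >= 25

coded = [s for s in stories if substantial(s)]  # 2 of 4 survive
```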


            We created four indices: 1) How much time or space was devoted to state and local political campaigns or political news? We added up the time or space given to each qualifying story and divided the total by the available news time or space (what was left after subtracting ads and teases). There was no measure of absolute space or time, just the proportion editors decided to devote to informing the electorate. Thus, television was not at a disadvantage because of its smaller “news hole.”
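
            To make that calculation concrete, here is a brief sketch with made-up minutes; only the arithmetic, not the numbers, reflects the method described above.

```python
# Sketch of index 1 with hypothetical numbers: the share of one
# newscast's news hole (time left after ads and teases) devoted
# to state and local political coverage.
total_broadcast_minutes = 60.0
ads_and_teases_minutes = 18.0
political_story_minutes = [2.5, 1.75, 3.0]  # each qualifying story

news_hole = total_broadcast_minutes - ads_and_teases_minutes
proportion_political = sum(political_story_minutes) / news_hole
print(f"{proportion_political:.1%} of the news hole")  # 17.3% here
```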


            2) We determined whether a political story contained at least one independent expert source to help viewers make sense of claims and counterclaims made by candidates and advocates of ballot measures. For example, a professor or think-tank scholar who had written a book or conducted a study might be quoted about the validity of one side’s charges or another’s. Given the contentious nature of politics, we reasoned that including such an independent source would indicate careful reporting. For this and the two remaining indices, we combined closely related stories (sidebars) into a single story. Thus, breaking a story into parts, as television frequently does on major stories, did not reduce scores.


            3) We also counted, among stories containing some political controversy, the percentage in which at least one other side was represented or offered the opportunity to comment. If a whole story constituted an interview with one candidate and the others were interviewed on other days, the reporting was still considered fair.


            4) Finally, we measured the number of specific sources per political story. “Specific” means the person was identified; he or she did not have to appear on camera or even be quoted directly. Only human sources (not documents) were counted. This was the only measure where the greater volume of newspapers gave them an advantage.
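
            Pulling indices 2 through 4 together, here is one last hypothetical sketch: each story record carries the flags and counts described above, and the indices fall out as two percentages and an average.

```python
# Sketch of indices 2-4 over hypothetical story records (sidebars
# are assumed to have already been merged into single stories).
stories = [
    {"expert": True,  "controversy": True,  "other_side": True,  "named_sources": 4},
    {"expert": False, "controversy": True,  "other_side": False, "named_sources": 2},
    {"expert": False, "controversy": False, "other_side": None,  "named_sources": 1},
    {"expert": True,  "controversy": True,  "other_side": True,  "named_sources": 5},
]

# Index 2: share of political stories citing an independent expert.
pct_expert = sum(s["expert"] for s in stories) / len(stories)

# Index 3: among stories with some controversy, the share in which
# at least one other side was represented or invited to comment.
contested = [s for s in stories if s["controversy"]]
pct_balanced = sum(s["other_side"] for s in contested) / len(contested)

# Index 4: average number of specific (named) human sources per story.
avg_sources = sum(s["named_sources"] for s in stories) / len(stories)

print(f"expert: {pct_expert:.0%}  balanced: {pct_balanced:.0%}  "
      f"sources/story: {avg_sources:.1f}")
```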