Nutrition Report Card

Who Provides the Bay Area’s Most Nutritious News?


You are what you eat, they say. 

Salty, fatty, nutritionally empty junk food may make your mouth water, but it’s bad for your health. Same for news. Emotion-drenched, visually exciting, but informationally barren news may turn your head. Junk journalism, however, degrades the civic health of the entire community. 


What we don’t know can harm us. We make our most serious mistakes when we don’t know, but think we do. 

All that the sharpest critics of democracy have alleged is true if there is no steady supply of trustworthy and relevant news.... No one can manage anything on pap. Neither can a people.

--Walter Lippmann



What does junk journalism look like? It was the biggest story on the front page of the Chronicle recently: “Disney’s Virtual California,” opening on a former parking lot at the Southern California amusement park. “Theme park makes actual travel superfluous,” the headline claimed. A day later the Mercury News followed suit with its own full-color photo and front-page-dominating story of the made-for-media event.


Channel 4 broadcast its weather report from “Virtual California.” Channel 7, the Bay Area’s Disney-owned station, broadcast a promotional tape on its evening newscast and promised “live” coverage of its parent company’s new attraction in its morning newscast. (To its credit, KGO’s story on the park did acknowledge the conflict of interest. But far from evading the conflict between journalism and advertising, the morning coverage from the park embraced it.)


More routinely, junk journalism is an ambulance-window view of the Bay Area: the context-stripped, cookie-cutter stories of shootings, stabbings and sex crimes. Trimmed in yellow police tape, they throb with angry or tearful soundbites.


It’s also the twisted steel of a traffic accident and the ‘copter video of a constipated freeway, the live shot from a scorched building and “pity ‘bites” for the displaced family. It’s the 4-minute feature on “high tech hunks.” And front page stories about whatever editors think has “buzz,” regardless of its importance.



Junk journalism is a method of reporting


Were the tragic episodes treated systematically, we’d learn a lot from them. As it is, we can’t see the forest for the trees. How about a longer story, or a series, on the causes of certain kinds of crime or fires or wrecks, another on solutions, another on the costs to all of us (in fear, in racial and social mistrust, in rising medical bills, in property insurance, in taxes to pay police, lawyers and judges, not to mention prisons)? How have other cities or nations coped with these core problems?


Crimes are serious events, not junk. It’s the reporting that trivializes them, marketing tragedy for its audience-building fear or pity value while ignoring the conditions that create it.


Not all front-page or top-of-the-newscast stories need be serious. Content that’s more interesting than important draws people into the news tent. But when the sensational largely displaces the substantive, the tent risks becoming a circus big top.


Grade the News has now analyzed more than 2,200 stories produced by the Bay Area’s largest newspapers and most popular newscasts, a social-science-based survey of almost a year of news. We’ve fashioned a measuring stick using the definition of news contained in most of journalism’s codes of ethics: news is information about current issues and events that helps people make sense of their environment. (We’d welcome your suggestions for improvements.)


The most important index


Newsworthiness is the most important of the seven indexes that make up Grade the News’ overall quality index. That’s because if the topic and treatment of a story are geared toward diversion, it doesn’t matter how many sources speak, whether all sides are represented, or even whether it’s accurate.


The good news is that three major local news providers put on more wholesome newscasts over the last nine months. Channel 2 showed the greatest improvement, but the Mercury News and Chronicle also were better.


Here are the grades:


News Provider                                              Grade                                      Trend

Washington Post
San Jose Mercury News
San Francisco Chronicle
Contra Costa Times
Channel 2 (KTVU)
Channel 4 (KRON)
Channel 7 (KGO)
Channel 5 (KPIX)

[Grade and trend values appeared as graphics in the original and are not reproduced here.]

The Bay Area’s best still trail strong newspapers like the Washington Post. The Post covers stories on new amusement parks, but not on the front or local news pages. The Post also rarely confuses the front page with the sports page. Post editors appear to hold the now-quaint notion that the front page ought to reflect what people need to know more than whatever might cheaply draw the widest attention. Despite their more market-driven selection philosophies, the Mercury and Chronicle, as well as the Contra Costa Times and Channel 2, earn Grade the News’ hearty endorsement. Channels 4, 5 and 7 may be good entertainment, but they don’t take news seriously.


Here’s how we graded the news.


Each story was rated on three characteristics:

·        Topic, the subject of the story;

·        Episodic vs. thematic reporting: was the focus a single event, or a broader issue or theme;

·        Impact, how many people were likely to be affected in a non-trivial and lasting way by what was reported.


Stories were coded by two experienced journalists, one a former newspaper reporter, the other a former Bay Area television news director. A sub-sample of stories was coded by both, with overall agreement of 84 percent on those questions requiring some judgment. (Scott’s Pi, a chance-corrected measure of agreement, ranged from 1.0 to .56.)




Stories on core topics each received two points. Core topics include just about everything except celebrity news, minor fires and accidents, sports, promotions (such as Channel 5 “Survivor” stories and Channel 7 “Millionaire” stories), and stories primarily focused on human interest (e.g., Casey the sewer-pipe-loving dog). Non-core stories each received one point.


Stories were also coded on whether they were covered episodically (as unconnected events) or thematically (as connected events or issues). Stories in which half or more of the content was thematic or about an issue received an additional 2 points for the explanatory value of treating the story within a broader context. Simple event stories received no additional points. For example, a story about gun safety, patterns of violence, or one city’s attempt to reduce shootings would score 2 more points, while a story limited to a particular shooting would gain none.


Finally, stories likely to affect a significant number of people more than momentarily (6 months or more) received an additional 2 points, while stories likely to affect relatively few gained no additional points.


A story about a fire, for example, has a direct and lasting effect on the people who used or owned the building and may have some impact on those who live within a few blocks, but not on many others. That’s a few thousand people in a region of 6 million. Likewise, a shooting in San Jose is unlikely to make a lot of difference in the lives of people in Walnut Creek or Santa Rosa. But a story about state-mandated high school graduation tests affects a huge number from Gilroy to Guerneville. Stories affecting 10,000 or more people earned the 2 points.


All told, stories could earn from 1 to 6 points. In the analysis, each story was weighted by its size; it wouldn’t be fair to equate a 3-minute story with a 30-second “brief.”
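The point system described above can be sketched as a small function. The point values follow the article’s description, but the code is an illustration, not Grade the News’ actual scoring software.

```python
def score_story(core_topic: bool, thematic: bool, broad_impact: bool) -> int:
    """Newsworthiness points as described above: 1 to 6 per story."""
    points = 2 if core_topic else 1   # core topics earn 2 points, non-core 1
    points += 2 if thematic else 0    # thematic/issue treatment adds 2
    points += 2 if broad_impact else 0  # lasting effect on 10,000+ people adds 2
    return points

# A thematic gun-safety story on a core topic with broad impact:
score_story(True, True, True)    # 6 points
# An episodic story about a single shooting (core topic, narrow impact):
score_story(True, False, False)  # 2 points
```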


A system that equalizes print and television


Grades were based on the percentage of total newscast time or newspaper space spent on high-quality journalism. Because absolute volume didn’t matter, we could compare print and broadcast equally. To keep the playing field even, we examined only top stories: those on the front page and local news front page of newspapers, and in the first 30 minutes (the front half) of premier evening newscasts. (Channel 5 runs only half-hour newscasts, so we analyzed the entire show.)


We used a gentler scoring system than in school, however, because it may not be realistic to expect journalists always to make what’s important interesting. Thus a station or newspaper could indulge in cotton-candy stories for almost a quarter of its news time or space and still earn an “A.” (Each story was multiplied by its newsworthiness score and added into a grand total for the station or paper. That amount was divided by a perfect score: the sum if all stories had rated a 6. Overall scores of 80 percent or higher received A’s; scores below 50 percent failed.)
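One way to read that grading rule: each story’s 1-to-6 score is weighted by its size, summed, and divided by the perfect total in which every story rates a 6. The sketch below uses the two cut-offs the article states (A at 80 percent and up, F below 50 percent); the B/C/D boundaries in between are illustrative assumptions, not Grade the News’ published thresholds.

```python
def grade(stories):
    """stories: list of (points, size) pairs, with points in 1..6 and
    size in seconds of airtime or column inches of newspaper space."""
    earned = sum(points * size for points, size in stories)
    perfect = sum(6 * size for _, size in stories)  # every story a 6
    pct = 100 * earned / perfect
    if pct >= 80:
        letter = "A"   # stated cut-off
    elif pct >= 70:
        letter = "B"   # assumed boundary
    elif pct >= 60:
        letter = "C"   # assumed boundary
    elif pct >= 50:
        letter = "D"   # assumed boundary
    else:
        letter = "F"   # stated cut-off
    return pct, letter

# A 3-minute thematic story (6 points) plus a 30-second brief (2 points):
pct, letter = grade([(6, 180), (2, 30)])  # about 90 percent, an "A"
```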


Stories were randomly selected over 11 months using a system guaranteeing equal numbers of each day of the week. That balances traditionally lean news days with fuller ones. In almost all cases, stations and newspapers were compared on the same news-gathering cycle, so all had access to the same set of events. In other words, all stations were sampled on the same evening and all papers on the following morning.
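The every-10-days systematic sample described later in this article does guarantee that balance: because 10 mod 7 = 3 is coprime to 7, any seven consecutive sample dates land on seven different days of the week. A quick check (the start date here is arbitrary, chosen only for illustration):

```python
from datetime import date, timedelta

start = date(2001, 1, 3)  # hypothetical random start for the sample
sample_dates = [start + timedelta(days=10 * i) for i in range(7)]
weekdays = {d.weekday() for d in sample_dates}

# Seven consecutive every-10-days samples cover all seven weekdays.
assert len(weekdays) == 7
```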




Even though two independent coders graded the stories approximately the same way, the scoring conventions (such as point values and the separation of core and non-core stories) are subjective. Just as a teacher makes judgments about what level of achievement rates an “A” and what’s “C” work, we have made judgments in measuring journalistic performance.


Further, we’ve based our yardstick on a consensus of journalism’s codes of ethics. Most of these derive from a “social responsibility” theory of journalism’s purpose: the idea that journalism receives special privileges from government (such as low postal rates, freedom from content regulation, exemptions from antitrust and child labor laws, and free access to the airwaves) that are unavailable to other businesses, in exchange for doing its best to build an informed citizenry.




As with much social science, there were occasional problems with data gathering. Equipment sometimes failed, stations would change broadcast schedules at the last minute (usually for sports playoff games), and paper deliveries were missed. Sometimes we just plain forgot to tape. For the most part, however, we used a systematic sample (every 10 days) with a random start. We believe this gave us a generally accurate picture of newsworthiness.


One of the categories, whether the story was episodic or thematic, reached only 78 percent agreement between coders. When chance agreements are statistically subtracted, reliability as measured by Scott’s Pi fell to .56. We will work on a clearer distinction in our coding manual. Ideally, the reliability sub-sample would also have been randomly selected; unfortunately, the timing of our funding did not permit this.
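Scott’s Pi corrects raw agreement for the agreement two coders would reach by chance: pi = (Po - Pe) / (1 - Pe), where Pe sums the squared pooled proportions of each category across both coders. With observed agreement Po = .78 and a Pe of .50 (roughly even pooled use of the two categories), that yields exactly the reported .56. The sketch below is a generic implementation of the statistic, not Grade the News’ own analysis code.

```python
def scotts_pi(coder_a, coder_b):
    """Chance-corrected agreement between two coders' parallel labels."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    pooled = coder_a + coder_b  # Scott's Pi pools both coders' label counts
    categories = set(pooled)
    expected = sum((pooled.count(c) / (2 * n)) ** 2 for c in categories)
    return (observed - expected) / (1 - expected)

# 100 stories: 78 agreements, 22 disagreements, categories evenly used
# overall -> observed .78, expected .50, Scott's Pi = .56
a = ["episodic"] * 39 + ["thematic"] * 39 + ["episodic"] * 22
b = ["episodic"] * 39 + ["thematic"] * 39 + ["thematic"] * 22
scotts_pi(a, b)  # 0.56
```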


We believe in transparency and welcome criticism as a means of becoming better. For further details about the grading system and its assumptions or the complete coding manual we used to evaluate stories, click here. To comment, click here.


--John McManus





Next up? Which topics got the most attention, and which the least?