Which is Really the Bay Area's Best?

Which is more valuable to you? Changes and challenges in our public schools, or seeing helicopter-camera video of a dump in southern Washington where a search is underway for a little girl’s body?

Even if the chopper recorded those first grisly images (the search was in vain), would they justify sending a crew from a newsroom already stretched to the limit, and ignoring other news of the day?

How much can you learn from a 30-second story about “consumer abuse” by a major insurer, State Farm, when all you see are two unnamed sources speaking in an unidentified forum in an unspecified place making charges against the company?

How about video of protesters scuffling with police, without explaining the issues for which they are willing to sacrifice their bodies and liberty?

Or stories of health “breakthroughs” that answer no questions about availability, cost, side-effects, or types of conditions that might be helped?

Or advertisements masquerading as news that tout services offered by the company that owns the station? Or a consistent failure to report the other side of controversial stories—particularly when the other side is a person accused of a street crime.

This kind of news decision-making came to light in the first comprehensive consumer rating of the Bay Area’s seven most popular news organizations. 

Grade the News randomly selected 10 days from October to January, making sure to include every day of the week. On each day GTN analyzed the top stories—those in the first 30 minutes of the premier evening newscast or on the front and local front pages of the newspaper. To provide a reference point, GTN also analyzed a smaller sample of the Washington Post, drawn from some of the same days. In all, 829 stories were measured six ways for quality.

To compute the overall grade, GTN averaged the grades in each of the indices. The grade for newsworthiness counted twice, however, because Grade the News’ advisors believe it is more fundamental than the others. A story about a local celebrity divorce, for example, might score high on fairness, context and local relevance, but still fail to perform the fundamental function of news—to help citizens make sense of the world around them. In our view, such a story wouldn’t deserve top billing. (But as you’ll see, a few such stories wouldn’t lower grades.)
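The weighting described above amounts to simple arithmetic. Here is a minimal Python sketch; the index names and the 4-point score scale are illustrative assumptions, and only the double weight on newsworthiness comes from the article.

```python
# Sketch of the overall-grade computation: average the per-index
# grades, counting newsworthiness twice. Scores below use a
# hypothetical 4-point scale (A = 4.0, B = 3.0, C = 2.0).

def overall_grade(index_grades: dict) -> float:
    """Average the grades, with newsworthiness counted twice."""
    total = sum(index_grades.values()) + index_grades["newsworthiness"]
    return total / (len(index_grades) + 1)

grades = {
    "newsworthiness": 2.0,   # C -- a celebrity-divorce story, say
    "fairness": 4.0,         # A
    "context": 3.0,          # B
    "local relevance": 4.0,  # A
}
print(overall_grade(grades))  # the weak newsworthiness score drags it to 3.0
```

Because newsworthiness is counted twice, a story that scores well everywhere else still pays a steep price for lacking substance.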

The local winners in this first report of an on-going analysis are the San Francisco Chronicle (and Sunday Examiner), followed closely by the San Jose Mercury News and Contra Costa Times, and Channel 2 in Oakland. They often chose broad issues over isolated events and reported in enough depth to illuminate complexity. The Washington Post was at the top, or off the scales, in all categories but local relevance—due to its emphasis on national and international news.

Despite its location in a region somewhat smaller, less educated and less wealthy than the Bay Area, the Post was much more likely than local media to take on big issues and report them with regional significance.

In the Bay Area, the winners in this analysis set their own news agenda more than a third of the time, rather than passively responding to press releases or sending a reporter to intercept ambulances and police cars. The winners often tackled subjects that affect nearly everyone, such as the records of political candidates, changes in social security, hidden toxic wastes, and the quality of schools and teachers.

They investigated welfare reform and patterns of housing availability and transportation headaches. They examined differences in effectiveness of popular drugs across race and gender. And frequently they reported the big picture. Those who watched and read enjoyed the opportunity to learn a great deal about what’s going on in the Bay Area, and where we’re headed.

Much less informative were Channels 4, 5 and 7. These three stations suffered from a selection priority skewed toward “9-1-1 stories”—isolated fires, accidents, recovered bodies, missing persons, and acts of mayhem learned from listening to emergency scanner radios. These stations also raced through so many stories in their newscasts that events were stripped both of context and more than a single point of view.

In their rush to cram pictures of “trees” on the screen, Channels 4, 5 and 7 often lost sight of “forests.” Protesters scuffled with cops, but we rarely learned more about the issues involved than their topic—“WTO” or “fur.”

One station announced an endorsement of a candidate for San Francisco mayor, but failed to mention the endorsing group or its size or influence. Another teased viewers with a story about a break in the rise of housing prices, then delivered a 15-second report on national trends with no connection to the local market.

The content was designed more to entertain consumers than inform active citizens. The deliberations and actions of government—state, county and city—were virtually ignored. So were the environment, education and social trends.

Despite consistent patterns within newsrooms, however, every Bay Area news organization produced some excellent journalism. Outside of its routine newscasts, Channel 4 broadcast two mayoral debates; Channel 7 aired one. Channel 5 conducted a revealing investigation of bad attorneys. And Channel 4’s coverage of threats by major banks to close their ATMs to non-customers, after San Francisco’s vote to ban certain ATM fees, was the only report—print or broadcast—to offer an expert analysis of the banks’ claim that the fees were necessary.

No shortage of talent exists within these newsrooms. But management’s commitment to journalism differs sharply.

Is it fair to compare newspapers with newscasts?

The two technologies differ in important ways. But the rules of journalism are the same for both.

We tried to level the field. For example, we did not measure the sheer volume of news—that would be unfair to TV. We graded how each newsroom used the time and space it had. We also only examined the top stories.

Television sometimes breaks stories into parts, with different reporters contributing from different locales. In the analysis, we combined related stories, adding their sources together so they scored higher. Even measures of context—number and kinds of sources—were capped; more than 5 didn’t count. And we excluded stories shorter than 20 seconds or 10 square inches. As a result, even though its stories contained many fewer words than newspaper articles, Channel 2 rated an A- in context.

Length didn’t matter in any of the remaining categories.  The measure of fairness, for example, simply required reporters to offer the most obvious “other side” a chance to comment in controversial stories. “No comment” or a single quote sufficed. Civic contribution meant only that government’s decision-making occasionally made the news.

Because the analysis was designed to rate local television news head-to-head with newspapers, it’s important to note two omissions in this research—volume and comprehensiveness. The Post, Mercury and Chronicle, in particular, contain vastly more local reporting, across more subjects, than even the best local newscast. If you want to know what happens in your local community, there is no substitute for the newspaper.

How tough was the grading?

All of the news departments rated here are parts of very large commercial enterprises. They earn all or most of their money from advertising—even newspapers. They must earn a reasonable profit to provide any public service.

So grading standards were designed with the understanding that news media are businesses, rather than pure public servants. Up to 20% of the top half of the newscast or front pages of the paper could be exclusively entertaining and still earn an “A” in this analysis. As much as 15% could lack even a single source and still rate an “A.” The remainder could average four named sources and receive a “B+”, or only three sources if one were an expert.

All grades are based on the percentage of news space or news time. A 30-second story contributed only half as much to the average as a one-minute story.
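That length-weighting can be sketched in a few lines of Python. The story lengths and scores below are hypothetical; only the rule that a story counts in proportion to its airtime or column inches comes from the article.

```python
# Sketch of length-weighted averaging: each story's contribution to a
# category grade is proportional to its length (seconds of airtime or
# square inches of newsprint). Scores use a hypothetical 4-point scale.

def weighted_average(stories):
    """Each story is a (length, score) pair; weight scores by length."""
    total_length = sum(length for length, _ in stories)
    return sum(length * score for length, score in stories) / total_length

stories = [
    (30, 2.0),  # a 30-second story scored C
    (60, 4.0),  # a one-minute story scored A; it counts twice as much
]
print(weighted_average(stories))
```

The longer story pulls the average toward its own score, so a newscast can’t buoy its grade with a flurry of brief, high-scoring items.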

Advertising space, teases, banter between anchors—none of these figured in the analysis. We graded only actual news space or time.

A random sample of news misses lots of reporting. Some excellent journalism wasn’t analyzed. And perhaps some awful work too. But the chance of a news organization just happening to do wonderful or terrible journalism on all 10 sample days, which included every day of the week, is small. Nevertheless, these grades should be seen as estimates of a news organization’s adherence to the basics of journalism, not a precise measure of a news organization’s quality.

By its nature a content analysis sticks to relatively obvious distinctions—the size of a story (in seconds or square inches), its topic, number of sources, its location, etc. That makes it more objective. But it ignores differences in the quality of what’s counted. In reality, a story with two well-chosen sources may explain more than another with five sources culled from a crowd.

Judgments about creativity, clarity of writing, and the power of the visuals are, of course, highly subjective. They do not figure in these grades.

Finally, because all of the stories were coded by one person, the study also may have unscientific levels of subjectivity. We hope to check the study’s reliability as the analysis progresses. The grading plan was offered to every Bay Area news organization for suggestions and reviewed by appropriate members of Grade the News’ Advisory Board.

                                                     -- John McManus
