Guest commentary

The College Board's new math, and how it confounded the newspapers

By Patrick Mattimore
Posted Jan. 31, 2005

Daniel Okrent, the public editor for the New York Times, wrote in his Sunday column that, "Numbers without context ... are about as intelligible as vowels without consonants." This past week Bay Area newspapers served up an indigestible bowl of alphabet soup.


Last Wednesday, the San Francisco Chronicle and the San Jose Mercury News reported that the College Board had released statistics on the percentages of seniors who had taken and passed (gotten a 3 or above on) an Advanced Placement exam sometime during high school ("California ranking on AP exams on the rise"; "Schools get college-prep honors"). The Contra Costa Times ran an Associated Press story that appeared Tuesday ("Advanced Placement results up in every state; Florida gains most"). The newspapers played it up big, declaring California fifth in overall state rankings (more on that in a minute). The normally staid New York Times proclaimed that New York was No. 1 ("New York Tops Advanced Placement Tests").


The New York Times wrote that "New York State led the nation in the percentage of high school students who passed the Advanced Placement tests at the level of mastery last year ..." The reporter appropriately defined mastery as achieving a score of 3 or above on the test. In the fifth paragraph she wrote that, "In New York, 21 percent of students reached at least level 3." Later in the article we learned that 75 percent of New York high schools offered AP courses, compared with 55 percent of schools on average in other states.

Nowhere in the Times article or in any of the Bay Area newspapers do the reporters mention that the College Board changed its method of reporting this year, so that the 21 percent figure in New York, for example, is based not upon the number of test takers but on the total number of students in the state. Because New York has such a high comparative percentage of schools offering AP classes, it is naturally going to have many more students taking the AP exams. In fact, New York has the second-highest percentage of students (32.4%) who took an AP exam in high school, a fact left out of the Times article.

Instead of sending the statistics to states as a percentage of test takers who scored a 3 on at least one AP exam during high school (which nationally is around 63%), the College Board changed its reporting method and sent passing (mastery) statistics to states as a percentage of the entire state's population of public high school seniors (whether they had taken the test or not). States are therefore rewarded for having large numbers of students take the AP exam, and students who flunk (get 1's or 2's) are counted no differently from students who never took the test.

What went unreported in the College Board's new math reports is the fact that state leaders such as New York also had the highest percentages of students flunking (receiving below-mastery scores of 1 or 2) the AP exam. For example, New York had the third-highest percentage (behind only the District of Columbia and Florida) of students who flunked. (The College Board report can be found on the College Board's website.)

California tied with Maryland for the seventh-greatest percentage of students flunking the test.

This new reporting method is being driven by the College Board. It gets about $85 per AP test, runs countless workshops for AP teachers, produces materials, etc. It is in the company's interest to get as many students as possible taking the AP exam, and as many high schools as possible offering AP classes. The Board's explanation is that it is promoting equity, since offering more and more AP classes benefits everyone, particularly underserved populations.

The biggest problem from my perspective, other than the fact that I don't think high school policy should be driven by the College Board, is that the newspapers so badly misplayed this story. The San Francisco Chronicle, for example, confused the statistics and reported the California percentage as a function of the number of test takers. This was particularly egregious since the reporter, Tanya Schevitz, also reported statistics for the San Francisco School District that were based upon the number of test takers. Ms. Schevitz should have seen that it was extremely unlikely that San Francisco would have a 69.3% pass rate while the state's was 18.7%. Ms. Schevitz told me that the information coming from the College Board had been confusing. I did verify with Lorna Ho, a special assistant to the superintendent of schools in San Francisco, that the 69.3% figure is correct.

The Associated Press report in the Contra Costa Times said that "in every state ... the percentage of public school students who passed at least one AP test was up in 2004, compared with the graduating class of 2000." The problem again is the change in the College Board's method of reporting, which the newspaper didn't mention. Under the old method the percentage was determined thus:

test passers / test takers = %

Under the new reporting method the formula is:

test passers / total # of seniors in the state = %

The new method of reporting doesn't allow for a profitable comparison with 2000 statistics because many more students in every state are now taking the test.
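To make the arithmetic concrete, consider a sketch with invented numbers (the figures below are hypothetical, not drawn from the College Board report): when participation doubles between 2000 and 2004, the new method can show a big gain even while the old per-test-taker pass rate declines.

```python
# Hypothetical figures to illustrate the two reporting methods.
# None of these numbers come from the College Board report.

def old_method(passers, takers):
    """Pass rate among students who actually took an AP exam."""
    return passers / takers

def new_method(passers, seniors):
    """Pass rate measured against every public high school senior."""
    return passers / seniors

seniors = 100_000            # all public high school seniors in a state

# Year 2000: fewer test takers, stronger per-taker pass rate
takers_2000, passers_2000 = 20_000, 13_000
# Year 2004: participation doubles, per-taker pass rate slips
takers_2004, passers_2004 = 40_000, 22_000

print(f"2000 old method: {old_method(passers_2000, takers_2000):.1%}")  # 65.0%
print(f"2004 old method: {old_method(passers_2004, takers_2004):.1%}")  # 55.0%
print(f"2000 new method: {new_method(passers_2000, seniors):.1%}")      # 13.0%
print(f"2004 new method: {new_method(passers_2004, seniors):.1%}")      # 22.0%
```

Under the old method this hypothetical state's pass rate fell from 65% to 55%; under the new method it appears to jump from 13% to 22%, simply because the denominator no longer depends on how many students sat for the test.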

In fact, if one compares the percentage of students taking and passing the test in 2004 to the numbers taking and passing the test in 2000, both California and New York, as well as many other states, saw a decline in the 2004 pass rates.

In championing California’s fifth-place finish, the Mercury News reported that "Across the country, 13.2 percent of AP test takers in 2004 demonstrated mastery of at least one AP course," and in a box highlight wrote that 18.7% of California AP students scored 3 or better on the exam.

Both the Chronicle and the Mercury News made some of the corrections I suggested to them on their corrections pages on Thursday. The Chronicle, for example, had misstated that students needed to take an AP class in order to take the AP test, which they do not. On the whole, however, it was unhelpful to run the corrections without the context of the news stories themselves, and without explaining how the College Board's reporting changes were relevant to interpreting the new data.

The Oakland Tribune reported accurately that the 18.7% figure was based upon the entire class of 2004, but again without supplying a meaningful context to interpret that data ("State students hold their own on AP testing"). In addition, the newspaper incorrectly reported that students who pass the test earn college credit. Students may earn college credit based upon their AP exams, but that is at the discretion of the colleges and of departments within the colleges, and many will only accept grades of 4 or 5.

The consequences of the College Board's change in reporting methods are not benign. Anita Shepherd, the Social Studies Department chair at Patuxent High School, a high-achieving school in Maryland, e-mailed me that the school is using "bribery" in an attempt to get 100% of students in classes to take the test. If all the students in a class attempt the test, then the class gets to go on a cruise on a boat around the harbor. Ms. Shepherd told me that it creates unbearable pressure on one girl, for example, whose mother, a Ph.D. working for the U.S. Department of Education, doesn't want the girl to take the test because of the stress and doesn't want her daughter to test out of any introductory college courses anyway.

There is, as well, the question of quality. The Advanced Placement classes and tests were originally intended to benefit the extraordinary high school student capable of college-level work. As the tests have been expanded to include more and more subjects, the philosophy of the test has changed as well. What was once viewed as a useful adjunct to a high school student's application has become a necessity for students applying to many colleges. And not just one AP class. At many universities, students are expected to have challenged themselves with many AP classes.

Whether the high-pressure challenges to the highest-achieving high school students are appropriate is debatable. Few are disputing, though, what happens when the college-level courses and tests are offered to less qualified students -- the rates of students struggling in the classes and failing the tests go up.

But my debate here is not ultimately with the College Board. It is with the manner in which the newspapers uncritically presented data that, at least in some cases, they did not understand. How then might the newspapers have better reported the figures as news?

First, the newspapers should have explained that the College Board had changed its method of reporting the AP statistics. The College Board clearly spelled out in its report that it was changing the way in which it measured success on the AP exam.

Second, the newspapers should have published not only the percentages of students passing the test as a proportion of the entire class of 2004, but also the percentages of students who passed the test based upon the numbers of students who took the test. Although that information was not made readily available to the newspapers because of the Board's new method of reporting, it was an easy calculation based upon information the Board supplied in Appendix C to the report. In the future, newspapers should insist that the College Board supply them with both sets of statistics.

Third, reporters need to use examples to clarify the math. An obvious idea here would be to explain that the statistical apples from 2000 should not be compared to the new oranges in 2004. In reporting this story, journalists would not need to have brought readers into the political debate that is propelling the College Board's change -- but as a boxed supplement to the story, they certainly could.

Finally, we had a maxim when I taught high school: If you don't get it, don't try to teach it. The same applies to statistical reporting. If writers cannot clarify something they don't understand, they shouldn't put it in the story.