Local TV News Project 2000

Time of Peril for TV News
Methodology

Coding

Market Selection

This year we followed the pattern of station selection that we set in 1999. In half the markets we analyzed newscasts in the highest rated time slots. In the other half we studied newscasts in other time slots. As in 1999, about half the markets were new to the study this year and half were follow-ups from previous years. New markets were randomly selected within each quartile so that we could be sure to represent a range of market sizes.

This year our sample of newscasts in the "highest rated time slot" included three markets that we have studied every year for the past three years - New York, Boston, and Minneapolis-St. Paul. The trends in these markets will allow us to see the correlation between quality scores and ratings over time. To balance out these relatively large markets we looked at newscasts in the "highest rated time slot" in the following smaller markets - Phoenix, Denver, Tucson, and Sioux Falls.

The time slots that we selected for special attention this year were the prime time one-hour newscasts (9:00 or 10:00 PM) and local early morning news (6:00 AM). An increasing number of stations are airing news at these hours. Prime news is usually aired on cable and independent stations. Our prime news markets were New York, Los Angeles, Atlanta, and Minneapolis-St. Paul. We expected that an hour-long newscast would offer a new benchmark for quality. Early morning news is the fastest growing segment of local news programming. In our station survey this year, all of the stations that told us they added news hours did so in the early morning. The markets chosen for our study of early morning news included Detroit, Birmingham and Portland (Maine). These markets provide a variation in size and region.

Last year we compared New York's highest rated 11:00 PM time slot with its evening broadcasts and found what news directors expected - a higher quality newscast at the earlier time slot. This year we compared New York's 11:00 PM news with one-hour prime news. In addition we compared one-hour prime news with evening news in two other follow-up markets - Los Angeles and Atlanta. These comparisons illustrate how much length and time slot influence news choices in these important markets.

Finally we added two more stations to the study this year - WBBM in Chicago and KTVU in Oakland. At WBBM the embattled news director, Carol Marin, is engaged in an effort to find a market niche for serious news. KTVU in Oakland, California, requested analysis to test its continuing efforts to upgrade news quality.

Taping, Screening, and Inclusion

Research associates in 13 of the 15 markets taped newscasts for the following 2000 Monday-Friday time periods: February 7 - February 11 (sweeps primary) and February 14 - February 18 (sweeps secondary); March 6 - March 10 (non-sweeps primary) and March 2 - March 3, March 15 - March 17 (non-sweeps secondary). The non-sweeps secondary taping periods reflected the Thursday - Friday of the week preceding the non-sweeps primary week and the Monday - Wednesday of the week following it. This was dictated by the disruption that NCAA Basketball Tournament coverage caused to local news on CBS affiliates.

For both monitoring periods, primary days were used, unless unavailable due to preemption or taping error. In those cases, broadcasts from the secondary taping period were substituted, making every effort to match the appropriate day of the week. (Note: for WNYW/New York taping error required the substitution of Monday, February 24 in the sweeps period. For KMGH/Denver, it was necessary to use one May sweeps date - Thursday, May 4 - because of taping errors in February.)

For one of the stations added after taping began (WBBM/Chicago), tapes were acquired directly from the station. It was not possible to duplicate the main sample's exact non-sweeps timeframe; thus, WBBM's non-sweeps week was based on broadcasts airing Monday, April 17 through Friday, April 21, 2000.

Precoding

Each broadcast was initially screened and precoded in its entirety by a single coder. The precoding process confirmed the date/timeslot of each broadcast and identified and timed individual stories. Per the instructions of the design team, recurring sports and weather spots were merely classified and timed; regular sports and weather segments were not part of any additional coding and are not reflected in any of the analysis or totals presented in this study.

Story Coding and Scoring

Broadcasts were coded in their entirety by a second coder, via multiple story viewings. Working with a standardized codebook and coding rules, coders began with inventory variables, capturing information about broadcast date, market, station, network affiliation, etc. The second part of the coding scheme consisted of recordable variables, including story length, actors, and topics. The final section of the coding scheme contained the rateable variables. These were the measurements identified by the design team as quality indicators. The range in maximum possible points reflects the hierarchical significance of each value, as determined by quantitative analysis of the design team's input. Each rateable variable was assigned both a code and a point score. Here are the variables and their maximum possible points per story:

Focus 10
Enterprise 8
Source Expertise 9
Balance Via # of Sources 5
Balance Via Viewpoints 5
Sensationalism 3
Presentation 2
Community Relevance 8

The score-per-story represents points earned via rateable variables.
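
To make the arithmetic concrete, the sketch below tallies a score-per-story by summing the rateable-variable points listed above. Only the maximum point values come from the study; the variable names, the dictionary structure, and the capping behavior are illustrative assumptions, not the project's actual coding software.

    # Sketch: summing rateable-variable points into a score-per-story.
    # Maximum caps come from the table above; everything else is illustrative.
    MAX_POINTS = {
        "focus": 10,
        "enterprise": 8,
        "source_expertise": 9,
        "balance_num_sources": 5,
        "balance_viewpoints": 5,
        "sensationalism": 3,
        "presentation": 2,
        "community_relevance": 8,
    }

    def score_per_story(ratings: dict) -> int:
        """Sum the points awarded to a story, capped at each variable's maximum."""
        total = 0
        for variable, cap in MAX_POINTS.items():
            awarded = ratings.get(variable, 0)
            total += min(awarded, cap)
        return total  # maximum possible score is 50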

Topic Diversity

Per the design team's directives, no story points were earned for topics; that is, no one topic was considered more important than another. Instead, the score-per-broadcast was calculated to reward stations for topic diversity, taking into account the number of stories presented and allowing for the additional minutes often added in post-prime timeslots or for the occasional broadcast where taping error occurred. For each news broadcast, a story:topic ratio was calculated by dividing the number of stories by the number of topics. Three adjustments were made:

1. One-hour broadcasts were adjusted to 75% of their story total so that their ratios would not be disproportionately penalized, while still acknowledging their advantage over half-hour broadcasts in having additional time in which to achieve story topic diversity.

2. Some stations present one-hour broadcasts Monday - Thursday and an abbreviated broadcast on Friday; adjustments were made to compensate for these differences.

3. The timeframes selected were weeks when broadcasts could justifiably concentrate on the presidential primary season; thus, adjustments were made to avoid penalizing stations that presented extensive coverage of that topic.

The story:topic ratio was then converted to a broadcast multiplier. Broadcast scores-per-story were totaled, then divided by the number of stories, to reach an average score-per-story. The appropriate multiplier was then applied to the average score-per-story to reach the daily broadcast score. Finally, each station's 10 daily broadcast scores were totaled to reach the aggregate station score.
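
The report describes the order of operations but not the multiplier table itself, so the ratio-to-multiplier tiers in the sketch below are placeholder assumptions. The rest simply follows the steps described above: story:topic ratio, broadcast multiplier, average score-per-story, daily broadcast score, and the ten-day aggregate.

    # Sketch of the broadcast-level scoring pipeline described above.
    # The ratio-to-multiplier tiers are NOT taken from the report; they are
    # placeholders used only to show the order of operations.
    def broadcast_multiplier(num_stories: int, num_topics: int,
                             one_hour: bool = False) -> float:
        stories = num_stories * 0.75 if one_hour else num_stories  # adjustment 1 above
        ratio = stories / num_topics  # story:topic ratio
        # Placeholder tiers: lower ratios (more topic diversity) earn a larger multiplier.
        if ratio <= 1.5:
            return 1.10
        elif ratio <= 2.5:
            return 1.00
        return 0.90

    def daily_broadcast_score(story_scores: list, num_topics: int,
                              one_hour: bool = False) -> float:
        average = sum(story_scores) / len(story_scores)  # average score-per-story
        return average * broadcast_multiplier(len(story_scores), num_topics, one_hour)

    def aggregate_station_score(daily_scores: list) -> float:
        # Ten daily broadcast scores (two five-day monitoring periods) are summed.
        return sum(daily_scores)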

The data collected, analyzed, and presented in this study are drawn from the Nielsen National Station Index (NSI) produced by Nielsen Media Research. The NSI measures television viewership for all U.S. television markets quarterly. The quarterly measurement periods are known as "Sweeps Months," and include February, May, July and November. The data presented in this report are based on Nielsen estimates of the weekday average household rating for each of 12 sweeps periods ranging from May 1997 to February 2000 for 50 local news telecasts in 15 Designated Market Areas (DMAs).

Ordinary least-squares regression was used to determine the slope (ratings trend over the three-year period) for each newscast, with time as the independent variable and rating as the dependent variable. It should be noted that the trendline distribution reflects Winsorized data. Because of an inordinate amount of missing data, KGUN (Tucson) was an outlier and was removed from the analysis. From the distribution of slopes, a five-point coding scheme was used to assign a trend value to each newscast. The five-point scheme is curved, based on the mean and standard deviation of the sample.
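
For illustration only, a minimal sketch of the slope estimate and a curved five-point assignment follows. The half-standard-deviation cut points are assumptions, since the study states only that the scheme is curved on the sample mean and standard deviation.

    # Sketch: ratings-trend slope and a curved five-point trend value.
    # numpy's polyfit gives the ordinary least-squares slope; the cut points
    # below are assumptions, not the study's actual values.
    import numpy as np

    def ratings_slope(ratings: list) -> float:
        """OLS slope of household rating against time (sweeps period index)."""
        periods = np.arange(len(ratings))  # time as the independent variable
        slope, _intercept = np.polyfit(periods, ratings, 1)
        return slope

    def trend_value(slope: float, slopes_mean: float, slopes_sd: float) -> int:
        """Assign a 1-5 trend value from the distribution of slopes."""
        z = (slope - slopes_mean) / slopes_sd
        if z >= 1.0:
            return 5
        if z >= 0.5:
            return 4
        if z > -0.5:
            return 3
        if z > -1.0:
            return 2
        return 1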

In four markets, New York, Boston, Minneapolis and Wichita, data for late-evening, major affiliate newscasts were collected and analyzed over a five-year period (May 1995 through February 2000). From the distribution of slopes, another five-point coding scheme was used to assign a trend value for each newscast. As in the coding above, the value assignment was curved, reflecting the means and standard deviations of the trend data obtained throughout the past three years of this study (coinciding with the five years in the analysis).

For 29 newscasts in 12 markets, ratings data were obtained for the half-hour period prior to each newscast. These data were used to determine the trend in lead-in retention for each newscast. Lead-in retention is defined as the newscast's rating expressed as a percentage of the lead-in period's rating. For example, if a newscast received a household rating of 8 while the lead-in half hour received a rating of 10, this would be calculated as 80% retention. Ordinary least-squares regression was used to determine the slope (percent retention trend over the three-year period) for each newscast, with time as the independent variable and lead-in retention (percentage) as the dependent variable. This type of analysis is a fairer way of assessing lead-in retention than a simple average because it is not influenced by the magnitude of the ratings difference between the lead-in program and the newscast. From the distribution of slopes, a five-point coding scheme was employed in a method similar to those for the ratings trends.
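
A brief sketch of the retention calculation and its trend appears below, under the same caveat that the implementation details are assumptions rather than the study's actual procedures.

    # Sketch: lead-in retention (percent) per sweeps period, then its OLS trend.
    # Illustrative only; reuses the same least-squares approach as the ratings trend.
    import numpy as np

    def retention_series(newscast_ratings: list, lead_in_ratings: list) -> list:
        """E.g., a newscast rating of 8 against a lead-in rating of 10 is 80% retention."""
        return [100.0 * news / lead for news, lead in zip(newscast_ratings, lead_in_ratings)]

    def retention_slope(newscast_ratings: list, lead_in_ratings: list) -> float:
        retention = retention_series(newscast_ratings, lead_in_ratings)
        periods = np.arange(len(retention))  # time as the independent variable
        slope, _intercept = np.polyfit(periods, retention, 1)
        return slope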

The aggregate score was then matched with ratings information to arrive at the final letter grade for each station.

Intercoder Reliability

Intercoder reliability measures the extent to which two coders, operating individually, reach the same coding decisions. For this project, the principal coding team comprised four individuals, who were trained as a group. One coder was designated as the control coder and worked off-site for the duration of the project. At the completion of the general coding process, the on-site coders, working alone and without access to the control coder's work, recoded 40% of the broadcasts completed by the control coder. Comparing the general coders' daily broadcast scores with the control coder's, daily scores were found to be reliable within +/- 0.78 points per day.
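
The +/- 0.78 figure comes from the study; one plausible reading of that comparison is a mean absolute difference between the control coder's daily broadcast scores and the general coders' scores for the same broadcasts, as sketched below. The choice of statistic is an assumption, not the study's stated method.

    # Sketch: comparing control-coder and general-coder daily broadcast scores.
    # The mean-absolute-difference statistic is an assumption; the report states
    # only that daily scores agreed within +/- 0.78 points per day.
    def mean_absolute_difference(control_scores: list, general_scores: list) -> float:
        diffs = [abs(c - g) for c, g in zip(control_scores, general_scores)]
        return sum(diffs) / len(diffs)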