August 22, 2005

Box Scores and Bylines

Methodology



SAMPLING AND INCLUSION

The content analysis study of sports pages was part of a broader study that looked at both text-based media (including newspapers and Internet news sites) and electronic media (including broadcast network and cable network news). The complete study can be found at www.stateofthemedia.org.

In all, the sports study examined 2,081 stories from the front pages of 16 different sports sections. Princeton Survey Research Associates International conducted coding for the sports stories. Esther Thorson of the University of Missouri School of Journalism conducted the statistical and methodological work for the report.

Front Pages of Newspapers

Newspaper Selection

Individual newspapers were selected to present a meaningful assessment of the content that is widely available to the public. Selections were made on both a geographic and a demographic basis, with attention to diversity of ownership as well.

First, newspapers were divided into four groups based on daily circulation: over 750,000; 300,001 to 750,000; 100,001 to 300,000; and 100,000 and under.

We included four newspapers over 750,000: USA Today, The Los Angeles Times, The New York Times, and The Washington Post. (The Wall Street Journal, which also falls in this category, was excluded as a specialty publication.)

Four newspapers were chosen in each of the remaining three categories. To ensure geographical diversity, each of the four newspapers within a circulation category was selected from a different geographic region of the U.S. Regions were defined according to the parameters established by the U.S. Census Bureau.

The newspapers in circulation groups two through four were selected through the following process:

First, using the Editor and Publisher Yearbook, we created a list of every daily newspaper in the U.S. Within each category, newspapers were selected at random until all categories were filled. To be eligible for selection, a newspaper was required to a) have a Sunday section, b) have a daily sports section, c) have its stories indexed in a news database so that they were available to coders, and d) not be a tabloid. Newspapers not meeting those criteria were skipped. In addition, an effort was made to ensure diversity in ownership.
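
The selection process above amounts to a random draw within circulation strata, subject to eligibility filters. The sketch below illustrates that logic; the paper record, field names, and helper functions are invented for illustration and are not part of the study's actual tooling.

```python
import random

# Hypothetical paper records; in the study, the list came from the
# Editor and Publisher Yearbook. Field names here are illustrative.
papers = [
    {"name": "Example Bee", "circulation": 280_000, "sunday": True,
     "daily_sports": True, "indexed": True, "tabloid": False},
    # ... one record per U.S. daily newspaper
]

def stratum(circulation):
    """Assign a paper to one of the four circulation groups."""
    if circulation > 750_000:
        return 1
    if circulation > 300_000:
        return 2
    if circulation > 100_000:
        return 3
    return 4

def eligible(p):
    """Apply the four inclusion criteria from the methodology."""
    return p["sunday"] and p["daily_sports"] and p["indexed"] and not p["tabloid"]

def draw_group(papers, group, k=4, seed=None):
    """Randomly draw up to k eligible papers from one circulation stratum."""
    rng = random.Random(seed)
    pool = [p for p in papers if stratum(p["circulation"]) == group and eligible(p)]
    rng.shuffle(pool)
    return pool[:k]
```

Note that this sketch does not capture the geographic-region constraint (one paper per Census region within each group) or the ownership-diversity adjustment, both of which the study applied on top of the random draw.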

Circulation Group 1

Los Angeles Times, New York Times, USA Today, Washington Post

Circulation Group 2

Cleveland Plain Dealer, Dallas Morning News, Philadelphia Inquirer, Sacramento Bee

Circulation Group 3

Albuquerque Journal, Asbury Park Press, Kansas City Star, San Antonio Express-News

Circulation Group 4

Bloomington (Illinois) Pantagraph, Hanover (Pennsylvania) Evening Sun, McAllen (Texas) Monitor, Vacaville (California) Reporter

Newspaper Study Operative Dates, 2004

Random sampling was used to select the individual days for the study. By choosing individual days rather than whole weeks, we hoped to provide a broader look at news coverage that more accurately represented the entire year. To account for variations across the days of the week, the 28 sampled days included four of each day of the week. Dates were chosen from January 1 to October 13, a span of 286 days; October 13 was made the cutoff date to allow time for coding. Omitted dates included those of the Olympics and the Republican and Democratic National Conventions.

The following dates were generated and make up the 2004 sample.

January: 13, 16, 23
February: 2, 13, 23, 29
March: 8, 12, 13, 14, 19, 24
April: 8, 15
May: 1, 4, 20
June: 8, 9, 16
July: 19, 25
August: 10, 12
September: 4, 22, 26
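
The day-selection procedure described above is a stratified random draw: four dates per weekday from the January 1 to October 13 window, with certain dates excluded. A minimal sketch, assuming the exclusion windows are supplied separately (the actual Olympics and convention dates are not reproduced here):

```python
import random
from datetime import date, timedelta

# Placeholder for the omitted dates (Olympics, national conventions);
# the study's exact exclusion windows are not reproduced here.
EXCLUDED = set()

def sample_days(start=date(2004, 1, 1), end=date(2004, 10, 13),
                per_weekday=4, seed=None):
    """Draw per_weekday dates for each day of the week from [start, end]."""
    rng = random.Random(seed)
    days = [start + timedelta(d) for d in range((end - start).days + 1)]
    days = [d for d in days if d not in EXCLUDED]
    sample = []
    for weekday in range(7):  # Monday=0 .. Sunday=6
        pool = [d for d in days if d.weekday() == weekday]
        sample.extend(rng.sample(pool, per_weekday))
    return sorted(sample)
```

With seven weekdays and four draws each, the sample always contains 28 dates, matching the study design.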

Story Procurement, Selection, and Inclusion

Stories were procured via hard copies of daily publications, supplemented by a combination of electronic databases (DIALOG, FACTIVA, and NEXIS).

All stories with distinct bylines that appeared on a particular newspaper's front page (Page A1), on the first page of the Local/Metro section, or on the first page of the sports section were selected for analysis.

CODING PROCEDURES

General practice called for a coder to work through no more than seven days/issues from any one newspaper during a coding session. After completing up to seven days/issues from one publication, coders switched to another text-based media outlet and continued coding, again for up to seven days/issues.

All coding personnel rotated through all circulation groups and publications/sites, with the exception of the designated control publications. A control publication was chosen in each category of text media. Each designated control publication/date was initially handled by only one coder; that work was then over-sampled during intercoder reliability testing.

Working with a standardized codebook and coding rules, coders generally worked through each story in its entirety, beginning with the Inventory Variables – publication date, story length, placement, and origination. Next, they recorded the codes for each story's "content variables" – topics, recurring leads/big stories, newsmakers, tone, sourcing levels, and frame. Additional variables for Internet outlets measured links to graphics, audio, video, and photo galleries; and for the five multiple-download days, an additional variable measured story freshness.

Intercoder Reliability Testing

Intercoder reliability measures the extent to which two coders, operating independently, reach the same coding decisions. The principal coding team for text media comprised four people who were trained as a group. One coder was designated as a general control coder and worked off-site for the duration of the project. In addition, one newspaper was designated as a control source.

At the completion of the general coding process, each coder, working alone and without access to the initial coding decisions, re-coded publications originally completed by another coder. Intercoder tests were performed on 5% of all cases for the inventory variables, covering all print stories: A1, Metro, and sports front pages. Agreement rates exceeded 98% for those variables. For the more difficult content variables, 20% of all publications/sites were re-coded, and intercoder agreement rates were as follows:

Trigger: 93%

Politics Trigger: 97%

Big Story: 96%

Campaign Trigger: 98%

Topic: 92%

Newsmaker: 90%

Tone: 96%

Source Transparency: 95%

Anonymous Sources: 98%

Data: 97%

Female Sources: 98%

Male Sources: 97%

Mix of Viewpoints: 92%

Stakeholders: 90%

Jnlst. Opinion/Speculation: 90%

Dominant Frame: 88%

Additional Frame: 87%

No significant differences between coders were found on a recurring basis.
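
The agreement rates reported above are simple percent agreement: the share of cases on which two coders assigned the same code. A minimal sketch (coder labels are illustrative):

```python
def percent_agreement(coder_a, coder_b):
    """Percentage of cases on which two coders assigned the same code."""
    if len(coder_a) != len(coder_b):
        raise ValueError("coders must rate the same set of cases")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * matches / len(coder_a)
```

For example, two coders agreeing on three of four cases yields 75%. Note that simple percent agreement does not correct for agreement expected by chance, which chance-corrected statistics such as Cohen's kappa are designed to address; the report states its results only as percent agreement.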