A new breed of tabloid is more than screaming headlines, but could it be a blueprint for the future?

Methodology

The print media—tabloids and broadsheets—were examined using a common methodological approach to sampling, selection, and coding. In all, the study examined 2,321 stories: 981 from newspaper-owned tabloids, 634 from independently owned tabloids, and 706 from broadsheets.

Sample Design

Seven newspapers were monitored for 10 days: four free daily tabloids and three daily broadsheets, drawn from three cities across the U.S. Cities were selected to allow a meaningful assessment of content and a direct comparison of tabloid and broadsheet coverage. Selection rested on three factors. First, a city needed to house both a daily broadsheet and a free commuter tabloid. Second, we looked for a diversity of tabloid owners. Third, back issues of each paper needed to be available either in hard copy or through Lexis-Nexis.

Operative Dates

Random sampling was used to select individual days for the study. By choosing individual days rather than whole weeks, we aimed to capture a broader, more representative picture of coverage over time. To account for variation across the days of the week, the 10 sampled days included two of each weekday, Monday through Friday. Dates were chosen from April 1, 2004 to August 15, 2004. One issue of the Examiner (May 13) and one issue of Quick (May 26) are missing because back issues were not available from the news organization.

The following dates were generated and make up the 2004 sample.

Thursday, April 28

Friday, May 13

Wednesday, May 25

Thursday, May 26

Monday, June 6

Tuesday, June 14

Monday, July 18

Tuesday, July 19

Wednesday, July 20

Friday, August 5
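The draw described above amounts to a stratified random sample: two dates per weekday within the sampling window. A minimal sketch follows; the function name and seed are illustrative, not part of the study's procedure.

```python
import random
from datetime import date, timedelta

def sample_weekdays(start, end, per_weekday=2, seed=None):
    """Randomly draw `per_weekday` dates for each weekday (Mon-Fri)
    within [start, end], mirroring the study's stratified design."""
    rng = random.Random(seed)
    days = [start + timedelta(d) for d in range((end - start).days + 1)]
    sample = []
    for weekday in range(5):  # 0 = Monday ... 4 = Friday
        pool = [d for d in days if d.weekday() == weekday]
        sample.extend(rng.sample(pool, per_weekday))
    return sorted(sample)

dates = sample_weekdays(date(2004, 4, 1), date(2004, 8, 15), seed=7)
```

A different seed (or none) yields a different valid sample; the stratification guarantees exactly two of each weekday regardless.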

Story Procurement, Selection, and Inclusion

Stories were procured via hard copies of daily publications, supplemented by Lexis-Nexis.

For tabloids: All stories of two paragraphs or longer were selected for analysis. Calendar listings, job listings, and similar items were excluded, as were letters to the editor.

For broadsheets: All stories of two paragraphs or longer that appeared on the newspaper's front page (Page A1) or on the first page of the Local/Metro, Business, Style/Culture, or Sports section were selected for analysis. Stories were coded in their entirety, including any jumps to inside pages.
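The inclusion rules for both formats can be collapsed into a single filter. The record fields and type labels below are hypothetical names for illustration, not drawn from the study's codebook.

```python
from dataclasses import dataclass
from typing import Optional

# Section fronts that qualify a broadsheet story for inclusion.
FRONT_SECTIONS = {"A1", "Local/Metro", "Business", "Style/Culture", "Sports"}
# Item types excluded regardless of length.
EXCLUDED_TYPES = {"calendar listing", "job listing", "letter to the editor"}

@dataclass
class Story:                      # hypothetical record; field names assumed
    paragraphs: int
    item_type: str                # e.g. "news", "letter to the editor"
    is_tabloid: bool
    section_front: Optional[str]  # section front the story appeared on, if any

def include(story: Story) -> bool:
    """Two paragraphs or longer, no listings or letters; broadsheet
    stories must additionally appear on a qualifying section front."""
    if story.paragraphs < 2 or story.item_type in EXCLUDED_TYPES:
        return False
    return story.is_tabloid or story.section_front in FRONT_SECTIONS
```

Tabloid stories pass on length and type alone; broadsheet stories face the extra section-front test.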

Coding Process

General practice called for a coder to work through no more than seven days/issues from any one newspaper during a coding session. After completing up to seven days/issues from one publication, coders switched to another outlet and continued in the same fashion.

Working with a standardized codebook and coding rules, coders generally worked through each story in its entirety, beginning with the Inventory Variables—source, dateline, length, etc. Then, stories were coded for content variables—topic, big story, number of sources, young demographic impact, principal newsmaker age group, journalist opinion, range of viewpoint, background, and future implication and impact. In all cases, coders worked with a defined set of rules per variable.

Of particular note:

Journalist Opinion: This measures the presence of a journalist's unsupported opinion or speculation in a story. Columns, editorials, reviews, and other opinion-based items are coded as non-applicable.

A journalist's speculation or opinion is an assertion with no source or reporting as its basis. If a journalist has first-hand knowledge of an event, a statement about that event is not opinion (e.g., a reporter saw the aftermath of a car bomb). However, it must be clear that the reporter had first-hand knowledge, and the statement must be something that could be confirmed. Predicting the future, for example, is speculation because it cannot be confirmed at the time.

Easily verifiable factual statements are neither opinion nor analysis. This includes statements about such things as addresses, ages, publicly revealed agreements and statements, and historical events. If it is unclear whether an unsourced assertion or paragraph is opinion or a statement of fact needing no verification, the decision rests on whether the assertion could be easily refuted or verified by a person or reference source readily available to the journalist or audience members.

Range of Viewpoint: Coders were instructed along the following lines:

Examine the story to see if it has explicit disagreement or conflict over an event or issue. Disagreement involves representatives from at least one position explicitly (not implied) stating that representatives of other positions are incorrect/wrong, acted improperly/inappropriately, or acted immorally. If not, code it “non-applicable/non-controversial” below.

If there is disagreement or conflict, identify the number of paragraphs or assertions for various sides. Then apply the proportions in the subcategories below.

Many paragraphs or assertions in a story may not take sides. When measuring the percentage of opinions, consider only those portions of the story where opinions are expressed; do not automatically credit neutral reporting to one side or the other.

Some stories may include more than one example of conflict or disagreement (e.g., a story summarizing multiple events in Iraq). In such stories, a particular side of one of the conflicts must reach 66% to be coded “mostly one opinion.”
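Applied mechanically, the proportion test above reads as follows. The label for the below-threshold case is an assumption, since the codebook's exact subcategory names are not reproduced here.

```python
def classify_viewpoint(side_counts, threshold=0.66):
    """side_counts maps each side of a disagreement to its count of
    opinionated paragraphs/assertions (neutral reporting excluded).
    A side reaching the threshold makes the story 'mostly one opinion'."""
    total = sum(side_counts.values())
    if total == 0:
        return "non-applicable/non-controversial"
    if max(side_counts.values()) / total >= threshold:
        return "mostly one opinion"
    return "mix of opinions"  # assumed label for the balanced case
```

For a story with multiple conflicts, the rule would be applied per conflict, with a single conflict's dominant side needing to reach the 66% threshold.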

“Refused to Comment” rule in effect: if the reporter explicitly states that the “other side” refused to comment, that attempt should be counted and coded as a source at the appropriate level.

Intercoder Reliability

Intercoder reliability measures the extent to which individual coders, operating independently of one another, reach the same coding decision. Tests were performed throughout the project, and no systematic errors were found. Senior project staff made all final decisions on both the inventory and content variables.

At the completion of the general coding process, each coder, working alone and without access to the initial coding decisions, re-coded publications originally completed by another coder. Intercoding tests were performed on 8% of all cases in connection with inventory variables, and agreement rates exceeded 99% for those variables. For the more difficult content variables, 8% of all publications/sites were re-coded, and intercoder agreement rates were as follows:

Story Origination: 93%

Geographic Focus: 83%

Story Topic: 89%

Number of Sources: 84%

Young Demographic Impact: 87%

Principal Newsmaker Age Group: 85%

Journalist Opinion: 89%

Range of Viewpoint: 85%

Background Information: 96%

Future Implications or Impact: 82%
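The rates reported above appear to be simple percent agreement between pairs of coders. A minimal computation (not chance-corrected, unlike statistics such as Cohen's kappa, which the study does not report) looks like this:

```python
def percent_agreement(coder_a, coder_b):
    """Share of cases where two coders, working independently,
    made the same decision on a given variable."""
    if not coder_a or len(coder_a) != len(coder_b):
        raise ValueError("need two equal-length, non-empty coding lists")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)
```

For example, two coders agreeing on 3 of 4 stories yields 75% agreement for that variable.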