Non-Profit News

About the study

The study, Non-Profit News: Assessing a New Landscape in Journalism, involved several phases, all of which were performed in-house by PEJ researchers.

The primary PEJ staff members conducting the research, analysis and writing included: Tricia Sartor, Weekly News Index manager; Kevin Caldwell, researcher/coder; Nancy Vogt, researcher/coder; Jesse Holcomb, research associate; Amy Mitchell, deputy director; Tom Rosenstiel, director.

Other staff members who made substantial contributions to the report were: Christine Bhutta, research associate; Paul Hitlin, senior researcher; Dana Page, communications and creative design manager.

Copy-editing was done by Molly Rohal, communications coordinator for Pew Research Center. Number-checking was done by Steve Adams, researcher/coder.

The following Pew Research Center staff provided design assistance on the interactive tool: Russell Heimlich, web developer; Michael Piccorossi, director of digital strategy and IT; Michael Keegan, graphics director for Pew Social & Demographic Trends and Pew Hispanic Center; Carrie Madigan, informational graphic designer for Pew Social & Demographic Trends and Pew Hispanic Center.

The first phase of research was to identify the media universe. This occurred in May-June 2010. The second phase was conducting audits of the online sites (June-August 2010). The third phase was the content capture and analysis (September 2010-January 2011). Finally, once the coding was complete, several scales were developed as a means of presenting the specific findings on ideology, transparency and productivity.

Details of each of these phases follow.

Defining the Media Universe

Researchers took several steps to define the universe of non-profit news organizations.

Researchers compiled the universe of sites by using three different techniques. First, researchers consulted lists of such news outlets that had already been compiled by academics and journalists who monitor this emerging field. These sources include:

  • Michele McClellan’s (Reynolds Journalism Institute) list of promising community news sites
  • Brant Houston’s (University of Illinois) list of investigative news consortia, discussed in the 2010 report “Ethics for the New Investigative Newsroom”
  • Knight Digital Media Center
  • The Poynter Institute (including David Sheddon’s Transformation Tracker and Josh Stearns’ Groundswell)
  • Newseum/Online News Association’s Online Journalism Awards lists
  • Online Journalism Review
  • Online News Association
  • Nieman Journalism Lab

Second, researchers searched the web for news organizations using the following terms:

[state]

Third, researchers identified and contacted each state’s press association and asked for a list of credentialed online media outlets. In addition, researchers asked the associations if they were aware of any additional online journalism startups in their state.

The field was narrowed according to the following criteria:

  • Outlets must produce at least one original story per week.
  • Outlets must exist entirely online.
  • Outlets must have launched no earlier than 2005.
  • Outlets must list editorial/reportorial staff.
  • Content must consist primarily of reported news, as opposed to opinion and analysis.
  • Outlets may not be the creation of a larger, established media company (e.g., The Nation Institute).
  • The mission of the outlet must be primarily to produce content, rather than treating content creation as ancillary to a different mission.
  • Outlets must be based in the U.S.
  • Outlets must focus on news at the state or national level, not the local or hyperlocal level.
  • Outlets must not be dedicated to niche topic areas such as health or finance news, but must instead cover a broader range of subjects.

While the focus of the study is non-profit news outlets, researchers did not limit their searches to non-profit sites alone; commercial or for-profit sites that met all the criteria above were included in the study to serve as a point of comparison.

The final universe for the study consisted of the following 46 online news outlets (in alphabetical order):

Alaska Dispatch, Alaska Watchdog, Cal Watchdog, California Watch, Colorado Independent, Colorado News Agency, Connecticut Mirror, CT News Junkie, Idaho Reporter, Illinois Statehouse News, Iowa Independent, Kansas Watchdog, Maine Watchdog, Maryland Reporter, Michigan Messenger, MinnPost, Missouri News Horizon, Missouri Watchdog, Montana Watchdog, Nebraska Watchdog, Nevada News Bureau, New Hampshire Watchdog, New Jersey Newsroom, New Jersey Spotlight, New Mexico Watchdog, New West, Oklahoma Watchdog, Pennsylvania Independent, Progress Illinois, ProPublica, Sunshine State News, Tennessee Report, Tennessee Watchdog, Texas Tribune, Texas Watchdog, The Daily Caller, The Florida Independent, The Minnesota Independent, The Nerve, The New Mexico Independent, The North Carolina Independent News (now closed), The Texas Independent, The Washington Independent (now closed), Virginia Statehouse News, Virginia Watchdog (Old Dominion Watchdog), West Virginia Watchdog

Site Audits

Once the online news sites were selected, researchers conducted an audit of each site and the primary funding or underwriting organizations behind it, using a codebook specifically developed for this phase of the study. Researchers primarily limited themselves to the information that was readily available on the site, so if particular pieces of information could not be found, this was noted. In some instances, particularly those relating to underwriters, researchers reached beyond the site to try to obtain the missing information by conducting additional web searches and consulting other investigative news reports about the sites.

The audit assessed the following variables:

  • Name of outlet
  • Year launched
  • Does the outlet contain editorial/reporting staff (yes/no)
  • Staff makeup
  • Number of editorial/reporting staff listed
  • Intended geographic scope
  • Stated general focus
  • Frequency of original reporting
  • Frequency of blog updates
  • Tax status
  • Revenue streams
  • Transparency of news site:
    • How transparent is the outlet in describing its mission?
    • Does the outlet provide full contact information?
    • Do stories include bylines?
    • How transparent is the outlet in revealing its funding sources?
  • Underwriting structure
  • About the primary funder(s):
    • If foundation/non-profit (describe)
    • If private company/firm (describe)
    • If individual/family (describe)
  • Transparency of funder(s):
    • How transparent is the funder in describing its mission?
    • How transparent is the funder regarding its own financial information?

Capture and Content Retrieval

For one full month (September 7-October 5, 2010), researchers captured and archived all relevant news content (text and audio/video) appearing Monday through Friday on home pages of the sites. All news stories, including the occasional interview and analysis piece, were captured. If a story continued beyond the home page, the entire story was captured. Content clearly identified as blog posts or opinion pieces was not captured. News roundups and stories originating from other outlets were not captured.

For the majority of sites, capturing was done the day following publication. In other words, coders captured all of Monday’s stories on Tuesday morning. The exception was a select group of sites that regularly published a high volume of content. To ensure that none of these sites’ content disappeared before researchers were scheduled to capture it, one staff member was designated to capture stories around 4 p.m. on the day of publication.

Determining the Coding Sample and Content Analysis

A team of five experienced PEJ coders conducted a content analysis on the universe of stories using a codebook specifically designed for the study. Coders went through an extensive training procedure and achieved an inter-coder reliability of 82% across all variables. Coders achieved a reliability of 80% across the three variables that together measured the presence of ideology in the content: story theme, range of viewpoints and target of exposés.

In addition to several ‘housekeeping’ variables (date coded, source, story date), the content analysis included the following key variables:

  • Story Describer – a short description of the content of the story
  • Story Topic – determines the broad topic categories addressed by a story, such as government, crime, education, labor, economy, immigration etc.
  • Range of Viewpoints – a measure of the effort of the news organization to present a balanced story by presenting multiple viewpoints on a topic or issue that involves some level of possible controversy
  • Presence of Journalist Opinion – a simple yes-or-no variable that indicates the presence of the journalist’s opinion
  • Target of Wrongdoing – identification of the focus of an investigation or evaluation of specific allegations of malfeasance, corruption, improper behavior, ethical breaches, etc.
  • Story Themes – an assessment of whether the assertions in a story, taken together, drive home a political or partisan message. The following themes were studied:
    • “The hand of government can make society fairer, better, and can bring benefits”
    • “Government regulations, oversight and taxes tend to interfere with progress and the power of the market.”
    • Democratic/liberal figure(s) - positive steps/signs/actions
    • Democratic/liberal figure(s) - negative steps/signs/actions
    • Democratic/liberal figure(s) - is corrupt
    • Republican/conservative figure(s) - positive steps/signs/actions
    • Republican/conservative figure(s) - negative steps/signs/actions
    • Republican/conservative figure(s) - is corrupt
    • Democratic/liberal AND Republican/conservative figures - positive steps/signs/actions
    • Democratic/liberal AND Republican/conservative figures - negative steps/signs/actions
    • Democratic/liberal AND Republican/conservative figures - are corrupt
    • Other figure(s) - positive steps/signs/actions
    • Other figure(s) - negative steps/signs/actions
    • Other figure(s) - is corrupt
    • Wedge issue - liberal point of view
    • Wedge issue - conservative point of view

Data Analysis

Researchers combined the results of the three ideology variables (story theme, range of viewpoints and target of exposés) into a scale in order to create a summary measure of ideology that would be more reliable than any of the individual indicators alone.

The same was done for the five key transparency indicators that were measured in the audit: transparency about a site’s mission and funding sources; the site’s accessibility; and the transparency of the site’s primary funders about their mission and their own funding sources.

A third scale was created to measure the various indicators of a site’s productivity, including the size of a site’s editorial/reporting staff; its volume of news articles per week; and the amount of blog or opinion content produced by the site in a typical week.

The sites were given scores on the three 0-100 point rating scales in the following ways.

Ideology scale

Story theme

  • Very partisan: 50% or more of site’s stories tilted to one ideological point of view, and by a ratio of at least 2-to-1. (Score is 100).
  • Somewhat partisan: 40%-49% of a site’s stories tilted to one ideological POV, with at least 2-to-1 ratio. (Score is 66.7).
  • Slightly partisan: 25%-39% of a site’s stories tilted to one ideological POV, with at least 3-to-2 ratio. (Score is 33.3).
  • Non-partisan: 0%-24% of a site’s stories tilted to one ideological POV, AND there is less than a 2-to-1 ratio of one POV over another. (Score is 0).
  • Non-thematic: At least 60% of stories had no theme, and less than 25% leaned in any one direction. (Score is 0).
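The story-theme categories above can be read as a decision rule over two quantities: the share of a site’s stories tilting toward its dominant point of view, and the ratio of that point of view over the other. A minimal Python sketch follows; the function name is hypothetical, and since the report does not specify how overlapping conditions were resolved, the ordering of checks here is an assumption.

```python
def theme_partisanship(p_dominant, ratio, p_no_theme):
    """Score the story-theme component (0-100) from the share of stories
    tilting toward the dominant point of view (p_dominant), the ratio of
    that POV over the other (ratio), and the share of stories with no
    theme (p_no_theme). Check ordering is an assumption."""
    if p_no_theme >= 0.60 and p_dominant < 0.25:
        return 0          # non-thematic
    if p_dominant >= 0.50 and ratio >= 2:
        return 100        # very partisan
    if p_dominant >= 0.40 and ratio >= 2:
        return 66.7       # somewhat partisan
    if p_dominant >= 0.25 and ratio >= 1.5:
        return 33.3       # slightly partisan
    return 0              # non-partisan

# A site where 45% of stories tilt one way, at better than 2-to-1:
theme_partisanship(0.45, 2.2, 0.30)   # 66.7 (somewhat partisan)
```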

Target of exposés

  • Very partisan: Site targets only liberal actors or conservative actors. (Score is 100).
  • Somewhat partisan: Site targets both liberal and conservative actors, but targets one of these more often, by a ratio of at least 2-to-1. (Score is 50).
  • Non-partisan: Site does not target any liberal or conservative actors, or, site targets liberal and conservative actors in equal measure. (Score is 0).

Range of viewpoints

  • Highly one-sided: 60% or more of a site’s stories featured a single POV. (Score is 100).
  • Somewhat one-sided: A majority, but less than 60%, of a site’s stories featured a single POV. (Score is 66.7).
  • Slightly one-sided: A plurality of the site’s stories featured a single POV, but not a majority. (Score is 33.3).
  • Even-handed: A plurality or majority of the site’s stories featured two or more POVs. (Score is 0).

In addition to a content analysis, researchers evaluated the sites on transparency and productivity. Each of those items consisted of several related variables that were scaled together as follows.

Transparency scale

Transparency of mission

Mission statement

  • Very transparent: Site contains detailed ‘about us’ section, including description of mission, reason for launch, and what its objectives are. (Score is 100).
  • Somewhat transparent: Site contains brief ‘about us’ section that hints at mission. (Score is 50).
  • Not transparent: No ‘about us’ statement. (Score is 0).

Primary funder(s) mission statement

  • Very transparent: Funder provides detailed ‘about us’ section, including description of mission, reason for launch, and what its objectives are. (Score is 100).
  • Somewhat transparent: Funder provides brief ‘about us’ section that hints at mission. (Score is 50).
  • Not transparent: No ‘about us’ statement. (Score is 0).

Site availability

  • Very transparent: complete contact information (address, e-mail, phone number). (Score is 100).
  • Somewhat transparent: limited contact information (one or two of the above). (Score is 50).
  • Not transparent: no contact information. (Score is 0).

Transparency of funding

Site’s funding transparency

  • Very transparent: Site is very clear about its funding source(s). Provides a list of key funders and/or individual donors, as well as links to foundations. Outlets that provide financial documents on their sites and provide dollar amounts in connection with supporters are considered highly transparent. (Score is 100).
  • Somewhat transparent: Site is somewhat clear about its funding source(s), and provides a name but either no link or no description of those sources. (Score is 50).
  • Not transparent: no mention of where the site’s funding comes from. (Score is 0).

Primary underwriter’s funding transparency

  • Very transparent: Underwriter is very clear about its funding source(s). Provides a list of funders or a description of the source of its own financial resources, with names if applicable, as well as links to foundations and descriptions of them. (Score is 100).
  • Somewhat transparent: Underwriter is somewhat clear about its funding source(s), and provides a name but either no link or no description of those sources. (Score is 50).
  • Not transparent: no mention of where the underwriter’s funding comes from. (Score is 0).

Productivity scale

Stories per week

  • High volume: 20 or more stories per week. (Score is 100).
  • Moderate volume: 10-19.9 stories per week. (Score is 66.7).
  • Low volume: 5-9.9 stories per week. (Score is 33.3).
  • Very low volume: 0-4.9 stories per week. (Score is 0).

Blogging frequency

  • High volume: more than 5 posts per week. (Score is 100).
  • Moderate volume: 3-5 posts per week. (Score is 66.7).
  • Low volume: 1-2 posts per week. (Score is 33.3).
  • Very low volume: 0 posts per week. (Score is 0).

Staffing size

  • Very large: 11 or more. (Score is 100).
  • Large: 5-10. (Score is 66.7).
  • Small: 2-4. (Score is 33.3).
  • Very small: 1. (Score is 0).
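The three productivity components above all follow the same pattern: a raw count is mapped to a four-step score through descending thresholds. A minimal Python sketch, with a hypothetical helper not drawn from the study’s own materials:

```python
def threshold_score(value, cutoffs):
    """Map a raw count to a four-step component score (100, 66.7, 33.3, 0)
    using descending thresholds, as in the productivity components above."""
    for cutoff, score in zip(cutoffs, (100, 66.7, 33.3)):
        if value >= cutoff:
            return score
    return 0

# Stories per week: thresholds at 20, 10 and 5 stories
stories_score = threshold_score(12, [20, 10, 5])   # 66.7 ("moderate volume")

# Staffing size: thresholds at 11, 5 and 2 staff members
staff_score = threshold_score(7, [11, 5, 2])       # 66.7 ("large")
```

For blogging frequency, the “more than 5 posts per week” cutoff would require a strict greater-than at the top threshold; the sketch assumes whole-number counts, where a cutoff of 6 behaves the same way.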

Each component of the three scales was assigned a weight, and the weights for each scale sum to 100%.

For the ideology scale, the range of viewpoints and story theme components are worth 45% each, while the target of exposés component is worth 10%. For the transparency scale, each of the three components has equal weight (33.3%). For the productivity scale, the stories per week component is worth 30% of the final tally, the blogging frequency component is worth 20% and the staffing size component is worth 50%.
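Putting the weights together, each final scale score is a weighted sum of its 0-100 component scores. A minimal sketch in Python; the weights are taken from the text above, but the dictionary keys and function are hypothetical names, not the study’s own code:

```python
# Component weights for each scale, as described in the text above.
IDEOLOGY_WEIGHTS = {"range_of_viewpoints": 0.45, "story_theme": 0.45, "target_of_exposes": 0.10}
TRANSPARENCY_WEIGHTS = {"mission": 1 / 3, "availability": 1 / 3, "funding": 1 / 3}
PRODUCTIVITY_WEIGHTS = {"stories_per_week": 0.30, "blogging_frequency": 0.20, "staffing_size": 0.50}

def scale_score(components, weights):
    """Combine 0-100 component scores into a single 0-100 scale score."""
    return sum(components[name] * weight for name, weight in weights.items())

# Example: highly one-sided viewpoints (100), somewhat partisan themes (66.7),
# somewhat partisan targets of exposés (50):
ideology = scale_score(
    {"range_of_viewpoints": 100, "story_theme": 66.7, "target_of_exposes": 50},
    IDEOLOGY_WEIGHTS,
)  # 0.45*100 + 0.45*66.7 + 0.10*50 = 80.015
```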

Presentation of Data in Interactive Tool

Ideological scores were presented only for outlets and groups whose number of stories studied exceeded 20. Transparency and productivity scores were presented for all outlets.

Outlets and groups were labeled as highly ideological, somewhat ideological, or non-ideological, depending on their scale scores. The placement of a site or group into one of these categories was determined by evenly dividing the scores into three equal ranges between the lowest ideology score achieved (0) and the highest score achieved (70). The ranges were as follows:

  • 0-23: non-ideological
  • 24-47: somewhat ideological
  • 48-70: highly ideological

Likewise, outlets and groups were labeled as transparent, somewhat transparent, or slightly transparent, depending on their scale scores. Once again, the placement of a site or group into one of these categories was determined by evenly dividing the scores into three equal ranges between the lowest transparency score achieved (20) and the highest (100). The ranges were as follows:

  • 20-46: slightly transparent
  • 47-75: somewhat transparent
  • 76-100: transparent

Finally, outlets and groups were labeled as having high productivity, medium productivity, or low productivity, depending on their scale scores. As with the other measures, the placement of a site or group into one of these categories was determined by evenly dividing the scores into three equal ranges between the lowest productivity score (0) and the highest (100). The ranges were as follows:

  • 0-33: low productivity
  • 34-67: medium productivity
  • 68-100: high productivity
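All three labelings apply the same rule: divide the span between the lowest and highest observed scores into three equal ranges. A sketch of that rule follows; the function name and exact boundary handling are assumptions, and the published ranges round the boundaries to whole numbers.

```python
def label(score, low, high, names):
    """Place a score into one of three equal ranges between the lowest
    and highest observed scores, returning the matching label."""
    third = (high - low) / 3
    if score < low + third:
        return names[0]
    if score < low + 2 * third:
        return names[1]
    return names[2]

IDEOLOGY_LABELS = ("non-ideological", "somewhat ideological", "highly ideological")

# Ideology scores ran from 0 to 70:
label(10, 0, 70, IDEOLOGY_LABELS)   # "non-ideological" (falls in 0-23)
label(50, 0, 70, IDEOLOGY_LABELS)   # "highly ideological" (falls in 48-70)
```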

