About the study
The study, Non-Profit News: Assessing a New Landscape in Journalism, involved several phases, all of which were performed in-house by PEJ researchers.

The primary PEJ staff members conducting the research, analysis and writing were: Tricia Sartor, Weekly News Index manager; Kevin Caldwell, researcher/coder; Nancy Vogt, researcher/coder; Jesse Holcomb, research associate; Amy Mitchell, deputy director; and Tom Rosenstiel, director. Other staff members who made substantial contributions to the report were: Christine Bhutta, research associate; Paul Hitlin, senior researcher; and Dana Page, communications and creative design manager. Copy-editing was done by Molly Rohal, communications coordinator for Pew Research Center, and number-checking by Steve Adams, researcher/coder. The following Pew Research Center staff provided design assistance on the interactive tool: Russell Heimlich, web developer; Michael Piccorossi, director of digital strategy and IT; Michael Keegan, graphics director for Pew Social & Demographic Trends and Pew Hispanic Center; and Carrie Madigan, informational graphic designer for Pew Social & Demographic Trends and Pew Hispanic Center.

The first phase of research, conducted in May-June 2010, was to identify the media universe. The second phase was a series of audits of the online sites (June-August 2010). The third phase was the content capture and analysis (September 2010-January 2011). Finally, once the coding was complete, several scales were developed as a means of presenting the specific findings on ideology, transparency and productivity. Details of each of these phases follow.

Defining the Media Universe

Researchers compiled the universe of non-profit news sites using three different techniques. First, they consulted lists of such news outlets that had already been compiled by academics and journalists who monitor this emerging field. These sources include:
Michele McClellan's (Reynolds Journalism Institute) list of promising community news sites

Second, researchers searched the web for news organizations by using the following terms:
"[state]" and "News organization" Third, researchers identified and contacted each state's press association and asked for a list of credentialed online media outlets. In addition, researchers asked the associations if they were aware of any additional online journalism startups in their state. The field was narrowed according to the following criteria:
Outlets must produce at least one original story per week

While the focus of the study is non-profit news outlets, researchers did not limit their searches to non-profit sites alone; commercial or for-profit sites that met all the criteria above were included in the study to serve as a point of comparison. The final universe for the study consisted of the following 46 online news outlets (in alphabetical order):
Alaska Dispatch

Site Audits

Once the online news sites were selected, researchers conducted an audit of each site and the primary funding or underwriting organizations behind it, using a codebook specifically developed for this phase of the study. Researchers primarily limited themselves to the information that was readily available on the site; if particular pieces of information could not be found, this was noted. In some instances, particularly relating to underwriters, researchers reached beyond the site to try to obtain the missing information by conducting additional web searches and consulting other investigative news reports about the sites. The audit assessed the following variables:
Name of outlet

Capture and Content Retrieval

For one full month (September 7-October 5, 2010), researchers captured and archived all relevant news content (text and audio/video) appearing Monday through Friday on the home pages of the sites. All news stories, including the occasional interview and analysis piece, were captured. If a story continued beyond the home page, the entire story was captured. Content clearly identified as blog posts or opinion pieces was not captured, nor were news roundups or stories originating from other outlets.

For the majority of sites, capturing was done the day following publication; in other words, coders captured all of Monday's stories on Tuesday morning. The exception was a select group of sites that regularly published a high volume of content. To ensure that none of these sites' content disappeared before researchers were scheduled to capture it, one staff member was designated to capture their stories around 4 p.m. on the day of publication.

Determining the Coding Sample and Content Analysis

The capture and content retrieval phase of the project resulted in a total of 2,325 stories. The chronological lists of stories contained in each of the individual subfolders were printed out, and the actual coding sample was determined by selecting every other story captured for each of the sites. Thus, 50% of all stories captured were coded, resulting in a sample of 1,122 stories.[1] A team of five experienced PEJ coders conducted a content analysis on the universe of stories using a codebook specifically designed for the study. Coders went through an extensive training procedure and achieved an inter-coder reliability of 82% across all variables. Coders achieved a reliability of 80% across the three variables that together measured the presence of ideology in the content: story theme, range of viewpoints and target of exposes. In addition to several 'housekeeping' variables (date coded, source, story date), the content analysis included a set of key variables, among them the three ideology indicators just mentioned.
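The sampling and reliability checks described above are simple to express in code. The following is a minimal sketch, not the study's actual tooling; the story records, field names and helper functions are all hypothetical.

```python
from collections import defaultdict

def select_coding_sample(stories):
    """Select every other captured story, per site, in chronological order.

    `stories` is assumed to be a list of dicts with hypothetical
    "site" and "date" keys. (The study made one exception, coding
    all Watchdog.org stories rather than every other one; see note 1.)
    """
    by_site = defaultdict(list)
    for story in stories:
        by_site[story["site"]].append(story)
    sample = []
    for site_stories in by_site.values():
        site_stories.sort(key=lambda s: s["date"])
        sample.extend(site_stories[::2])  # every other story -> 50% sample
    return sample

def percent_agreement(codes_a, codes_b):
    """Simple percent agreement between two coders on one variable."""
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100.0 * matches / len(codes_a)
```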
Data Analysis

In analyzing the content of the sites, several variables were examined in conjunction to indicate the ideological orientation of an outlet or group of outlets.[2] These were the Range of Viewpoints variable, the Target of Exposes variable and the Story Theme variable. Researchers combined the results of these three variables into a scale in order to create a summary measure of ideology that would be more reliable than any of the individual indicators alone. The same was done for the five key transparency indicators measured in the audit: transparency about a site's mission and funding sources, the site's accessibility, and the transparency of the site's primary funders about their mission and their own funding sources. A third scale was created to measure the various indicators of a site's productivity, including the size of a site's editorial/reporting staff, its volume of news articles per week, and the amount of blog or opinion content produced by the site in a typical week. The sites were given scores on the three 0-100 point rating scales in the following ways (a brief code sketch follows the component lists below).
Ideology scale
  Story theme
  Target of exposes
  Range of viewpoints

In addition to a content analysis, researchers evaluated the sites on transparency and productivity. Each of those items consisted of several related variables that were scaled together as follows.

Transparency scale
  Transparency
    Mission statement
    Primary funder(s) mission statement
    Site availability
  Transparency of funding
    Site's funding transparency
    Primary underwriter's funding transparency

Productivity scale
  Stories per week
  Blogging frequency
  Staffing size
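To make the scoring concrete, here is a minimal sketch of how the weighted 0-100 scale scores could be computed. The weights match those described in the next paragraph, but the function and component names are illustrative, and the three equal-weight transparency components are labeled generically.

```python
# Illustrative weights, as described in the text below: ideology
# (45/45/10), transparency (three equal components), productivity
# (30/20/50). Component names are hypothetical labels, not the
# study's actual variable names.
WEIGHTS = {
    "ideology": {
        "range_of_viewpoints": 0.45,
        "story_theme": 0.45,
        "target_of_exposes": 0.10,
    },
    "transparency": {
        "component_1": 1 / 3,
        "component_2": 1 / 3,
        "component_3": 1 / 3,
    },
    "productivity": {
        "stories_per_week": 0.30,
        "blogging_frequency": 0.20,
        "staffing_size": 0.50,
    },
}

def scale_score(scale, components):
    """Combine 0-100 component scores into a weighted 0-100 scale score."""
    return sum(w * components[name] for name, w in WEIGHTS[scale].items())

# Example: a site scoring 80/50/60 on the productivity components
# receives 0.30*80 + 0.20*50 + 0.50*60 = 64 on the productivity scale.
print(scale_score("productivity", {
    "stories_per_week": 80,
    "blogging_frequency": 50,
    "staffing_size": 60,
}))
```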
The various components of each of the three scales were given weights summing to 100% per scale. For the ideology scale, the range of viewpoints and story theme components are worth 45% each, while the target of exposes component is worth 10%. For the transparency scale, each of the three components has equal weight (33.3%). For the productivity scale, the stories per week component is worth 30% in the final tally, the blogging frequency component is worth 20%, and the staffing size component is worth 50%.

Presentation of Data in Interactive Tool

Ideological scores were presented only for outlets and groups whose number of stories studied exceeded 20. Transparency and productivity scores were presented for all outlets. Outlets and groups were labeled as highly ideological, somewhat ideological or non-ideological, depending on their scale scores. The placement of a site or group into one of these categories was determined by evenly dividing the scores into three equal ranges between the lowest ideology score achieved (0) and the highest score achieved (70). The ranges were as follows:

0-23: non-ideological
24-46: somewhat ideological
47-70: highly ideological

Likewise, outlets and groups were labeled as transparent, somewhat transparent or slightly transparent, depending on their scale scores. Once again, the placement of a site or group into one of these categories was determined by evenly dividing the scores into three equal ranges between the lowest transparency score achieved (20) and the highest (100). The ranges were as follows:

20-46: slightly transparent
47-73: somewhat transparent
74-100: transparent

Finally, outlets and groups were labeled as having high productivity, medium productivity or low productivity, depending on their scale scores. As with the other measures, the placement of a site or group into one of these categories was determined by evenly dividing the scores into three equal ranges between the lowest productivity score (0) and the highest (100). The ranges were as follows:

0-33: low productivity
34-66: medium productivity
67-100: high productivity
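The even three-way division described above amounts to cutting the observed score range into terciles. A minimal sketch, with hypothetical function and argument names:

```python
def label_score(score, low, high, labels):
    """Map a score to one of three labels by dividing the observed
    range [low, high] into three equal ranges, as described above."""
    width = (high - low) / 3
    if score <= low + width:
        return labels[0]
    if score <= low + 2 * width:
        return labels[1]
    return labels[2]

# Example using the ideology bounds from the study (0 and 70):
print(label_score(15, 0, 70,
                  ["non-ideological", "somewhat ideological", "highly ideological"]))
# -> "non-ideological" (15 falls in the 0-23 range)
```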
[1] Because the sum of Watchdog.org stories was somewhat small, an exception was made for this group, and every Watchdog.org story that was captured was coded (this was done to ensure valid statistical analysis). This brought the sample of stories analyzed in the study up to 1,203.

[2] Because many sites contained small samples of stories, statistical analysis could not be conducted on their content alone. Therefore, the Watchdog.org and American Independent News Network families of sites, plus the Statehouse News Online consortium, were each analyzed as a unit (as opposed to deriving statistics from the individual sites that make up those groups). The rest of the sites in the study were analyzed individually.