April 23, 2012

How the Media Covered the 2012 Primary Campaign

Methodology

 

About this Study

A number of people at the Pew Research Center’s Project for Excellence in Journalism worked on PEJ’s “Less Horse Race Than 2008: How the Media Covered the 2012 Primary Campaign.” Director Tom Rosenstiel, Associate Director Mark Jurkowitz and Manager of the Weekly News Index Tricia Sartor wrote the report. Tricia Sartor and Senior Researcher Paul Hitlin supervised the creation of the monitors using Crimson Hexagon technology. Researchers Kevin Caldwell and Nancy Vogt developed and trained the computer coding monitors. Katarina Matsa, Steve Adams, Laura Santhanam, Monica Anderson, Heather Brown, Jeff Beattie, and Sovini Tan coded and analyzed the content data. Dana Page handles the communications for the project.

Methodology

The report issued by the Pew Research Center’s Project for Excellence in Journalism, “Less Horse Race Than 2008: How the Media Covered the 2012 Primary Campaign,” uses content analysis data from several sources.

Data regarding the quantity and frame of coverage in the mainstream press is derived from the Project for Excellence in Journalism’s in-house coding operation. (Click here for details on how that project, also known as PEJ’s News Coverage Index, is conducted.)

To arrive at the results regarding the tone of coverage, PEJ employed a combination of traditional media research methods, based on long-standing rules regarding content analysis, along with computer coding software developed by Crimson Hexagon. That software is able to analyze the textual content from millions of web-based articles from news sites. Crimson Hexagon (CH) classifies online content by identifying statistical patterns in words.

Quantity of Candidate Attention in the Mainstream Press

During PEJ’s weekly coding for the News Coverage Index, an examination of almost 1,000 news stories every week, human coders determine which stories are focused primarily on the 2012 campaign. A story is considered a campaign story if at least 50% of the time or space allotted to that story is about the campaign or any of the Republican candidates. For an in-depth methodology regarding PEJ’s News Coverage Index, click here.

During the same process, coders identify stories where each of the candidates is a "significant newsmaker" in the story. To be considered a "significant newsmaker," a person must appear in at least 25% of the story. A story can have multiple significant newsmakers.

To determine each candidate’s share of campaign coverage as a percentage, PEJ divides the number of stories in which that candidate is a significant newsmaker by the total number of campaign stories.
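The calculation above can be sketched in a few lines. This is an illustrative example, not PEJ's actual code; the candidate names and tallies are hypothetical.

```python
# Illustrative sketch: a candidate's coverage share is the number of
# campaign stories in which he or she is a significant newsmaker,
# divided by the total number of campaign stories.

def coverage_share(newsmaker_story_counts, total_campaign_stories):
    """Return each candidate's coverage as a percent of all campaign stories."""
    return {
        candidate: 100 * count / total_campaign_stories
        for candidate, count in newsmaker_story_counts.items()
    }

# Hypothetical weekly tallies:
shares = coverage_share({"Romney": 120, "Santorum": 60}, 200)
# → {"Romney": 60.0, "Santorum": 30.0}
```

Because a story can have several significant newsmakers, the percentages need not sum to 100.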

Frame of Campaign Coverage

For the data regarding frames or subtopics of campaign coverage, PEJ again used data derived from PEJ’s weekly coding for the News Coverage Index.

Stories determined to be about the campaign, as identified using the process described above, were then coded for campaign frame. If a story contained two or more frames, it was assigned the frame given the most attention in the story (in seconds or words).
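The frame-assignment rule reduces to picking the frame with the largest share of the story. A minimal sketch, with hypothetical frame names and timings:

```python
# Illustrative sketch: when a story contains multiple frames, assign the
# single frame that received the most time (or space) in that story.

def assign_frame(frame_attention):
    """Return the frame with the most seconds (or words) devoted to it."""
    return max(frame_attention, key=frame_attention.get)

frame = assign_frame({"horse race": 45, "policy": 30, "personal issues": 10})
# → "horse race"
```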

This data represents campaign coverage as a percent of ‘newshole’ – the time or space given to news content. The results are determined as a percentage of the overall campaign newshole in terms of seconds or words.

Tone of Coverage

For the data regarding tone of coverage, PEJ employed a combination of traditional media research methods, based on long-standing rules regarding content analysis, along with computer coding software developed by Crimson Hexagon. This report is based on an examination of millions of web-based articles related to the 2012 primary campaign.

Crimson Hexagon is a software platform that identifies statistical patterns in words used in online texts. Researchers enter key terms using Boolean search logic so the software can identify relevant material to analyze. PEJ draws its analysis samples from more than 11,500 news sites. Then a researcher trains the software to classify documents using examples from those collected posts. Finally, the software classifies the rest of the online content according to the patterns derived during the training.

According to Crimson Hexagon: "Our technology analyzes the entire social internet…by identifying statistical patterns in the words used to express opinions on different topics."  Information on the tool itself can be found at http://www.crimsonhexagon.com/, and in-depth methodologies can be found at http://www.crimsonhexagon.com/products/whitepapers/. You can also see a fuller methodology for the Crimson Hexagon analysis here.

Universe  

Crimson Hexagon software examines online content provided by RSS feeds of millions of news outlets from the U.S. and around the world. This provides researchers with analysis of a much wider pool of content than conventional human coding can provide. Specifically, this report is based on an examination of millions of web-based articles related to the 2012 primary campaign from more than 11,500 news sites.  CH maintains a database of all texts available so content can be investigated retroactively.

Although the software collects and analyzes only online content, the database includes many news sites produced by television and radio outlets. Most stations do not offer exact transcripts of their broadcast content on their sites and RSS feeds, but those sites often include text stories that are very similar to the reports that aired. For example, even though the television programs from Fox News are not in the sample directly, content from Fox News is present through the stories published on FoxNews.com.

Monitor Creation and Training

Each individual study or query related to a set of variables is referred to as a "monitor."

The process of creating a new monitor consists of four steps.

First, PEJ researchers decide what timeframe and universe of content to examine. PEJ only includes English-language content.

Second, the researchers enter key terms using Boolean search logic so the software can identify the universe of posts to analyze.

Next, researchers define categories appropriate to the parameters of the study. For a tone monitor, there would be four categories: positive, neutral, negative, and irrelevant for posts that are off-topic.

Fourth, researchers "train" the CH platform to analyze content according to specific parameters they want to study. The PEJ researchers in this role have gone through in-depth training at two different levels. They are professional content analysts fully versed in PEJ’s existing content analysis operation and methodology. They then undergo specific training on the CH platform including multiple rounds of reliability testing.

The monitor training itself is done with a random selection of posts collected by the technology. One at a time, the software displays posts and a human coder determines which category each example best fits into. In categorizing the content, PEJ staff follows coding rules created over the many years that PEJ has been content analyzing the news media. If an example does not fit easily into a category, that specific post is skipped. The goal of this training is to feed the software with clear examples for every category.

For each new monitor, human coders categorize at least 250 distinct posts. Typically, each individual category includes 20 or more posts before the training is complete. To validate the training, PEJ has conducted numerous intercoder reliability tests (see below) and the training of every monitor is examined by a second coder in order to discover errors.
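The training thresholds described above (at least 250 categorized posts overall, and typically 20 or more per category) can be expressed as a simple completeness check. The thresholds come from the text; the function itself is a hypothetical sketch, not part of the CH platform.

```python
# Illustrative sketch: check whether a monitor's training sample meets
# the minimums described in the text (>= 250 posts total, >= 20 per category).

def training_complete(category_counts, min_total=250, min_per_category=20):
    total = sum(category_counts.values())
    return total >= min_total and all(
        n >= min_per_category for n in category_counts.values()
    )

ready = training_complete(
    {"positive": 70, "neutral": 90, "negative": 60, "irrelevant": 40}
)
# → True (260 posts total, and every category has at least 20)
```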

The training process consists of researchers showing the algorithm stories in their entirety that are unambiguous in tone. Once the training is complete, the algorithm analyzes content at the assertion level, to ensure that the meaning is similarly unambiguous. This makes it possible to analyze and proportion content that contains assertions of differing tone. This classification is done by applying statistical word patterns derived from posts categorized by human coders during the training process.  

The monitors are then reviewed by a second coder to ensure there is agreement. Any questionable posts are removed from the sample.

Ongoing Monitors

In the analysis of campaign coverage, PEJ uses CH to study a given period of time and then expands the monitor to cover additional time going forward. To accomplish this, researchers first create a monitor for the original timeframe according to the method described above.

Because the tenor and content of online conversation can change over time, additional training is necessary when the timeframe gets extended. Since the specific conversation about candidates evolves all the time, the CH monitor must be trained to understand how newer posts fit into the larger categories.

First, each week researchers remove any documents that are more than three weeks old. For example, for the monitor for the week of February 13-19, 2012, there will be no documents from before January 30. This ensures that older storylines no longer playing in the news cycle are removed and that the algorithm works with only the newest material.
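The rolling window works out to the monitor week plus the two preceding weeks. A minimal sketch of that date filter, using the example dates from the text (the function and document structure are hypothetical):

```python
# Illustrative sketch of the rolling three-week training window: for a
# monitor covering the week starting February 13, 2012, the cutoff is
# January 30, 2012 (two weeks before the week's start), so the window
# spans the monitor week plus the two prior weeks.

from datetime import date, timedelta

def trim_training_set(documents, week_start):
    """Keep only documents dated on or after the start of the three-week window."""
    cutoff = week_start - timedelta(weeks=2)
    return [doc for doc in documents if doc["date"] >= cutoff]

docs = [{"date": date(2012, 1, 25)}, {"date": date(2012, 2, 2)}]
kept = trim_training_set(docs, date(2012, 2, 13))
# The January 25 document falls before the January 30 cutoff and is dropped.
```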

Second, each week trainers add more stories to the training sample to ensure that the changes in the storyline are accurately reflected in the algorithm. PEJ researchers add, at a minimum, 10 new training documents to each category. This results in many categories receiving much more than the 10 new documents. On average, researchers will add roughly 60 new training documents each week.

How the Algorithm Works

To understand how the software recognizes and uses patterns of words to interpret texts, consider a simplified example. Imagine a study examining coverage regarding the death of Osama bin Laden that utilizes categories such as “political ramifications,” “details of raid,” and “international reaction.” As a result of the example stories categorized by a human coder during the training, the CH monitor might recognize that portions of a story with the words "Obama," "poll" and "increase" near each other are likely about the political ramifications. However, a section that includes the words "Obama," "compound" and "Navy" is likely to be about the details of the raid itself.

Unlike most human coding, CH monitors do not measure each story as a unit, but examine the entire discussion in the aggregate. To do that, the algorithm breaks up all relevant texts into subsections. Rather than treating each story, paragraph, sentence, or word as the unit, CH treats the "assertion" as the unit of measurement, and posts are divided up accordingly by the computer algorithm. If 40% of a story fits into one category and 60% fits into another, the software will divide the text accordingly. Consequently, the results are not expressed in percent of newshole or percent of stories. Instead, the results are the percent of assertions out of the entire body of stories identified by the original Boolean search terms. We refer to the entire collection of assertions as the "conversation."
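Once every assertion carries a category label, the reported figures are simply each category's share of all on-topic assertions. A hedged sketch with hypothetical labels (the actual CH classification step is statistical and far more involved):

```python
# Illustrative sketch: results are expressed as each tone category's
# percent of all relevant assertions in the "conversation," with
# off-topic ("irrelevant") assertions excluded.

from collections import Counter

def conversation_shares(assertion_labels):
    """Percent of assertions per tone category, excluding irrelevant ones."""
    relevant = [label for label in assertion_labels if label != "irrelevant"]
    counts = Counter(relevant)
    return {label: 100 * n / len(relevant) for label, n in counts.items()}

shares = conversation_shares(
    ["positive", "negative", "negative", "neutral", "irrelevant"]
)
# → {"positive": 25.0, "negative": 50.0, "neutral": 25.0}
```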

Testing and Validity

Extensive testing by Crimson Hexagon has demonstrated that the tool is 97% reliable; that is, in 97% of cases analyzed, the technology’s coding has been shown to match human coding. PEJ spent more than 12 months testing CH, and its own tests comparing coding by humans and by the software came up with similar results.

In addition to validity tests of the platform itself, PEJ conducted separate examinations of human intercoder reliability to show that the training process for complex concepts is replicable. The first test had five researchers each code the same 30 stories, which resulted in 85% agreement.

A second test had each of the five researchers build their own separate monitors to see how the results compared. This test examined not only coder agreement, but also how the algorithm handles various examinations of the same content when different human trainers are working on the same subject. The five separate monitors produced results within 85% of one another.

Unlike polling data, the results from the CH tool do not have a sampling margin of error since there is no sampling involved. For the algorithmic tool, reliability tested at 97% meets the highest standards of academic rigor.

Data Regarding the Use of the Phrase “Mathematical Inevitability”

For the data regarding how often the phrase “mathematical inevitability” appeared regarding Romney, PEJ conducted a keyword frequency search using the closed captioning on the Snapstream server. Researchers searched for the term “math*,” which returns all variations on that word such as “math,” “mathematical,” “mathematics,” etc.  Researchers then identified only those stories that were about the GOP race and delegate counting. Stories about other topics were excluded. 
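The "math*" wildcard matches any word beginning with "math." A minimal sketch of such a stem search over transcript text (the regex and sample sentence are hypothetical; SnapStream's actual search is a closed-caption keyword tool, not this code):

```python
# Illustrative sketch: a stem search equivalent to the "math*" wildcard,
# matching "math," "mathematical," "mathematics," and so on.

import re

MATH_STEM = re.compile(r"\bmath\w*", re.IGNORECASE)

transcript = (
    "Aides argued the delegate math made his nomination "
    "a mathematical inevitability."
)
hits = MATH_STEM.findall(transcript)
# → ["math", "mathematical"]
```

Stories returned by the search would still be screened by hand, as described above, to keep only those about the GOP race and delegate counting.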

These results come from the same television news shows that are in PEJ’s News Coverage Index – a total of 110 broadcasts each week. This includes daytime and evening cable programs, morning and evening network news broadcasts, and PBS Newshour. For a full list of the TV shows included, click here.