Tag Archives: bias ratings

Mediate Metrics Update

My sincere apologies to those who have been following my work, but I have come to the conclusion that I must suspend my efforts to measure political bias in the media, at least for the time being. The demands of other life priorities, coupled with the challenges of getting the system to work to my satisfaction, have made this decision necessary.

Despite this unfortunate turn, the effort was highly educational and afforded me certain perspectives on political news bias — both in how it is delivered and in how it is received — that I will share with readers over the coming days, weeks, and months. Having devoted over 60 hours a week to this task for 6 months, I could not help but gain a few insights along the way.

Perhaps most interesting (and amusing) was the reaction I received from the blogosphere when my efforts came to light. I have often commented that so-called media “watchdog” groups are all about watching the other dogs, and therefore lose their value for those who simply want a way of handicapping the political information they gather. But the most engaged viewers ARE partisan, and the feedback I received from them suggested that they were not interested in an objective bias metric. This phenomenon parallels the media construct of the day; “slanted” news outlets are far more popular than those which tend towards the middle, particularly in cable news.

Simply put, partisan viewers tend to be engaged and participate like sports fans at a pep rally.

Not surprisingly, some media people aggressively challenged the fundamental value of measuring news bias at all. My favorite comment came from a British journalist, who starkly said, “I’m not so into the whole impartial journalism ideal. My ideal is fealty to the truth, not to balance.” When I first read that comment, I pictured a courtroom in which a lawyer imperiously states, “I don’t have facts or witnesses, but I am uniquely blessed to know the absolute TRUTH!”

Short trial.

In fairness to that commenter, my view of media objectivity is not — nor has it ever been — robotic commentators stating cold facts without passion or perspective. Rather, it is a healthy balance of thoughtful, engaging analysis that fairly presents BOTH sides of key political issues. That, and the fact that I’m an early riser, is probably why I am a fan of MSNBC’s “Morning Joe.” Viewpoints are intelligently and passionately delivered on both sides of any political topic, although the format equally exposes them to raging partisan criticism (especially when Joe Scarborough takes issue with his own). Still, for an independent like me, it’s a great way to hear a passionate, 2-sided discourse and form my own opinion, discounting for MSNBC’s over-arching liberal bias, of course.

One conclusion I could not help but come to is that those most passionate and engaged about their political views want to be affirmed by the media, not informed. Of course, those folks were not the market segment I was trying to reach, but they were the most vocal. The challenge for any media bias rating service like the one I had envisioned was reaching the next tier — those who are going about their busy daily lives, and simply grazing the news for political insights. As I have noted elsewhere, I cannot tell you how many times I have had conversations with uninitiated viewers who proudly state that, “The only news program I watch is the O’Reilly Factor … or Hardball …,” etc.

If such low engagement viewers and voters are acquiring their political insights this way … or from political news sound bites that resonate throughout our society at the speed of light … or from the deluge of Super-Pac ads sponsored by some seemingly high-minded “citizens” group …

… then we all have cause for concern.


Mediate Metrics FAQ #1

Thanks in large part to coverage initiated by Inside Cable News, interest in our media bias/slant rating system has increased dramatically. Rather than field all questions individually, we’ve decided to post some of the most popular ones below:

How does one measure “bias” in the media without introducing bias into the system?

We were diligent in trying to maintain objectivity by adhering to very strict social science/text analytics guidelines, working with a partner who is very experienced in this area, and engaging multiple “coders” for the sake of system integrity. After many months of incrementally refining the system — and waiting until we achieved high levels of inter-coder correlation — we released our version 1.0 classifier, and have continued to refine it every day since. Systems like ours must be constantly refined to adapt to the changing political rhetoric of the day. Fortunately, our platform is designed to do just that.

Text classification systems use Natural Language Processing elements — basically, a progression of statistical correlation techniques — to mimic the results of expert human coders. That being the case, the human coding process is key, since that is where bias can most readily be introduced. Some of the provisions we included to minimize coder bias include:

  • Defining VERY strict rules for identifying transcript statements which can be coded as either “Favoring Democrats/Critical of Republicans” or “Favoring Republicans/Critical of Democrats.” For example, the experts can only code for slant if the explicit terms or specific proxy labels for Democrats or Republicans are contained in the text.
  • Randomizing transcript statements for the human coding process so that “slant inertia” is drastically reduced. Even expert coders tend to bring outside context into their evaluations, especially when reading a narrative which has a repetitive theme. Randomizing statements helps the “man” component of this man-machine partnership to be more clinical, and enhances objectivity.
  • Regular adjudication sessions, in which the team members review their mismatches and recommend rule refinements to improve coding clarity. Having done this innumerable times, and operating under the proviso of, “When in doubt, code NEUTRAL,” I can tell you that bias is controlled rather effectively this way.
  • Partitioning statements related to the Republican Presidential primaries. This was critical to making the ratings fair and reasonable. News coverage about the Republican primaries is decidedly negative, and is often about Republican candidates bashing other Republican candidates, while we specifically target inter-party comparisons. Once again, we have VERY strict guidelines for how we treat this situation.
  • Following slant assessment templates which involve identifying the speaker, determining the object of his/her discussion, assessing inter-party comparison(s), uncovering embedded judgments, and noting factual references that clearly reflect positively-or-negatively towards a particular party.
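To make the randomization provision concrete, the idea reduces to a seeded shuffle. The sketch below is purely illustrative — the statement IDs and function name are my own invention, not our actual tooling:

```python
import random

def randomize_for_coding(statements, seed=2012):
    """Return transcript statements in random order, so that coders
    cannot carry "slant inertia" from one statement to the next.
    A fixed seed keeps the ordering reproducible across coders."""
    shuffled = list(statements)  # leave the source transcript untouched
    random.Random(seed).shuffle(shuffled)
    return shuffled

transcript = ["stmt-01", "stmt-02", "stmt-03", "stmt-04", "stmt-05"]
for_coders = randomize_for_coding(transcript)
# Same statements, different order; every coder sees the same shuffle.
```

Sharing one seed means every coder works through the same scrambled order, which keeps adjudication sessions straightforward.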

Hopefully, you get the idea. We’ve gone to great pains to make our ratings objective, but I am not so bold as to represent that it is perfect. Even the best text analytics systems have limitations. This one is no exception.
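For readers curious how human-coded statements can drive a statistical classifier, here is a deliberately tiny illustration — a bag-of-words Naive Bayes over a handful of hypothetical coded statements. It is a toy, not our production system, and every statement and label below is made up for the example:

```python
import math
from collections import Counter

LABELS = ["FAVOR_DEM", "FAVOR_REP", "NEUTRAL"]

# Hypothetical coder output: (statement, label) pairs.
coded = [
    ("the republican plan protects jobs", "FAVOR_REP"),
    ("democrats blocked a common sense bill", "FAVOR_REP"),
    ("the democratic proposal helps working families", "FAVOR_DEM"),
    ("republicans favor the rich", "FAVOR_DEM"),
    ("the senate meets again on tuesday", "NEUTRAL"),
    ("the committee hearing was rescheduled", "NEUTRAL"),
]

# "Training" is just counting: per-label word counts plus label priors.
word_counts = {lab: Counter() for lab in LABELS}
label_counts = Counter()
for text, lab in coded:
    label_counts[lab] += 1
    word_counts[lab].update(text.split())
vocab = {w for c in word_counts.values() for w in c}

def classify(text):
    """Return the label with the highest log-probability, using
    add-one smoothing so unseen words do not zero out a class."""
    best_lab, best_score = None, -math.inf
    for lab in LABELS:
        score = math.log(label_counts[lab] / len(coded))
        total = sum(word_counts[lab].values())
        for w in text.split():
            score += math.log((word_counts[lab][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best_lab, best_score = lab, score
    return best_lab

print(classify("republicans favor the rich"))  # FAVOR_DEM
```

The point is only that the machine side mimics whatever the human coders taught it — which is exactly why the coding provisions above matter so much.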

What is the business model for this service?

Beyond the high-level slant metrics we have initially provided free-of-charge, there is additional business value to be reaped from:

  • Networks, news analysts, and interest groups, through secondary slant studies on specific topics such as health care, labor/union issues, military spending, right-to-life, tax reform, and regulatory measures;
  • Watchdog agencies, via insight reports on the political views of prominent news anchors, correspondents, and contributors;
  • Various political groups desiring a deeper understanding of each network’s Republican Primary coverage and slant;
  • Commercial, governmental, and educational bodies desiring to analyze the resonance of TV news slant through social media platforms like Twitter, Facebook, and the blogosphere;
  • Media outlets that want to certify that their content meets specific political/informational criteria, for the purpose of differentiation.

Say the President has a bad news day…something bad happens…bad job numbers, court case goes against the administration, scandal in the West Wing…whatever. How does your system handle that scenario?

A bad (or good) day by the President will influence our ratings. Slant ratings effectively “move with the market.” Therefore, our ratings are best viewed as a relative measure. Said another way, you would find that certain networks and programs are more slanted than others during a “bad” news week for Democrats or Republicans, but all will be affected by a dominant political news theme.

How does one evaluate “bias” in content that is, by design, supposed to be opinionated?

From our perspective, Op-Ed news content is absolutely valid, as long as viewers are aware that the content they are watching is indeed that. Frankly, we think that boundary between opinion pieces and straight news is often blurry for the general public. News wonks know the difference intuitively, but we have all experienced instances in which an uninitiated viewer proudly states that, “The only news program I watch is {INSERT YOUR OP-ED PROGRAM OF CHOICE}.” Furthermore, straight news programming often contains a subtle-but-consistent political tilt, despite claims to the contrary.

The fact is that TV news programs, regardless of type, often frame the political discourse of the day, which then translates into voting behavior and government policies that dramatically affect our daily lives. That being the case, don’t you think an objective entity should “watch the watchers” in order to serve the greater good?

That may sound pretentious, but I don’t know how else to say it.

If Mediate Metrics had been through a rigorous process of development, which can take several months of hard work, they’d be telling us about it, because it would be a big step forward. The biggest trouble is that the initial degree of inter-annotator agreement, depending on how you define it and measure it, is likely to be spectacularly low, say around 30%.

Actually, our inter-coder reliability reached a peak of over 80% before the 1.0 classifier was released.

Our system had been in development for many months, and the supporting code book is substantial. Still, there are many different outlets for this service, many of which are not staffed with linguistic/text analysis experts. Knowing that, and in consideration of our limited resources, we did not publicize all of our details, but they are available with certain concessions to confidentiality.
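For those interested in how inter-coder figures like the one above are computed, here is a small illustration. The coder labels are hypothetical; alongside simple percent agreement I show Cohen’s kappa, a common statistic that corrects raw agreement for chance:

```python
from collections import Counter

def percent_agreement(a, b):
    """Fraction of statements on which two coders chose the same label."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Agreement corrected for chance: (po - pe) / (1 - pe)."""
    n = len(a)
    po = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    pe = sum((ca[l] / n) * (cb[l] / n) for l in set(a) | set(b))
    return (po - pe) / (1 - pe)

# Hypothetical labels from two coders over ten statements
# (D = favors Democrats, R = favors Republicans, N = neutral).
coder1 = ["N", "N", "D", "R", "N", "D", "N", "R", "N", "D"]
coder2 = ["N", "N", "D", "R", "N", "N", "N", "R", "N", "D"]
print(percent_agreement(coder1, coder2))  # 0.9
print(round(cohens_kappa(coder1, coder2), 2))  # 0.83
```

Because NEUTRAL dominates real transcripts, chance agreement is high, which is why kappa is usually lower than the raw percentage.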

 


TV News Political Bias Impact Ratings: January 16 – 20, 2012 (Revised)

Much debate has been devoted to assessing whether there is a liberal or conservative media bias (or as we prefer to call it, political news slant) in the U.S. Most tend to focus on the source of the bias, but our view is somewhat different. At Mediate Metrics, we prefer to quantify the amount of slant (bias plus editorial influence), combined with the quantity of slant recipients, to assess the full impact of media bias.

As presented in our November 15 post, total objectivity and fairness in the news is a virtual impossibility. Still, our hypothesis is that networks will be less biased when their reputation is built upon informing viewers and being an objective resource. In contrast, news outlets which rely on affirming the political preferences of their loyal viewers will have a natural tendency to be more slanted.

Building on our previous 2 posts, we have added a Political Slant “Impact” Rating comparison for January 16th to the 20th, depicted in Chart 3 below:

CHART 3: Slant Impact Rating - January 16 to 20

As in our other charts, content favoring the Republican party is represented in red (numerically positive), while that which leans towards the Democratic Party is shown in blue (numerically negative). We have graphically indicated depth-of-coverage, or lack thereof, by lightening those colors. For example, the bars representing NBC, CBS, and to a lesser degree Fox News and ABC, were purposely made lighter to reflect the relatively small transcript coverage for those particular networks. Regardless of coverage, the basic message depicted in Chart 3 is that the slant “foundation,” depicted at the base of each pyramid, is amplified by the number of people viewing the content. Table 3 below shows the numerical analysis supporting the chart:

TABLE 3: Political Slant Impact Measures - January 16 to 20, 2012

The Composite Weekday Slant Ratings in column 2, along with the number of statements classified in column 3, were derived from data previously discussed and shown here on Tuesday, January 31st. Statement coverage and “Confidence Factors” relate directly to the color shades in Chart 3. Most importantly, we have factored in viewership data for the networks and programs under review. This is where things get interesting, given that:

  • The nightly news programs from the major broadcast networks achieve the highest ratings per program by far, but their impact is mitigated by the fact that they are broadcast for only 30 minutes a night;
  • By definition, the 3 top cable networks broadcast a continuous line-up of news shows, each 60 minutes long, representing as much as 360 minutes of programming per network for the nightly time period (5 PM to 11 PM) under consideration.
  • Public service programming, such as the Republican Presidential debates, was purposely omitted from our calculations since it does not reflect the editorial views (slant) of the network or program on which it was broadcast.

Some compelling notions, however preliminary, can be drawn from this analysis. While the aggregate slant of content delivered during this time period appears to favor Democrats (as depicted in light blue in the “Totals” row), the aggregate impact tilts towards the GOP (as shown in the light red cell, same row).

Admittedly, our classifier and database need further refinement, but we think these initial results are rather intriguing. Still, we’d love to know what you think. Don’t hesitate to leave a comment below, or to send one directly to: barry@mediatemetrics.com.


TV News Political Slant Report by Show: 1/16 – 1/20

Building on our previous post, today we are publishing a separate version of our TV news measurement metrics which focuses on the political slant of individual programs aired by the 3 major broadcast networks (ABC, CBS, and NBC) and the top 3 cable news channels (CNN, Fox News, and MSNBC), for shows aired from 5 PM until 11 PM eastern time, Monday through Friday. As highlighted yesterday, our analytical coverage varies by network, program, and date, but our intention is to augment it over time.

CHART 2: Slant Rating by Program - January 16 to 20, 2012

Content favoring the Republican party in Chart 2 is portrayed in red (numerically positive), while content that slants towards the Democratic Party is shown in blue (numerically negative). Those interested in the underpinnings of the Mediate Metrics slant rating system should review our January 31st post, or see our primer on Text Analytics Basics at: http://wp.me/p1MQsU-at.

Since our analytical coverage varies by network, program, and date, so does the associated confidence factor in our slant ratings. The exact amount of coverage per network is shown in Table 2 below, but we have graphically indicated depth-of-coverage by way of color shading in Chart 2. For example, the cones representing The Five, Hannity, and On The Record were purposely made lighter to reflect the relatively small transcript coverage for those particular programs. Low transcript coverage likely accounts for certain results that may seem counter-intuitive; we expect those metrics to adapt with volume and time.

TABLE 2: Slant by Program - January 16 to 20, 2012

As mentioned yesterday, we have partitioned statements about the Republican Presidential primaries, since they tend to be disproportionately negative and often lack inter-party comparison, and have largely excluded them from these slant ratings. Similarly, the Republican Presidential debates and other such dedicated program segments have been omitted in their entirety since they do not reflect the political positions of the networks, programs, or contributors under consideration.

We’ll publish an “impact rating” for the same January 16 – 20 time period tomorrow.


Editorial Selection: Fox and MSNBC

Building on the theme of editorial selection and the news, I decided to once again use my “tag cloud” (most popular words) tool on evening and prime time broadcasts from Fox News and MSNBC on November 14th and 15th. As I highlighted yesterday, media outlets can broadcast but a tiny portion of the available news, so I decided to see what these 2 competitors decided to emphasize.

DISCLAIMER #1: I could not wait to get this out, so I’m sure I will be making additional edits and refinements.

DISCLAIMER #2: Tag clouds are not surgical instruments. That fact, combined with the knowledge that I manually culled words that did not directly relate to specific topics and messaging themes, should tell the reader to view the following with a critical eye … as you should with all interpretative journalism.

Which virtually all political news is.

Disclaimers aside, examining the content selection of Fox and MSNBC is like having box seats at a gun fight. It’s clear that MSNBC is putting Republican Presidential candidates under a microscope, taking pot shots at local Republican candidates whenever possible, and positioning itself as the mouthpiece for the middle class. Similarly, Fox has President Obama and the 2012 election in the cross hairs, featuring topics that cast him or his administration in a negative light, with specific emphasis on job creation (or a lack thereof).

Those are the highlights — or low-lights, depending on your point of view — but there is more information in the clouds if you are willing to stare at them briefly …
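For the curious, the mechanics behind a tag cloud reduce to little more than counting words after culling common ones. A toy sketch follows — the stopword list and sample text are illustrative, and my actual tool works differently:

```python
import re
from collections import Counter

# Hand-culled common words, in the spirit described above; illustrative only.
STOPWORDS = {"the", "a", "and", "of", "to", "that", "is", "in",
             "it", "you", "we", "i", "this", "for", "on", "said"}

def top_tags(transcript_text, n=25):
    """Count word frequencies in a transcript and keep the n most
    common terms after culling stopwords, tag-cloud style."""
    words = re.findall(r"[a-z']+", transcript_text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return counts.most_common(n)

sample = "jobs jobs jobs the president said jobs and the economy economy"
print(top_tags(sample, n=2))  # [('jobs', 4), ('economy', 2)]
```

Note that this sketch does not extract root words the way my tool intermittently does, which is exactly the behavior behind some of the odd groupings discussed below.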

**********************************************************************************************************************

MSNBC “TOP 25” TAG CLOUD:

  • Substantial Republican Primary/Candidate focus, with Herman CAIN (236 occurrences) still drawing the most attention, ROMNEY (82 occurrences) a distant second, and Perry (52 occurrences) in third.
  • Occupy Wall Street is a significant topic, as evidenced by the occurrence of the related tag words MOVEMENT, OCCUPY, and STREET. Why WALL did not make the top 25, I have no idea.
  • SCOTT is in the top 25 primarily due to parallel references to Republican governors Scott Walker (Wisconsin) and Rick Scott (Florida). Similarly, JOHN was also mentioned frequently in relation to Ohio governor John Kasich, but I removed that name because several other JOHNs were intermingled in the word count.
  • Frequent references to AMERICAN (and AMERICANS by default, since my tag cloud tool intermittently extracts root words in parallel) and the middle CLASS seem to represent a positioning theme for MSNBC.
  • JUDGE generally shows up in 2 different contexts: 1.) The judge who let Penn State coach Sandusky out on reduced bail and; 2.) The impartiality of Justices Scalia and Thomas in relation to the Supreme Court case on health care.
  • CARE shows up in several different contexts, again related to the tag cloud tool’s penchant to extract root words — ObamaCARE, HealthCARE, MediCARE, and are “they” sCAREd?

FOX “TOP 25” TAG CLOUD:

  • No references to the Republican Primary candidates by name in the Top 25 tag words. In contrast, PRESIDENT (65 occurrences) and OBAMA (42 occurrences) are the top 2 most popular tag words in the cloud. When viewed in relation to the MSNBC tag cloud, one cannot help but conclude that negative politics extends to these 2 networks.
  • Similar, but not exactly the same, thematic positioning around AMERICA, but not so much on CLASS.
  • BOOK was an area of focus mostly because of controversies surrounding Bill O’Reilly’s new book (“Killing Lincoln”), and Peter Schweizer’s book about alleged congressional insider trading.
  • A greater focus on activities in the SUPER COMMITTEE, with questions about whether a satisfactory DEAL can be made.
  • DEAL was also used in the context of favorable (and ethically questionable) deals made on IPOs and land, leveraging the insider trading immunity afforded to congressmen.
  • CONGRESS was primarily used in 2 contexts: 1.) There were several CONGRESS persons on the prime time Fox News programs I analyzed, and; 2.) Numerous references were made along the lines of our “do-nothing CONGRESS …”
  • ELECTION appeared primarily as part of 2 topics: 1.) Forward-looking statements related to the 2012 Presidential election, and; 2.) The fact that negative news related to Solyndra was allegedly throttled by administration officials.
  • FLORIDA made the top 25 based on Florida government officials on the shows whose transcripts I analyzed.
  • JOB and JOBS are in the top group because of a focus on the subject of job creation.
  • LEGAL is attached to either the constitutional rights that should or should not be afforded terrorists, as well as related to immigration issues.
  • The term SPEAKER rose to the top because of references and sound bites from House Speaker John Boehner, as well as an interview with former SPEAKER of the House Newt Gingrich.

**********************************************************************************************************************

If you would like to know more about the specific details of my process or the specific programs I included in this analysis, just email me at: barry@mediatemetrics.com.


Who’s News? YOU Decide.

The more I study media bias, the more I realize that TV coverage flows (and often overflows) in certain directions because viewers vote with their eyeballs.

The blogosphere is crackling today with reports on the CBS internal memo which directed their debate moderators to devote fewer questions to Michele Bachmann. The issue certainly has ignited the fanaterati. Don’t get me wrong; editorial selection bias is a very real phenomenon. Still, a thinking person should consider other possibilities.

So here is one: Perhaps we get a disproportionate amount of coverage on certain issues and people because they drive viewership. Combined with the extensive amount of news capacity that needs to be filled, media outlets are motivated to keep popular stories alive because lots of people are following them. As an unfortunate by-product, reporters and commentators fan the flames over time by digging up all kinds of corner-cases, then sensationalizing them as “New Developments!” And that’s when we enter the realm of the absurd.

Circling back to the issue du jour, giving Michele Bachmann more debate time does not make sense for the network in that context. It’s an inexact science, but it is a network executive’s job to promote viewership … which drives ad revenue … which increases company profits, equity value, and personal paychecks.

It’s tempting to see a conspiracy here, and maybe there is one, but I think it is equally possible that this is just capitalism in action.


Political News: More Commonly Used Media Bias Techniques

Combing through news transcripts for bias indicators provides you with either unique insights or temporary insanity. Despite my questionable mental state, I’ve uncovered some subtler tricks-of-the-news-trade that I’d like to share with my readers.

Value Judgments: By definition, a value judgment is an assessment that reveals more about the values of the person making the assessment than about the reality of what is assessed. Value judgments can be either direct or projected.

Direct value judgments are often preceded with “I,” either explicitly or as understood. Examples are: “I don’t believe that …,” “that won’t work …” Projected value judgments are less obvious, but are used extensively by certain commentators and politicians. Speakers, often wrapping themselves in the flag or casting themselves as the spokesperson for some popular group, stealthily project their personal opinions with statements like, “Americans won’t support …” or “People are not going to …” It doesn’t jump out at you, but the speaker is putting their view in someone else’s mouth.

Loaded Questions and Leading Questions: A program anchor is in a position of power to determine how the news is presented, while viewers, conditioned by years of traditional broadcasting, sit passively, accepting that the commentator is objectively informing and moderating discussions. In the modern era of news programming, that is often not the case. Dialogs are rife with loaded and leading questions.

The popular definition of a loaded question is one which contains a controversial assumption but, for the purposes of semantically evaluating bias, my definition is one that contains indisputable evidence of bias. It gives a strong indication of how an anchor wants his/her respondent to answer. Guidelines for recognizing loaded questions include:

  • Embedded value-judgments by the questioner: “Don’t you think that sounds <odd/wrong/funny/strange>?”
  • Multiple questions within the same statement: “Who would support…?”, “What is the thinking….?”, “Where did they get…?”, “When …?”, “Why …?”
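Those two indicators lend themselves to a simple heuristic flag. The sketch below is illustrative only — the cue phrases are my own shorthand, not the actual coding rules:

```python
# Illustrative value-judgment cues; real coding rules are far stricter.
JUDGMENT_CUES = ["don't you think", "sounds odd", "sounds wrong",
                 "sounds funny", "sounds strange"]

def looks_loaded(statement):
    """Flag a statement as a possible loaded question if it embeds a
    value-judgment cue or stacks multiple questions together."""
    s = statement.lower()
    has_cue = any(cue in s for cue in JUDGMENT_CUES)
    multiple_questions = s.count("?") >= 2
    return has_cue or multiple_questions

print(looks_loaded("Don't you think that sounds strange?"))        # True
print(looks_loaded("Who would support it? What is the thinking?")) # True
print(looks_loaded("What is your response to the report?"))        # False
```

A heuristic like this over-flags and under-flags, of course; in practice a human coder makes the final call.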

Leading questions are usually more subtle, and don’t have the clear indicators of loaded questions. Still, a savvy viewer can generally pick them out instinctively, particularly when considered together with succeeding responses. For the most part, news programs conform to the cardinal rule of litigation: Don’t ask a question if you don’t know how it will be answered. In the information age, commentators are rarely uninformed about the positions of their guests. In fact, most of them are regulars.

Once you are aware of these rhetorical devices, you’ll be surprised how often you will notice them while watching, “The News.”


FOX and MSNBC News: Messages in the Clouds

As a follow-up to yesterday’s blog post, I have color-coded related words in the tag clouds built from recent Fox and MSNBC news transcripts. At first glance, certain words seemed obviously related in terms of the topics and message points referenced. Further scrutiny showed me that some of the tag word connections were weak or non-existent, demonstrating the danger of using tag cloud analysis too liberally. Still, I found common themes and clear distinctions between these popular interpretive news outlets.

My analysis is as follows:

Fox News

  • REPUBLICAN, PRESIDENTIAL, CAMPAIGN, and PRIMARIES – Clearly, the Republican Presidential primaries are a topic of import and interest. As such, they are worthy of extensive coverage by any objective standard.
  • “Clusivity.” In linguistics, clusivity is a distinction between inclusive and exclusive first-person pronouns. My slightly-altered definition includes any pronouns that indicate that one group is favored, and another is viewed with disfavor. It also encompasses “pronoun putdowns” — instances where a person of well-known rank and title (such as “The President”) is referred to simply as HE. In general, I viewed this as creating a subtle form of clusivity. In the Fox aggregate transcript, HE’S occurred 164 times, and often referred to President Obama or some other member of the Democratic Party. THEY’RE appeared as a more general reference to the Democratic party.
  • OBAMA/OBAMA’S/PRESIDENT/WASHINGTON – When not attributed to a direct quote or video clip from the President, these terms were often used in the same context as HE’S or THEY’RE. In this, as well as the clusivity category mentioned above, it was particularly telling when the show’s anchor uses this type of reference.
  • PEOPLE occurred 236 times, and was used in many contexts. As a tag word indication of thematic emphasis, it should probably be removed from the cloud.
  • DON’T (261 occurrences), DOESN’T (67 occurrences), and ISN’T (33 occurrences) – Scanning the transcripts, you see these kinds of “not” words used in 2 distinct contexts: 1.) Distancing – “I don’t know …” or “We don’t believe …” and; 2.) Negative labeling – “They don’t <something accusatory>.” As I reviewed the transcripts, it appeared that “they” and “don’t” often appeared together in the same statement. In fairness, though, that connection is worthy of systematic analysis.

MSNBC

  • Republican campaign coverage was substantial, as indicated by the extensive occurrence of terms like PRESIDENTIAL, CANDIDATE, CAIN, PERRY, and ROMNEY.
  • Evidence of clusivity was more subtle and complex, but present nonetheless. MSNBC’s version was wrapped around the terms AMERICA (150 occurrences), AMERICAN (250 occurrences), AMERICANS (158 occurrences), and to a lesser degree MIDDLE (98 occurrences) and CLASS (123 occurrences). I don’t claim to be a trained linguist, but the visual association that the tag cloud suggests is that MSNBC represents the best interests of: a.) America; b.) middle-class Americans, and; c.) the American way of life.
  • Related to MSNBC’s clusivity messaging, there was an undercurrent of RICH (93 occurrences) being used as a negative. Scanning the transcripts, I repeatedly came across statements like, “Republicans favor the rich” and “the rich get richer.” Similarly, terms like TAX and TAXES (216 and 100 occurrences respectively) also seemed to be part of MSNBC’s clusivity strategy. Like Fox’s use of HE’S and THEY’RE, MSNBC’s thematic position appears to be, “Those Republicans favor the rich, and their tax situation is better than ours.”
  • Like the use of WASHINGTON by Fox, MSNBC’s use of HOUSE (83 occurrences) generally referred to the Republican-led House of Representatives, and was often wielded in a less-than-positive manner.
  • DON’T and DOESN’T were both regularly used for distancing and negative labeling, similar to how they were used by Fox.

Not to beat this horse into glue, but I’m planning to add one more tag word blog post that removes words common to both clouds, and portrays the remaining top 50 terms that are unique to each channel. Like all of these exercises, the output is both subtle and revealing.


Quantifying the Impact of TV News Bias – Example #1

The following example represents my core method of quantifying the impact of media bias, using only program segments from the top 3 cable news networks in this particular example. The underlying “Raw Bias Index” data I am using is in fact quite coarse, so consider this an alpha trial put forth for review and discussion.

Much debate has been devoted to assessing whether there is a liberal or conservative media bias. Qualitatively, a case can be made for both, but quantifying the effective bias is a more complex endeavor.

In my recent studies of television news programming, it occurred to me that while the number of liberal TV outlets seemed greater than that of conservative channels, their “share-of-voice” may still be smaller. The true impact of a particular TV news program can only be determined by considering both bias and reach.

In order to add a viewership variable, I used the Nielsen Cable News Ratings from September 8, first calculating the average rating of the 6 largest cable news networks for the entire day. (Source: TV by the Numbers – Zap2It website. http://tvbythenumbers.zap2it.com/2011/09/09/fox-news-leads-presidential-address-viewing-among-cable-news-ratings-for-thursday-september-8-2011/103155/ )

 NOTE: “P2+”= Viewers over the age of 2.

I then calculated a “Viewership Weighting” factor for each of the post-Presidential address programs from CNN, Fox, and MSNBC that I had previously created a Raw Bias Index for (see Sept. 11 post below), and combined them to create a “Raw Impact Index.”

Needless to say, prime time news is viewed much more extensively than its daytime cousins, hence the large viewership weighting factors. Still, one can readily see in this crude example that viewership, not the number of TV outlets, is key to determining the overall impact of news bias.
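The arithmetic reduces to weighting each program’s Raw Bias Index by its share of average viewership. The sketch below uses placeholder figures — the program names, bias values, and viewer counts are invented for illustration, not the actual Nielsen numbers:

```python
# Hypothetical inputs: each program's Raw Bias Index (positive = leans
# Republican, negative = leans Democratic) and its average viewership.
programs = {
    "Program A": {"raw_bias": -0.40, "viewers": 2_000_000},
    "Program B": {"raw_bias": 0.25, "viewers": 3_000_000},
    "Program C": {"raw_bias": -0.10, "viewers": 500_000},
}

# Average viewership across the programs under review.
avg_viewers = sum(p["viewers"] for p in programs.values()) / len(programs)

for name, p in programs.items():
    p["weight"] = p["viewers"] / avg_viewers       # Viewership Weighting
    p["raw_impact"] = p["raw_bias"] * p["weight"]  # Raw Impact Index
    print(f"{name}: weight {p['weight']:.2f}, impact {p['raw_impact']:+.3f}")
```

Summing the impact column rather than the bias column is what lets a small number of widely watched programs dominate the aggregate.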

******************************************************************************************

PLEASE NOTE that this is but an example, and is not meant in any way to be an accurate-or-comprehensive measure of TV news bias today.

********************************************************************************************

Is this methodology simplistic? You bet. I fully expect critiques from those more experienced in media measurement and proficient with survey science. Regardless, simpler is often better.

As always, I remain open to feedback, and encourage you to leave yours in the comments section.
