
Text Analytics Basics

Text analytics, also known as “text mining,” automates what people (researchers, writers, and all seekers of knowledge through the written word) have been doing for years[i]. Thanks to the power of the computer and advancements in the field of Natural Language Processing (NLP), interested parties can tap into the enormous amount of text-based data that is electronically archived, mining it for analysis and insight. In execution, text analytics involves a progression of linguistic and statistical techniques that identify concepts and patterns. When properly tuned, text analytics systems can efficiently extract meaning and relationships from large volumes of information.

To some degree, one can think of the process of text analytics as the evolution of the simple internet search function we use every day, but with added layers of complexity. Searching and ranking words, or even small phrases, is a relatively simple task. Extracting information from large combinations of words and punctuation — which may include elements of slang, humor, or sarcasm — is significantly more difficult. Still, text mining systems generally employ a number of layered techniques to extract meaningful units of information from unstructured text, including:

  • Word/Phrase Search Frequency Ranking – What words or “n-grams” appear most often.
  • Tokenization – Identification of distinct elements within a text.
  • Stemming – Identifying variants of word bases created by conjugation, case, pluralization, etc.
  • POS (Part of Speech) Tagging – Specifically identifying parts of speech.
  • Lexical Analysis – Reduction and statistical analysis of text and the words and multi-word terms it contains.
  • Syntactic Analysis – Evaluation of sequences of language elements, from words to punctuation, ultimately mapping natural language onto a set of grammatical patterns.
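As a rough illustration of a few of these layers, here is a minimal, pure-Python sketch of tokenization, crude suffix stemming, and n-gram frequency ranking. Production systems use trained NLP toolkits rather than hand-rolled rules like these, so treat this strictly as a toy:

```python
import re
from collections import Counter

def tokenize(text):
    # Tokenization: break raw text into distinct word elements
    return re.findall(r"[a-z']+", text.lower())

def stem(word):
    # Stemming (crude): strip a few common English suffixes to approximate
    # the word base; real stemmers (e.g., Porter) use far richer rule sets
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word

def ngram_frequency(tokens, n=2):
    # Frequency ranking: count how often each n-gram appears
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

tokens = tokenize("Voters vote, and voters vote often.")
stems = [stem(t) for t in tokens]
print(ngram_frequency(stems).most_common(1))  # the most frequent stemmed bigram
```

Note that stemming before counting lets “voters vote” and “voter votes” collapse into the same n-gram, which is exactly the kind of normalization that makes frequency ranking meaningful.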

The purpose of all this NLP processing is to compare those computational nuggets with classifications, or “codings,” that trained experts have assigned to representative text samples. Interpreting language is an exceedingly complex endeavor, and one that computers and software cannot effectively do without being “trained.” As such, text classification systems are designed to compare human codings with the patterns that emerge from computational analysis, and then mimic the expert coders for all future input.
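To make the training-and-mimicry idea concrete, here is a toy sketch of that step, assuming a simple bag-of-words naive Bayes model. The class name, sample sentences, and the “critical”/“favorable” codings below are all hypothetical illustrations, not the actual Mediate Metrics system:

```python
import math
from collections import Counter, defaultdict

class NaiveBayesCoder:
    """Toy classifier that learns to mimic human-coded text samples."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # per-coding word frequencies
        self.coding_counts = Counter()           # how often each coding was assigned

    def train(self, text, coding):
        # Record the expert's coding alongside the words that co-occur with it
        self.coding_counts[coding] += 1
        self.word_counts[coding].update(text.lower().split())

    def classify(self, text):
        # Pick the coding that makes the observed words most probable,
        # with add-one smoothing for words unseen during training
        words = text.lower().split()
        vocab = {w for c in self.word_counts.values() for w in c}
        total = sum(self.coding_counts.values())
        best, best_score = None, float("-inf")
        for coding, count in self.coding_counts.items():
            score = math.log(count / total)
            denom = sum(self.word_counts[coding].values()) + len(vocab)
            for w in words:
                score += math.log((self.word_counts[coding][w] + 1) / denom)
            if score > best_score:
                best, best_score = coding, score
        return best

# Hypothetical expert codings of representative text samples
coder = NaiveBayesCoder()
coder.train("the plan is a disaster and will fail", "critical")
coder.train("a reckless plan that will fail", "critical")
coder.train("the plan is sensible and will work", "favorable")
coder.train("a balanced plan that will work", "favorable")
print(coder.classify("this plan will fail"))  # prints "critical"
```

The essential point survives even in this toy: the software never “understands” the text; it only reproduces the statistical fingerprint of the human coders it was trained on, which is why coder quality dominates system quality.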

As you may expect, the quality of any custom text analysis system is largely determined by the quality of the human coders it is trained on. As such, strict rules must be enforced on the human coders, with the knowledge that software classification systems are very literal (think “Mr. Spock”). Still, once effective coding rules are established that result in discernible patterns, text analysis systems are incredibly fast and consistent. Advanced classification systems, like the one employed by Mediate Metrics, are also adaptive, constantly evolving with the ebb and flow of political issues and rhetoric.


[i] Much of the explanation contained herein was gleaned from Text Analytics Basics, Parts 1 & 2, by Seth Grimes. July 28, 2008. http://www.b-eye-network.com/view/8032.


Political News: More Commonly Used Media Bias Techniques

Combing through news transcripts for bias indicators provides you with either unique insights or temporary insanity. Despite my questionable mental state, I’ve uncovered some subtler tricks-of-the-news-trade that I’d like to share with my readers.

Value Judgments: By definition, a value judgment is an assessment that reveals more about the values of the person making the assessment than about the reality of what is assessed. Value judgments can be either direct or projected.

Direct value judgments are often preceded with “I,” either explicitly or as understood. Examples are: “I don’t believe that …,” “that won’t work …” Projected value judgments are less obvious, but are used extensively by certain commentators and politicians. Speakers, often wrapping themselves in the flag or acting as the spokesperson for some popular group, stealthily project their personal opinions with statements like “Americans won’t support …” or “People are not going to …” It doesn’t jump out at you, but the speaker is putting their view in someone else’s mouth.

Loaded Questions and Leading Questions: A program anchor is in a position of power to determine how the news is presented, while viewers, conditioned by years of viewing, sit passively and accept that the commentator is objectively informing and moderating discussions. In the modern era of news programming, that is often not the case. Dialogs are rife with loaded and leading questions.

The popular definition of a loaded question is one which contains a controversial assumption but, for the purposes of semantically evaluating bias, my definition is one that contains indisputable evidence of bias. It gives a strong indication of how an anchor wants his/her respondent to answer. Guidelines for recognizing loaded questions include:

  • Embedded value judgments by the questioner: “Don’t you think that sounds <odd/wrong/funny/strange>?”
  • Multiple questions within the same statement: “Who would support…?”, “What is the thinking….?”, “Where did they get…?”, “When …?”, “Why …?”

Leading questions are usually more subtle, and don’t have the clear indicators of loaded questions. Still, a savvy viewer can generally pick them out instinctively, particularly when considered together with succeeding responses. For the most part, news programs conform to the cardinal rule of litigation: Don’t ask a question if you don’t know how it will be answered. In the information age, commentators are rarely uninformed about the positions of their guests. In fact, most of them are regulars.

Once you are aware of these rhetorical devices, you’ll be surprised how often you notice them while watching “The News.”


Politics and TV News: Commonly Used Bias Techniques

As I have dutifully trudged through TV news transcripts as part of creating my surveys, I have noticed certain bias techniques (some intuitive, others subtle) that are employed with regularity by popular TV news channels. My focus has been on political analysis segments wherein a news anchor/moderator is joined by one or more contributors positioned as subject matter experts.

The most prevalent techniques are as follows:

  • False Balancing – TV viewers have been conditioned to expect news anchors to conduct interviews with contributing experts who present contrasting points of view. Interestingly, these expert contributors are occasionally unbalanced, and are actually on the same side of the “debated” issue.

A variant on this theme occurs when complementary views are presented by experts from opposite camps. A Democratic congressman may be critical of aspects of “Obama-care” when interviewed side-by-side with a Republican senator whose disapproval applies to other areas. Both are critical, just in different ways.

On the surface, these experts represent groups who are traditionally in opposition, but their opinions are surprisingly aligned in this example. The notion that positions on a particular subject are not known in advance strains credibility. Still, that fact may be lost on passive TV viewers, who believe they have ingested a short-but-complete review of an issue when presented in this format, especially when the contributors are otherwise natural enemies.

Credit should go where credit is due, so I must recognize The Pessimistic Viewer’s September 12 blog (http://comm2302.wordpress.com/) for identifying and labeling this particular bias mechanism.

  • Time Management – This is an intuitively obvious slanting technique; the more time devoted to a particular perspective, the more weight the audience gives it. Timing was initially my primary target for evaluating media bias, since it seemed objective and readily quantifiable. In practice, however, it turned out to be much more difficult, primarily because of technique #3.
  • Flakking – Recording the specific speaking time of any particular contributor in real time is inordinately difficult because of “flakking”: aggressive interruptions of contributors’ statements that conflict with those being favored on the program.
  • Framing & Finishing – Even in the pseudo-debate construct of the popular anchor-plus-expert news format, the moderator has control of how an issue is initially framed (“Is the Gang of 6 Deficit Reduction Plan Bad for America?”) along with the manner in which the segment is closed. Even if the anchor does not personally deliver a closing statement, the last word generally has more impact than others, and the moderator can readily determine who gets it.
  • Anchor Affirmations – Television viewers have been conditioned to expect the news anchor/moderator, while possessing their own informed opinions, to exercise a certain amount of journalistic detachment and fairness. Implicitly, they are the ultimate arbiter.

Regardless of the historical role of the anchor/moderator, in this era of advocate journalism, strong opinions are easily discernible, and readily recognized as such by even the most passive viewer. Still, I often encountered more subtle endorsements which may slip past a viewer’s internal bias filter. Simply having the moderator inject a “Right” or “Yes” as a follow-on comment gives the preceding statement additional weight.

  • Pronoun Putdowns – Similar to moderator affirmations, news anchors can send a subtle-but-unmistakable message by the way they refer to involved parties. Groups holding views that conflict with the discussion leader’s are often referred to as “they” or “them.” Similarly, if a title-bearing politician, such as a Senator, is referred to as “he” or “him,” it comes across as a refusal to recognize rank and status, and conveys an implicit lack of respect.

In closing, some may see these slanting techniques as a normal part of Op-Ed programming. While that is fair criticism to some degree, passive TV viewers may not make a conscious distinction between objective news and editorials. The concept of framing applies here, but in a different context: are these programs framed as Op-Ed segments, or overshadowed by pervasive, embedded marketing messages? “Cable News Network” … “Fox News, Fair and Balanced” … “MSNBC, the Place for Politics” … “The No-Spin Zone.”

I welcome your comments on the matter.


Quantifying the Impact of TV News Bias – Example #1

The following example represents my core method of quantifying the impact of media bias, using program segments from the top 3 cable news networks. The underlying “Raw Bias Index” data I am using is in fact quite coarse, so consider this an alpha trial put forth for review and discussion.

Much debate has been devoted to assessing whether there is a liberal or conservative media bias. Qualitatively, a case can be made for both, but quantifying the effective bias is a more complex endeavor.

In my recent studies of television news programming, it occurred to me that the number of liberal TV outlets seemed greater than the number of conservative channels, but their “share-of-voice” may still be smaller. The true impact of a particular TV news program can only be determined by considering both bias and reach.

In order to add a viewership variable, I used the Nielsen Cable News Ratings from September 8, first calculating the average rating of the 6 largest cable news networks for the entire day. (Source: TV by the Numbers – Zap2It website. http://tvbythenumbers.zap2it.com/2011/09/09/fox-news-leads-presidential-address-viewing-among-cable-news-ratings-for-thursday-september-8-2011/103155/ )

 NOTE: “P2+”= Viewers over the age of 2.

I then calculated a “Viewership Weighting” factor for each of the post-Presidential address programs from CNN, Fox, and MSNBC that I had previously created a Raw Bias Index for (see Sept. 11 post below), and combined them to create a “Raw Impact Index.”

Needless to say, prime time news is viewed much more extensively than its daytime cousins, hence the large viewership weighting factors. Still, one can readily see in this crude example that viewership, not the number of TV outlets, is key to determining the overall impact of news bias.
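To illustrate the arithmetic, here is a short sketch of the method with made-up numbers. The program names, ratings, and Raw Bias Index values below are hypothetical, and I am assuming the “combination” is a simple product of bias and weighting:

```python
# Hypothetical illustration only: the program names, ratings, and Raw Bias
# Index values below are made up, not Nielsen or Mediate Metrics data.
programs = {
    "Program A": {"rating": 3.0, "raw_bias_index": -0.4},
    "Program B": {"rating": 1.0, "raw_bias_index": 0.6},
    "Program C": {"rating": 0.5, "raw_bias_index": 0.2},
}

# Viewership Weighting: each program's rating relative to the average rating
average_rating = sum(p["rating"] for p in programs.values()) / len(programs)

for name, p in programs.items():
    weighting = p["rating"] / average_rating
    # Raw Impact Index: bias scaled by reach (assuming a simple product)
    raw_impact_index = p["raw_bias_index"] * weighting
    print(f"{name}: weighting={weighting:.2f}, impact={raw_impact_index:+.2f}")
```

In this toy data set, the mildly biased but highly rated Program A ends up with the largest impact magnitude, which is the whole point: reach amplifies bias.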

******************************************************************************************

PLEASE NOTE that this is but an example, and is not meant in any way to be an accurate-or-comprehensive measure of TV news bias today.

********************************************************************************************

Is this methodology simplistic? You bet. I fully expect critiques from those more experienced in media measurement and proficient with survey science. Regardless, simpler is often better.

As always, I remain open to feedback, and encourage you to leave yours in the comments section.
