
Survey-based Reports of News Viewership: Don’t You Believe ‘Em

- April 24, 2009

It’s often interesting when people claim to have done something they really didn’t do, or deny having done something they really did. Many social scientists have built careers around trying to account for these very phenomena. Such “overreports” and “underreports” are, however, the bane of another group of researchers’ existence. I’m speaking here, of course, of survey researchers.

In political science the classic example is the overreport of voter turnout. Survey-based estimates of voter turnout routinely run several points above the “real” percentage and – just to make things worse – the tendency to overreport can vary from one part of the electorate to another. So it’s not only the dependent variable in survey-based models of voter turnout that’s likely to be “off”; the relationships between the dependent variable and various predictors of turnout may be off as well.
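
To see how that second problem works, here’s a toy simulation (all rates invented for illustration, not drawn from any actual survey): suppose turnout really does differ by education, but better-educated nonvoters are also more likely to claim they voted. The gap you’d estimate from self-reports comes out inflated.

```python
# Toy simulation: differential overreporting inflates the estimated
# relationship between education and turnout. All rates are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
educated = rng.random(n) < 0.5  # half the sample is "high education"

# True turnout: 70% among the educated, 50% among everyone else.
true_vote = rng.random(n) < np.where(educated, 0.70, 0.50)

# Overreporting: 30% of educated nonvoters claim to have voted,
# versus 10% of less-educated nonvoters (again, invented rates).
claims_anyway = rng.random(n) < np.where(educated, 0.30, 0.10)
reported_vote = true_vote | (~true_vote & claims_anyway)

def education_gap(vote):
    return vote[educated].mean() - vote[~educated].mean()

print(f"true turnout gap:     {education_gap(true_vote):.3f}")      # ~0.20
print(f"reported turnout gap: {education_gap(reported_vote):.3f}")  # ~0.24
```

Both problems show up at once: reported turnout levels are too high, and the apparent education “effect” is overstated.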

If survey respondents can’t be taken at their word, then researchers either have to toss survey-based findings out the window or, short of that, find some acceptable correctives or complicated work-arounds (e.g., triangulation with other measures or more complex model-fitting). There’s only so much that researchers can do about any of this. For a long time, the turnout overreport was sort of a dirty little secret. Now it’s so well known that researchers have simply got to deal with it one way or another.
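
To make “acceptable correctives” a little more concrete: if a validation study gave you the rate at which actual nonvoters falsely claim to have voted, and underreporting were negligible (both assumptions I’m inventing here for illustration), a simple misclassification correction would back out the true rate. A sketch:

```python
# A minimal misclassification correction, assuming a known overreport rate f
# among actual nonvoters and negligible underreporting (both assumptions):
#   reported = true + (1 - true) * f   =>   true = (reported - f) / (1 - f)

def corrected_turnout(reported_rate: float, overreport_rate: float) -> float:
    """Back out true turnout from self-reports, given the rate at which
    nonvoters falsely claim to have voted."""
    return (reported_rate - overreport_rate) / (1 - overreport_rate)

# E.g., 73% reported turnout and a 25% overreport rate among nonvoters:
print(f"{corrected_turnout(0.73, 0.25):.2f}")  # -> 0.64
```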

But what if survey-based estimates of some phenomenon of considerable interest weren’t off by just a few percentage points, but were grossly off? On some “sensitive” topics like sexual preference, that could well be the case, and in such instances the development of innovative measurement approaches becomes all the more urgent. But gross inaccuracy isn’t necessarily confined to topics that wouldn’t strike most researchers as highly sensitive.

And that observation brings me to the point (finally!).

“How many days in the past week did you watch the national network news on TV?” seems like a pretty darned innocuous question. Over the years, researchers interested in the effects of news exposure on attitudes and behaviors of various sorts have relied heavily on variants of that question. They often concede that, yes, people might “remember” having watched Katie Couric when they had really watched Vanna White, but the damage is probably pretty minimal. And based on that assumption, they go ahead and use the survey-based estimate.

In a study reported in the current issue of Public Opinion Quarterly, Markus Prior says the damage isn’t minimal at all: Self-reports of exposure to the evening news shows are grossly off.

That conclusion is based on Prior’s comparison between self-report data from the 2000 National Annenberg Election Survey (NAES), on the one hand, and ratings of news audiences based on Nielsen’s “people meters,” on the other. (In each of 5,000 Nielsen households, every TV set was attached to a meter that household members used to indicate the beginning and end of their viewing.)

Prior’s basic finding:

> The survey estimates vastly overstate the size of the network news audience. According to Nielsen, between 30 and 35 million people watched the nightly news on an average weekday. Based on NAES self-reports, that number is between 85 and 110 million for most of the year. In other words, the NAES-based estimate is somewhere around two-and-a-half to three-and-a-half times higher than the Nielsen-based estimate.

Oof.
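
For what it’s worth, the arithmetic checks out; a quick back-of-the-envelope pass over the ranges Prior quotes:

```python
# Back-of-the-envelope check on the ranges quoted above (millions of viewers).
nielsen = (30, 35)  # Nielsen people-meter estimate
naes = (85, 110)    # NAES self-report estimate

print(f"overreport factor: {naes[0] / nielsen[1]:.1f}x to {naes[1] / nielsen[0]:.1f}x")
# -> overreport factor: 2.4x to 3.7x
```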

But it gets worse: the overreport factor varies from one part of the public to another. For viewers aged 55 or older, for example, it’s about two-to-one; for those in the 18-34 age range, it’s somewhere around six-to-one or even eight-to-one.

The bottom line? In Prior’s words, “Scholars would do well to assess media effects with research designs that do not rely on self-reported exposure at all.” This is going to require considerable ingenuity, but “Nothing … could be as damaging as a research approach that rests entirely on a variable that stubbornly defies validation.” Hear, hear.
