In one of my posts on the value of polls, I quoted from Howard Schuman’s 2008 book, _Method and Meaning in Polls and Surveys_ (Amazon). Having now finished the book, I highly recommend it.
The book feels a bit like a memoir. Drawing on his corpus of research, Schuman reflects on how polls are conducted (the “method”) and how they can be interpreted (the “meaning”).
He explores the limits of thinking of polls as referenda. This chapter is the source of the quote from my earlier post:
bq. The tendency to take too literally single-variable distributions of responses (the “marginals”) is essentially the same as believing that answers come entirely from respondents, forgetting that they are also shaped by the questions we ask.
Schuman discusses a variety of ways to work around this problem, including open-ended questions, proliferating closed-ended questions so as to capture complexities in opinion, designing survey experiments that test, e.g., the effects of different question wordings, and enlisting the participation of opposing sides in a contentious political debate. He describes a survey on the use of animals in medical experiments in which he consulted with both medical professionals and animal rights activists.
Schuman also discusses the benefits and costs of both open-ended and closed questions. Here a point from the first chapter becomes crucial. He writes:
bq. No, it is not usually tiny changes in wording that make marginals so untrustworthy, but several other factors about questions….First, respondents feel enormously constrained to stay within the framework of the survey question. They will almost always use one of the two or three or more alternatives given by the interviewer, rather than offering a substitute of their own, even if a substitute is allowed or encouraged and would have great effect. Similarly, consistent effects on marginals occur with variations in formal aspects of questions (e.g., whether a “don’t know” response is encouraged or discouraged).
In my experience, concern about the reliability of polls almost always fixates on tiny changes in wording. (Indeed, when I teach about the effects of question wording, I focus on these sorts of examples—e.g., “aid to the poor” vs. “welfare,” “allow” vs. “not forbid”). But Schuman gets at a more profound aspect of question wording. He shows clearly that what respondents volunteer in response to an open-ended question will differ dramatically from what they will choose in a closed question. In a 1986 survey, only 1% of respondents volunteered that “the invention of the computer” was the “most important” event or change over the past 50 years, but 30% chose that option when it was given as part of a closed list of options.
A more recent example concerns two polls assessing blame for the arrest of Henry Louis Gates. See this post by Gary Langer of ABC. When people were given an option for “both Gates and Crowley are to blame,” 29% chose it. When not provided this option, only 10% chose it. The further consequence is that people are more likely to blame Crowley (25%) when “both” is not offered than if it is offered (11%).
Schuman also discusses the value of “why” questions, or questions designed to probe the logic and reasoning behind opinions. Take this simple question: “Do you think the United States made a mistake in sending troops to fight in Vietnam?” In two 1971 surveys, Schuman followed this question with another: “Why do you think it was a mistake?” The question provoked a range of responses having to do with the winnability of the war, the number of casualties, the “civil” nature of the conflict, and other themes. Most interesting was the difference between the two surveys, one of the general public and the other of college students. The students were far more likely to express concerns about the number of Vietnamese killed and the morality of U.S. policy. There were also differences within the general public based on sex and race. Such differences signal the value of these questions:
bq. Random probes are especially useful when respondents differ from investigators in educational and cultural terms.
Other topics in the book include:
* Survey artifacts, including some surprising findings with regard to the “Communist reporters” question, as well as Schuman’s famous “three pens experiment” — in which Nicaraguan respondents were more likely to support a particular party when the pen they used to fill out the survey was painted the colors of that party.
* The role of attitude centrality and the connection between attitudes and behaviors. For example, in a 1978 poll, Schuman found that less than 5% of gun control proponents who said the issue was their “most important” issue actually wrote a letter, donated money, or took some other action. Among gun control opponents, the comparable fraction was about 55%.
Reading Schuman’s book should locate readers at an ideal point on the continuum between what he calls “survey fundamentalism” and “survey cynicism.” Among survey providers, the former is often the risk. So many of my complaints about polls arise because the survey organization treats its most recent results as sacrosanct. But among the general public, the latter is far more common — hence my posts in response to Conor Clarke (e.g., here). Schuman’s perspective strikes exactly the right balance, showing us not just what, but how, we can learn from survey research.