An ABC/Washington Post poll finds that more people support an individual mandate for health insurance when you tell them that the government will help lower-income Americans buy insurance. Kevin Drum writes:
bq. I have an assignment for an ambitious young PhD candidate with some free time on her hands. I’ve seen poll results like this a million times, and when you add some additional detail you always get a certain number of people to flip sides. I’m pretty sure you could quote a couple of lines from Jabberwocky, ask an “in that case” followup question, and get a fair number of people to change their minds. So what I’d like to know is: what’s the average flip rate?
I don’t know of any calculations of a “flip rate.” But I can address Drum’s claim that “when you add some additional detail you always get a certain number of people to flip sides.” A lot depends on what counts as “detail,” and on how large “a certain number” is. On its face, though, I disagree with the statement: sometimes manipulations of survey questions affect responses, and sometimes they do not.
For example, adding “death tax” to a question about the estate tax did not change responses very much at all.
In this article by Adam Berinsky, adding information to survey questions about Iraq — information citing the costs in lives or resources — didn’t seem to affect opinion much. (See Table 4.)
These are cases where survey respondents were randomly assigned to different versions of a question. They are not cases where respondents gave an answer, were then supplied with information, and were then asked the initial question again. But I think the basic point holds. In the book by Howard Schuman that I previously summarized, he also notes that changes in survey questions do not always change the distribution of opinion.
Whether people’s opinions change will depend first on the information provided. Is it new to them? Do they make a connection between the information and the issue? Are they able to counter-argue or rationalize away this information? Second, opinion change will depend on the individuals themselves. People with a larger store of cognitively consistent considerations will have much more “ballast” for their opinions than people who, for the most part, have never thought about the issue and have very little to support any particular opinion.
So those are a couple of factors worth considering. We have a decent grasp of the second factor, thanks largely to theories of persuasion. The first factor is less well-understood. Only with a wide-ranging and verging-on-systematic study of information effects in polls could we begin to calculate anything approximating an average flip rate.
I would appreciate cites to other relevant studies in the comments. I’ve only scratched the surface here.