Transparency in Polling: More on the “Republicans are Crazy” Daily Kos Poll

- February 10, 2010

As one of the “goals of the Monkey Cage”:https://themonkeycage.org/2007/11/why_this_blog.html is to get political scientists and their research involved in broader discussions of contemporary politics, we were very pleased to see that Del Ali of Research 2000 “posted on Daily Kos”:http://www.dailykos.com/storyonly/2010/2/6/833471/-How-a-poll-is-conducted Saturday to explain how they conduct their research. As frequent readers of this blog may recall, Research 2000 conducted the now fairly well known “Republicans are Crazy”:http://www.dailykos.com/storyonly/2010/2/2/832988/-The-2010-Comprehensive-Daily-Kos-Research-2000-Poll-of-Self-Identified-Republicans poll for Daily Kos. And while correlation of course does not indicate causation, we did “raise some concerns”:https://themonkeycage.org/2010/02/revisiting_that_republicans_ar.html about the poll last week on the Monkey Cage in a guest post by “Andrew Therriault”:http://andrewtherriault.com/. After looking at Ali’s response, Andrew and I had the following thoughts:

Overall, Ali’s post is structured as a step-by-step guide to how polls are conducted. Much of the information included is very basic and not specific to Research 2000, but might be useful to anyone who hasn’t worked with surveys in the past.

But more important (for our purposes) is the degree to which the post provides new information which can be used to interpret the original results. Ali does answer one of the questions “asked in our previous Monkey Cage post”:https://themonkeycage.org/2010/02/revisiting_that_republicans_ar.html, which concerns the screening process, or the manner in which respondents were selected to participate in the poll. Research 2000 began their interviews by asking a fairly typical party identification question, and then only proceeded with respondents who answered “Republican” immediately–so, no leaners, as we’d thought.

This is actually a pretty crucial point. If we assume that the extreme 5% of voters on either the left or the right hold some pretty crazy ideas, then the screening decision alone can play a big role in coloring how “crazy” a given sample of Republicans (or Democrats) looks in a poll. To give a simple example, if we somehow restrict the Republicans in our sample to only the farthest-right 10% of voters (e.g., “strong” Republicans), then 50% of our sample of Republicans will look crazy, just on the basis of the 5% of crazies we’ve assumed to be out there. However, if we include as Republicans the farthest-right 40% of voters (strong Republicans, weak Republicans, and Republican leaners), then the 5% of crazies end up making only 12.5% of all Republicans look crazy. So by stopping at the initial probe of “Politically, do you consider yourself to be a Democrat, a Republican, an Independent, or of another party?” and not asking any follow-up questions, the poll was probably setting itself up to get a more conservative sample of Republicans than if it had followed the practice of, for example, the American National Election Study and asked the follow-up “Do you think of yourself as closer to the Republican Party or to the Democratic Party?” Had Research 2000 asked this question and then included Republican leaners in the survey, the poll would probably have contained more “Republicans” with moderate views. (For more on leaners, see this “previous post”:https://themonkeycage.org/2010/01/last_time_on_independents_i_pr.html by John.) To be fair, had the poll instead followed up by asking whether the respondent was a strong or weak Republican and then restricted the sample to strong Republicans, it would likely have excluded even more moderate views, and thus could have produced even more “dramatic” findings. So while it is the case, as one of the commentators on Andrew’s previous post noted, that Kos can choose to survey whatever population they like, it is also the case that the more restrictively you define a political population, the higher the proportion of respondents who will echo political views that look less mainstream.
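
For the arithmetically inclined, here is a minimal sketch of the screening arithmetic above. The 5% “crazy” share and the 10% and 40% screening cutoffs are the hypothetical numbers from our example, not estimates from any actual poll.

```python
def crazy_share_of_sample(crazy_pct, screen_pct):
    """Share of a screened sample that holds 'crazy' views, assuming the
    crazy_pct farthest-right voters all pass any screen that admits the
    screen_pct farthest-right voters."""
    return min(crazy_pct, screen_pct) / screen_pct

# Screen admits only the farthest-right 10% of voters (e.g., "strong" Republicans):
print(crazy_share_of_sample(5, 10))   # 0.5   -> 50% of the sample looks "crazy"

# Screen admits the farthest-right 40% (strong + weak Republicans + leaners):
print(crazy_share_of_sample(5, 40))   # 0.125 -> 12.5% of the sample looks "crazy"
```

Nothing about this arithmetic depends on which party is being screened; the same inflation would occur for a sample of Democrats restricted to the farthest-left slice of voters.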

All of our “other concerns with the poll”:https://themonkeycage.org/2010/02/revisiting_that_republicans_ar.html, however, still remain. Consider first the particular question wordings used: Ali simply restates a few of the questions and asserts that the binary response options (yes/no, favor/oppose) are “straightforward and objective.” He does not comment on whether the choice of wordings may have influenced the results or whether the binary response options oversimplified and distorted the actual distribution of public opinion. This concern is not out of the blue: the American Association for Public Opinion Research (AAPOR) suggests in its “question wording guidelines”:http://www.aapor.org/Question_Wording.htm that balanced questions include midpoints precisely to avoid the problems “we mentioned”:https://themonkeycage.org/2010/02/revisiting_that_republicans_ar.html.

Notably, the post does not provide any additional information on the response and completion rates for the survey: the percentage of individuals contacted who agreed to participate, and the share of those who then completed the interview. This information is crucial to understanding how representative the poll’s respondents may be of typical Republicans, and it would be relatively easy to produce. Nor is this an unusual request: AAPOR’s “disclosure guidelines”:http://www.aapor.org/Disclosure_Standards.htm suggest that researchers include these figures as part of their normal disclosures. (Whether this actually happens often in practice is another story, but the request is certainly a reasonable one.) The completion rate is particularly interesting here, because if die-hard Republicans were somehow more likely to complete the survey than moderate Republicans, it would again raise questions about the relevance of the survey’s findings for the general population of Republicans.
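
For clarity, here is a simplified sketch of the two rates we have in mind. AAPOR’s standard definitions (its numbered response-rate formulas) are more detailed, and the counts below are made-up placeholders, not figures from the Research 2000 poll.

```python
contacted = 10000   # individuals actually reached by interviewers (hypothetical)
agreed    = 1200    # of those, the number who agreed to participate (hypothetical)
completed = 900     # of those, the number who finished the full interview (hypothetical)

response_rate   = agreed / contacted    # share of contacts who agreed to take part
completion_rate = completed / agreed    # share of starters who finished

print(f"response rate:   {response_rate:.1%}")    # 12.0%
print(f"completion rate: {completion_rate:.1%}")  # 75.0%
```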

Finally, it’s worth pointing out some additional concerns Ali’s post raises. Ali asserts that “the first law of polling is that every single individual in a designated population must have an equal chance of being selected as part of the polling sample.” We thought this might be a typo, but he repeats the claim at the end of the post, this time in italics. There is absolutely no need for every individual in the population to have an equal chance of being selected, and in nearly all surveys there are significant differences in selection probabilities across individuals. This is precisely what post-survey weighting is for. What matters is that these probabilities are known (or, realistically, can be estimated), so that reasonable weights can be calculated.
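
To illustrate why equal probabilities are unnecessary, here is a minimal sketch of inverse-probability weighting: if each respondent’s selection probability is known (or can be estimated), weighting responses by the inverse of that probability yields a sensible population estimate. The probabilities and answers below are invented for illustration.

```python
# (selection_probability, answered_yes) for four hypothetical respondents
respondents = [(0.02, 1), (0.01, 0), (0.04, 1), (0.01, 1)]

# Weight each respondent by the inverse of his or her selection probability.
weights = [1.0 / p for p, _ in respondents]

# Weighted estimate of the share answering "yes".
weighted_yes = sum(w * y for w, (_, y) in zip(weights, respondents))
weighted_share = weighted_yes / sum(weights)

unweighted_share = sum(y for _, y in respondents) / len(respondents)
print(f"unweighted: {unweighted_share:.2f}, weighted: {weighted_share:.2f}")
# unweighted: 0.75, weighted: 0.64
```

The point, again, is not that selection probabilities must be equal, but that they must be knowable.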

This point is made in every basic survey methods textbook (such as “this one”:http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470465468.html), so it’s disconcerting to hear Ali assert the contrary. What’s more, Research 2000’s surveys do not come close to living up to Ali’s standard of equal probabilities. Because they sample only by telephone, there is zero probability of reaching the approximately 2% of Americans who have no telephone service at all, and perhaps (depending upon their list of telephone exchanges) also the 15 to 20% who only use cell phones. And within the covered population, the probability of reaching any given individual depends on the number of telephone lines on which he or she can be reached and on the number of people who share each of those lines.
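
As a concrete illustration of that last point, the sketch below computes each hypothetical respondent’s relative chance of selection in a landline sample as the number of phone lines reaching the household divided by the number of adults sharing those lines, along with the design weight a pollster would apply to compensate. The households are invented.

```python
# (phone_lines, adults_in_household) for three hypothetical households
households = [(1, 1), (1, 4), (2, 2)]

for lines, adults in households:
    rel_prob = lines / adults        # relative chance any one adult is selected
    design_weight = adults / lines   # weight that offsets the unequal probability
    print(f"lines={lines}, adults={adults}: "
          f"relative prob={rel_prob:.2f}, design weight={design_weight:.2f}")
```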

Both of these concerns affect nearly every pollster, not just Research 2000, which is why post-survey weighting has become common practice. But if Research 2000 is saying that it does not use post-survey weights, that is troublesome and may help explain why its polls (as mentioned in the last post) tend to be outliers among major polling organizations. We hope Ali will follow up his original post to address this question, and we look forward to his response.