
Do non-citizens vote in U.S. elections? A reply to our critics.

- November 2, 2014

Our blog post and article on non-citizen voting have reached a wide audience and have motivated several efforts to dispute our methods and conclusions. Although the criticisms of our work speak to the inherent difficulty of studying individuals who face strong pressures to misrepresent their behaviors, we maintain that our data are the best currently available to answer the question, and we stand by our finding that some non-citizens have voted in recent elections.
This response articulates the reasons why (1) attempts to show that the measures we used are not valid or reliable have often actually supported our argument; (2) criticism of the CCES as a survey instrument is off target; and (3) it was appropriate for us to share our findings from the study.
Why the measures we use are valid and reliable
One line of criticism focuses on the risk of misrepresentation, response error, or click-through on the survey — the chance that individuals might err when they report that they are non-citizens, registered to vote, or a voter in U.S. elections.
In anticipation of the objection that the CCES subsample was not representative of non-citizens in the United States, our article includes an appendix that evaluates the validity of non-citizen self-reports. Michael Tesler offered a perceptive rejoinder that used the 2010-2012 CCES panel survey to generate test-retest reliability measures for the non-citizen voting response.
Tesler found that about 81 percent of self-reported non-citizens in 2012 had also indicated that they were non-citizens in 2010. This opens an opportunity to focus on the respondents most reliably identified as non-citizens. For the 85 respondents who said they were non-citizens in both 2010 and 2012 to actually be citizens would require them to have made the same mistake in both years. Either there is systematic bias in the CCES instrument — which raises uncomfortable questions about the validity of other CCES measures, as well — or all or nearly all of these 85 respondents meant to respond that they are non-citizens.
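To see the force of the double-error argument, consider a minimal sketch of what independent random response error would predict. The per-wave error rates and the panel size below are hypothetical illustrations, not figures from the CCES documentation:

```python
# Sketch: expected number of citizens who would misreport "non-citizen"
# in BOTH panel waves if response errors were independent random slips.
# Both the error rates and the panel size below are hypothetical
# illustrations, not figures from the CCES documentation.

def expected_double_errors(n_citizens: int, error_rate: float) -> float:
    """Expected citizens misclassified in both waves, assuming independence."""
    return n_citizens * error_rate ** 2

for e in (0.001, 0.005, 0.01):
    print(f"per-wave error rate {e:.3f}: "
          f"{expected_double_errors(18_000, e):.2f} expected double-errors")

# Even a 1 percent per-wave error rate yields only ~1.8 double-errors
# among 18,000 hypothetical panelists, far short of 85, unless the
# errors are systematic rather than random.
```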
These “consistent” non-citizens give us a special sample with which to test the reliability of the data. If, as some have suggested, responses by self-reported non-citizens that they voted are the result of citizens accidentally claiming to be non-citizens, we should see almost no self-reported votes among consistent non-citizens, or at least a lower rate of voting than in the overall sample of self-reported non-citizens. However, 10 of the 85 consistent non-citizens indicated that “I definitely voted in the General Election” in 2012: a turnout rate of 11.7 percent. Indeed, this rate is on par with the highest point estimates reported in our article, providing support for our finding that some non-citizens report voting in U.S. elections.
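For a rough sense of the sampling noise around that 11.7 percent figure, here is a minimal sketch that computes a normal-approximation 95 percent confidence interval from the counts above (10 of 85); the interval is our back-of-the-envelope calculation, not one reported in the article:

```python
import math

# Consistent non-citizens (self-reported in both 2010 and 2012) who said
# "I definitely voted in the General Election" in 2012, per the text above.
voters, n = 10, 85

p = voters / n
se = math.sqrt(p * (1 - p) / n)        # normal-approximation standard error
lo, hi = p - 1.96 * se, p + 1.96 * se  # rough 95% confidence interval

print(f"turnout estimate: {p:.1%} (95% CI roughly {lo:.1%} to {hi:.1%})")
# ~11.8%, with a rough CI of about 4.9% to 18.6%: noisy, but clearly
# above zero, which is hard to square with pure response error.
```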
Non-citizen voters have incentives to misrepresent either their citizenship status or their voting status. After all, claiming to be both a non-citizen and a voter is confessing to vote fraud, and the Federal Voter Registration Application specifically threatens non-citizens who register with a series of consequences: “If I have provided false information, I may be fined, imprisoned, or (if not a U.S. citizen) deported from or refused entry to the United States.” This possible penalty would tend to reduce the proportion of non-citizen voters who would report having voted. Tesler also highlights the 14 individuals who said they were non-citizens in 2012 and voters in 2010. In 2012 their electoral participation rate dropped by a statistically significant 43 percent: a change in self-reported status from citizen to non-citizen predicts a significantly lower probability of voting. Although these numbers are very small, they do suggest evidence of incentives to misrepresent status.
On the subject of registration status, John Ahlquist and Scott Gehlbach revisit a point we discuss in the paper: that quite a few non-citizens who say they are registered do not have a validated registration status. Although they claim that “Richman and Earnest don’t enumerate” this limitation, in fact we do. The 14 non-citizens they discuss, those with both a self-stated registration status and a verified registration, correspond after our weighting process to the 11 non-citizens in the second row of Table 1 of our article.
Concerning validated voters in the 2008 CCES survey, Tesler writes, “In fact, any response error in self-reported citizenship status could have substantially altered the authors’ conclusions because they were only able to validate the votes of five respondents who claimed to be non-citizen voters in the 2008 CCES.” An alert reader pointed out that slightly more validated-voter data are now available for the 2010 CCES. There are seven verified voters among the non-citizens in the 2010 CCES, of whom two stated that they definitely voted in the election and one indicated “I did not vote.” Three of the others were not asked if they voted, and one selected “I attempted to vote but couldn’t,” which suggests that perhaps a provisional ballot was later accepted.
Our critics also overstate the methodological challenge of estimating the frequency of low-probability behaviors from survey items that carry some degree of error. If response error accounted for all of our measured non-citizen voting, why would the percentage who reported voting drop so dramatically from the presidential years (8 percent in 2008, 11.7 percent for consistent non-citizens in 2012) to the midterm year (3.5 percent in 2010)? Why, too, would the portion of non-citizens with a validated vote drop by more than two-thirds, from 4.7 percent in 2008 to 1.3 percent in 2010? These are the patterns one would expect if the measures retained validity and non-citizens were a group mobilized more in presidential election years than in midterms.
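One way to formalize this comparison is a standard two-proportion test of the presidential-year rate against the midterm-year rate. The sketch below plugs in the validated-vote rates quoted above, but the sample sizes are hypothetical placeholders rather than the article’s actual Ns, so the output illustrates the method only:

```python
import math

def two_prop_z(p1: float, n1: int, p2: float, n2: int) -> tuple[float, float]:
    """Two-proportion z-test; returns (z, two-sided p-value)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Validated-vote rates quoted above; the sample sizes are hypothetical
# placeholders, NOT the article's actual Ns.
z, p = two_prop_z(0.047, 300, 0.013, 300)
print(f"z = {z:.2f}, two-sided p = {p:.3f}")
# Random click-through error predicts similar rates in both years; a
# mobilization story predicts the presidential-year spike we observe.
```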
McCann and Jones-Correa imply that data on campaign contributions of Latino immigrants allow us to assess the construct validity of the CCES measure of voting behavior. They conclude that the presence of only three (0.9 percent) non-green-card-holding non-citizens who report making contributions casts doubt on our finding of non-citizen voting. The problem with this argument is that, even among citizens, there is only a weak association between contributions and voting. ANES data show citizens are six times more likely to report a vote than a contribution.
Thus there doesn’t appear to be much construct validity even for a large representative sample of citizens. Campaign contributions also are arguably a riskier form of political engagement for non-citizens because donations over $200 to federal campaigns are publicly disclosed. It is not at all surprising that McCann and Jones-Correa find little evidence of campaign contributions among non-citizens. But that doesn’t say anything meaningful about their voting behavior, as they acknowledge in their concluding paragraph.
Although our estimates of non-citizen registration and voting are higher than previous estimates, this should not be surprising. To our knowledge, ours is the first study to use survey data to estimate non-citizen voting, while other studies have relied upon incidents of detected vote fraud. Estimates of illegal behavior based upon survey data are frequently higher than estimates based upon detection rates. For example, survey-based estimates indicate that more than six percent of the U.S. population over age 12 uses marijuana on at least a monthly basis — a rate more than 15 times the annual arrest rate.
Why the CCES is a useful survey instrument
Ahlquist and Gehlbach also argue that the CCES is an inappropriate survey to use to analyze the voting behavior of non-citizens. Obviously the CCES wasn’t designed to provide a representative sample of non-citizens. Nonetheless, we believe that we have used the data in an appropriate way.
According to the discussion of methodology provided with the 2008 CCES, the sampling frame for the study was based upon the 2006 American Community Survey, and the construction of the sampling frame did include citizenship (p. 11). It is important to keep in mind, however, the sample-matching methodology applied by the CCES. The sampling frame based on citizenship status was merely used to create targets to which the actual sample of panel members was matched.
As far as we can tell, all other stages of the sampling process were neutral with respect to citizens and non-citizens who had the same demographic characteristics. For example, those in the sampling frame were matched with respondents from the “YouGov/Polimetrix PollingPoint Panel and the E-Rewards and Western Wats panels, using a five-way cross-classification (age x gender x race x education x state).” (p. 11).
Similarly, citizenship appears to have been ignored (at least in the survey documentation) in the construction of sample weights. If our interpretation is correct, citizens and non-citizens in the panels had an identical chance of being sampled, conditional on having matching demographic characteristics. Furthermore, it appears that no non-citizen in the panels would have had a zero chance of being selected.
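To make the sample-matching logic concrete, here is a minimal sketch of the procedure as we understand it from the survey documentation; the data structures and the matching rule are our simplification, not the actual YouGov/Polimetrix algorithm:

```python
import random

# Five-way demographic cross-classification used for matching,
# per the CCES documentation. Citizenship is NOT among the keys.
MATCH_KEYS = ("age", "gender", "race", "education", "state")

def cell(person: dict) -> tuple:
    """Demographic cell used for matching."""
    return tuple(person[k] for k in MATCH_KEYS)

def match_sample(frame: list[dict], panel: list[dict]) -> list[dict]:
    """For each target in the frame, draw an unused panelist from the
    same demographic cell (a simplification of sample matching)."""
    pool: dict[tuple, list[dict]] = {}
    for person in panel:
        pool.setdefault(cell(person), []).append(person)
    for members in pool.values():
        random.shuffle(members)
    return [pool[cell(t)].pop() for t in frame if pool.get(cell(t))]

# Because citizenship is absent from MATCH_KEYS, a citizen and a
# non-citizen with identical demographics have the same chance of
# being matched, which is the premise of our re-weighting approach.
```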
We conclude that although the CCES was not designed to measure non-citizen electoral participation, analyses of non-citizens are valid with appropriate re-weighting, as we did in our article, because the panels from which the actual respondents were drawn contained non-citizens, and because non-citizens and citizens with the same characteristics appear to have had an equal probability of being sampled.
Nevertheless, we agree with critics that the limitations of inferences based upon a single survey make additional surveys desirable. As the American National Election Study (ANES) adds a larger Web survey, it may attain a large enough sample size to be useful.  Another survey that could add a vast sample of non-citizens without requiring anything more than a change in its survey script is the Current Population Survey (CPS). The CPS could easily begin asking non-citizens its current questions about registration and voting behavior.
Why it is appropriate to share our research findings
A final criticism concerns how we communicated our findings rather than the findings themselves. As our colleagues have colorfully suggested, our post “contributed to the circus” rather than making sense of it, and they question whether we intended “to provide fuel to the conspiracy theorists” who suspect widespread voter fraud. Ahlquist and Gehlbach even criticize the title of our post, which was not our proposed title. (Editor’s note: Most guest post titles are written by whichever of the main Monkey Cage contributors handles the submitted post.) We trust that our colleagues do not mean to suggest that authors should self-censor findings that speak to contentious debates.
We acknowledge that the forthcoming midterm election afforded us an opportunity to draw attention to the study. The timing of the publication of our work explains in part the timing of our post (Electoral Studies accepted our piece on Sept. 3 and made it available online on Sept. 21), though this timing also makes clear that we could have published on The Monkey Cage weeks earlier. Because we blogged a mere 10 days before the midterm elections, with mail-in voting already underway in many places, our research will likely have no effect on 2014 voter rolls and regulations, and there is ample time for sober assessment and replication before 2016.
In both our article and blog post we have acknowledged the limitations of our analysis. We continue to welcome criticisms of our methodology and attempts to validate, replicate or refute our study. Knowledge emerges from debate, dialogue and critical examination of findings—processes that are intrinsically contentious. We trust that our colleagues share our appreciation of the value of this debate — and more importantly, of our willingness to engage in it.