
The Election Lab midterm forecast, explained

- July 17, 2014


Nate Cohn tweeted yesterday:
[Image: Nate Cohn’s tweet]
And he included the Upshot’s table of what the various forecasters and handicappers see.  Here’s a snapshot for the races considered most competitive.
[Image: The Upshot’s table comparing forecasters’ ratings of the most competitive Senate races]
So it might be helpful to talk through the Election Lab forecast and why it differs from the views of others in some races.
Q: Your forecast is pessimistic for the Democrats and optimistic for Republicans?  Why?
Our forecast combines (1) a model drawing on factors from the political landscape and (2) a polling average, and right now that combination is pessimistic for Democrats.
Our forecast is pessimistic first because the political landscape tilts in Republicans’ favor. It is a midterm year, and because of the “midterm penalty” the president’s party typically loses seats in Congress.
Moreover, the president is not that popular.  In fact, based on the June polls from the 17 midterm elections in our data, Obama’s 44 percent average approval rating was the fifth lowest.  As is typical in such an environment, the GOP has recruited and nominated quality Senate candidates in many key races.  That, and the early fundraising numbers, help them further.
Finally, in many races the current polls line up with the forecast — such as in Arkansas, Kentucky and North Carolina.  That is to say, the polls suggest the same winner that the model does.
Q: But a 99 percent chance of a GOP victory in Kentucky and Georgia?  Really?  That seems way too high.  I mean, Nunn is leading in the most recent Georgia polls.
There are a couple of answers to this. First, as we discussed last week, our current forecast isn’t based purely on polls, even in those races with polling data.  Our analysis of the 2008-2012 elections suggested that at this point — mid-July — the most accurate forecast relied on both a model and polls. So Nunn’s most recent polls are not, by themselves, enough to tip the race in her favor.  Harry Enten had a good discussion of why the FiveThirtyEight model also discounts these early Georgia polls to some extent:

In Georgia, the FiveThirtyEight projection is far more pessimistic about Democratic prospects than the polls. Here’s why: No Democrat holds an elected statewide office in Georgia. No Democrat has won a U.S. Senate race in the state in 14 years. No Democrat has won a presidential race in the state in 22 years. The Democratic candidate, Michelle Nunn, is probably benefiting from name recognition; her father, Sam Nunn, was a well-regarded senator from the Peach State. That edge may disappear once voters realize Michelle is not Sam. Moreover, Republicans may coalesce around the winner of the contentious GOP primary between Jack Kingston and David Perdue.

Second, it’s important to understand the implications of how we combine a model forecast and a polling average.  We think of the model’s forecast as a baseline prediction and polls as information that updates that prediction. (In the language of Bayesian statistics, the model’s forecast is the “prior” and the polls update the prior, producing a new “posterior” prediction.)  The model’s forecast comes with uncertainty — no model makes a perfect prediction, obviously — and the polls come with uncertainty too.
But, as political scientist Simon Jackman notes in this piece, the combination of a prior (the model) and new data (polls) tends to produce more certainty in the new forecast than you had in either the model’s prediction or the polls. This makes sense: The additional information that polls provide should make you more certain about your forecast.
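For readers who want the mechanics, here is a minimal sketch of that kind of combination, using the textbook normal-normal update. Everything in it, including the vote-margin scale, the assumption of normal errors and the specific numbers, is illustrative and not the Election Lab’s actual model.

```python
# Illustrative normal-normal Bayesian update: combine a model-based "prior"
# for a candidate's vote margin with a polling average. The numbers and the
# assumption of normally distributed errors are made up for illustration;
# they are not the Election Lab's actual inputs.

def combine(prior_mean, prior_sd, poll_mean, poll_sd):
    """Precision-weighted combination of two normal estimates."""
    prior_prec = 1.0 / prior_sd ** 2   # precision = 1 / variance
    poll_prec = 1.0 / poll_sd ** 2
    post_prec = prior_prec + poll_prec
    post_mean = (prior_prec * prior_mean + poll_prec * poll_mean) / post_prec
    post_sd = post_prec ** -0.5        # always smaller than either input sd
    return post_mean, post_sd

# Hypothetical race: the model expects the GOP candidate to win by 8 points
# (sd 6); the polling average shows +2 (sd 5).
mean, sd = combine(prior_mean=8.0, prior_sd=6.0, poll_mean=2.0, poll_sd=5.0)
print(round(mean, 1), round(sd, 1))    # ~4.5-point GOP margin, sd ~3.8
```

Notice that the combined estimate sits between the model and the polls, and its standard deviation is smaller than either input’s, which is exactly the extra certainty Jackman describes.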
Thus, when we say that a candidate has a 99 percent chance of winning, it reflects the relatively low uncertainty we have about that race.  When we simulate hypothetical elections over and over to generate a percent chance of winning, relatively few of those hypothetical elections end up with that candidate losing.
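Here is a sketch of how a forecast like that becomes a “percent chance of winning” by simulation. Again, the normal distribution over the margin and the numbers are assumptions for illustration, not a description of the Election Lab’s exact procedure.

```python
import random

def win_probability(margin_mean, margin_sd, n_sims=100_000):
    """Simulate hypothetical elections and count how often the candidate's
    margin comes out positive (i.e., the candidate wins)."""
    wins = sum(1 for _ in range(n_sims)
               if random.gauss(margin_mean, margin_sd) > 0)
    return wins / n_sims

# With the hypothetical combined estimate above (+4.5 points, sd 3.8), the
# candidate wins roughly 88 percent of the simulated elections; a larger
# margin or a tighter distribution is what pushes a race toward 99 percent.
print(win_probability(4.5, 3.8))
```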
Q: The other major forecasting model, at the Upshot, doesn’t give the GOP as high a chance of taking the Senate. Why is your estimate different?
In my previous post on this, I noted that one possible difference was the Upshot’s greater reliance on early polls.  Although we have subsequently factored the polls into our forecast, we put less weight on them in key races than does the Upshot.
We think this continues to make a difference.  If we place virtually all the weight on the polls in Senate races that have polls, our forecast becomes more favorable to the Democrats and quite similar to the Upshot’s: Republicans have about a 60 percent chance of taking the Senate.  Much of this difference stems from a few key races where our model differs most from the polls — especially Georgia, Kentucky and (to a lesser extent) Louisiana.  In many other races, such as Arkansas, Colorado, North Carolina and New Hampshire, our model and the polls are in very close agreement, and changing the weighting makes little difference.
We believe that, as of now, the polls deserve some weight but not all of the weight.  For example, take the new poll that showed Michelle Nunn ahead after a long period with no public polling in that race. Had we placed a heavy weight on the polls, we would have been forced to conclude that the ground in that race had suddenly shifted, and with it the odds of a Democratic Senate.  We think it’s too early to come to such a conclusion.  However, our analysis also suggests that, by about the middle of September, the most accurate forecast relies primarily on polls.
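To see what “some weight now, primarily polls by mid-September” might look like in practice, here is one purely hypothetical weighting schedule: a linear ramp on the poll weight as Election Day approaches. The function names, cutoffs and weights are illustrative, not the rule the Election Lab actually uses.

```python
def poll_weight(days_until_election, start=110, end=50):
    """Hypothetical weight (0 to 1) placed on the polling average, ramping up
    linearly from `start` days out (roughly mid-July) to `end` days out
    (roughly mid-September). The cutoffs and the 0.5/0.95 values are
    illustrative, not the Election Lab's actual schedule."""
    if days_until_election >= start:
        return 0.5          # early on, polls and the model share the weight
    if days_until_election <= end:
        return 0.95         # by mid-September, rely primarily on the polls
    frac = (start - days_until_election) / (start - end)
    return 0.5 + frac * (0.95 - 0.5)

def blended_margin(model_margin, poll_margin, days_until_election):
    """Blend the model's predicted margin with the polling average."""
    w = poll_weight(days_until_election)
    return w * poll_margin + (1 - w) * model_margin
```

Under a schedule like this, a surprising poll moves the forecast only modestly in July but carries most of the weight by mid-September.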
It’s also important to note that our model and the Upshot’s agree on key races and even some of the competitive races.  For example, we agree that the GOP is strongly favored in Kentucky and Arkansas, and that Democrats are strongly favored in New Hampshire, Michigan and Colorado.
We would also emphasize — as has the Upshot — that it is best not to put too much stock in small differences in a forecast probability. If the Upshot says that Mitch McConnell has an 85 percent chance of winning and we say 99 percent, that’s actually not that big a difference. It is best to think of these probabilities in chunks of 10 or 20 percentage points. The available forecasting technology is just not precise enough to do a better job than that.
Q: Your forecast also differs from that of handicappers like the Cook Political Report, the Rothenberg Report and Larry Sabato’s Crystal Ball. Why is that?
That’s a good question. Our view is that handicappers can often provide a granular sense of individual races that can’t be easily captured in models. Chris Cillizza noted (correctly!) this limitation of models in response to our new forecast.  So one possibility is that the handicappers are seeing things that we can’t readily measure, either in this election or in previous elections.
Another possibility is that, in some cases, handicappers’ takes on these races depend on a sense that the polls are tied or close to it.  This might explain, for example, why both the Upshot and our model see Kentucky and Arkansas as likely to go to the GOP and Michigan and Colorado as likely to go to the Democrats, whereas one or more of the handicappers see these races as pure tossups.
In any case, the point is not that forecasting models are always and everywhere superior. It is just that how those models weigh information may diverge from how others do so.  (But, interestingly, everyone tends to have a similar sense of what will happen in House elections.)
Q: Couldn’t your Senate forecast change?
Sure. First, it may change as the forecast weights the polls more heavily. In other words, even if the polls don’t change in any Senate race over the next two months, the forecast will shift toward the Democrats, because right now the polls in several key races are more favorable to Democrats than our model is.
Second, the forecast will change if the polls themselves change. Already, we’ve seen gains by GOP candidates in Kentucky and Iowa. Our sense is that the polls often move toward a model’s forecast, which would likely work in the GOP’s favor. But occasionally the polls move in ways that we cannot anticipate now.
So we’ll continue to update the forecast at Election Lab and write more about it here.