
Why the Senate forecasting models differ

- May 6, 2014


“There’s not a data journalism bubble (IMO) but there’s quite possibly a senate forecasting model bubble,” tweeted Nate Silver yesterday after Election Lab was launched.  Thus far, I think there are three formal forecasting models out there: ours, The Upshot’s (affectionately known as Leo), and that of political scientist Alan Abramowitz.  In addition, there are Silver’s initial expectations (not yet formalized in a model, I believe) and the informed views of handicappers like the Cook Political Report, the Rothenberg Political Report, and Larry Sabato’s Crystal Ball.  (The Upshot has a useful chart summarizing the views of all.)
There is a rough consensus that the GOP is likely to gain Senate seats, but no consensus on whether it will gain enough seats to win a majority.  Our current forecast gives the Republicans a higher likelihood of taking the Senate (82 percent) than does The Upshot’s (54 percent).  The Upshot’s team provided some thoughts yesterday on this discrepancy.  Here are some of ours.  (See also my conversation with Chris Cillizza.)
First, it’s important to note how similar the models are.  Both models capture whether it’s a midterm or presidential year, Obama’s vote share in each state in 2012, and some sense of the national mood. We capture national mood via measures of economic growth and presidential approval; The Upshot relies on the generic ballot.  (In our experience, substituting one measure for the other doesn’t affect the forecast very much.)  Both models also have a measure of each candidate’s previous highest elective office (if any).
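For readers who want to see the shared skeleton in code, here is a minimal sketch of a fundamentals-based model of this general kind: a logistic regression that predicts the probability of a Democratic win from a few state- and candidate-level inputs. The variables and toy data are hypothetical; this is not the Election Lab or Upshot code.

```python
# A minimal, hypothetical sketch of a "fundamentals" Senate model of the
# general kind described above; it is not the Election Lab or Upshot code.
# It predicts the probability of a Democratic win from a midterm indicator,
# Obama's 2012 share in the state, presidential approval, GDP growth, and
# a score for the candidate's highest previous elective office.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: one row per (made-up) past Senate race.
X_train = np.array([
    [1, 0.48, 0.44, 1.9, 2],
    [0, 0.55, 0.51, 2.8, 3],
    [1, 0.42, 0.39, 0.7, 1],
    [0, 0.60, 0.49, 2.1, 3],
    [1, 0.51, 0.46, 1.2, 0],
    [0, 0.45, 0.52, 3.0, 2],
])
y_train = np.array([0, 1, 0, 1, 1, 0])  # 1 = Democratic candidate won

model = LogisticRegression().fit(X_train, y_train)

# Forecast one hypothetical 2014 (midterm) race from the same inputs.
x_new = np.array([[1, 0.49, 0.43, 2.0, 2]])
print("P(Democratic win):", round(model.predict_proba(x_new)[0, 1], 2))
```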
In terms of the differences between the models, four main things stand out to me:
1) Leo is based on elections from 1992-2012, while Election Lab is based on 1980-2012.  As we have noted, the time period a model uses can affect its conclusion.  A model based on 1952-2012 generates a more favorable forecast for the Democrats than one based on 1980-2012.  To be sure, there is no one correct answer as to which time period is more appropriate. As a general rule, forecasters tend to prefer more elections to fewer — that is, more data to less data. But there is always the risk that elections from decades ago are not very similar to elections today.  Moreover, certain factors — like fundraising — are not available for elections in the 1950s and 1960s.
2) Leo includes approval ratings for incumbent Senators.  For example, this could affect the forecast for North Carolina.  Kay Hagan’s approval rating is only 41 percent, according to one recent poll.  This could help explain why The Upshot sees North Carolina as a toss-up, while our current model gives Hagan an advantage.
3) Leo includes fundraising.  We’ll be adding this to Election Lab soon — for both the House and Senate. Preliminary analysis suggests that it will render the two sets of forecasts more similar: For example, the Democratic advantage in Hawaii, New Jersey and Minnesota will increase.  Our forecast for Iowa will shift to a toss-up.
4) Leo includes polls.  We’ll be adding this as well.  Early polls are certainly related to the eventual outcome in Senate races, but the predictive power of pre-election polls increases sharply as the campaign goes on.  For this reason, Leo, like similar previous models, puts more weight on other factors than on polls, but the polls could also explain a few discrepancies.  Again, North Carolina comes to mind, since polls show Hagan in a tight race against her likely opponent, Thom Tillis.  At the same time, the polling average suggests a 64 percent chance that she is leading Tillis — which is not that far out of line with our forecast.
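For what it’s worth, a “chance of leading” figure like that 64 percent is typically derived by comparing the polling-average margin to its uncertainty. Here is a rough sketch of that calculation with hypothetical numbers; the actual polling-average methodology isn’t specified here.

```python
# Rough sketch: turning a polling-average margin into a "chance of leading"
# figure like the 64 percent cited above. Both numbers below are hypothetical
# assumptions; the actual polling-average methodology is not described here.
from scipy.stats import norm

hagan_minus_tillis = 1.5   # assumed polling-average margin, in points
margin_std_error = 4.0     # assumed uncertainty in that average

p_leading = norm.cdf(hagan_minus_tillis / margin_std_error)
print(f"P(Hagan currently leading): {p_leading:.0%}")  # roughly 65% with these inputs
```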
So, in short, most of the discrepancies between these two models won’t persist.  Indeed, to the extent that these and any other models rely increasingly on a polling average as the election approaches, they’ll be generating similar Senate forecasts.  This is one reason why the 2012 presidential election forecasts that relied mostly or entirely on polls — 538, Votamatic, Princeton Election Consortium, Pollster — were all saying the same thing by the end of the campaign.  Personally, I think it’s valuable to have as many models as possible.  Indeed, the average of the forecasting models is likely to be the best prediction of all.
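To make the averaging point concrete, here is a trivial sketch that combines the two takeover probabilities quoted above with equal weights; the weighting is an assumption, not anyone’s published method.

```python
# Illustration of averaging across models, using the two takeover
# probabilities quoted in this post; equal weights are an assumption.
forecasts = {"Election Lab": 0.82, "The Upshot (Leo)": 0.54}
ensemble = sum(forecasts.values()) / len(forecasts)
print(f"Averaged P(GOP Senate majority): {ensemble:.0%}")  # 68%
```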
One last question might have occurred to some readers: why roll out a forecast based on a model that’s not “complete” — that is, without factors like fundraising or polls?  There actually is some value in doing so, as I’ve written about here.  A model that rolls out in a sequential fashion — as ours has here and here — helps identify the role of specific factors.
For example, the fact that an initial forecast based on 1980-2012 was favorable to Republicans told us something about the basic political landscape.  Then when the forecast became more favorable to Republicans after we took into account the previous political experience of candidates, we learned something about the quality of the candidates that Republicans were recruiting.
If incorporating fundraising changes the forecast, then we’ll know something else about the potential strength of the candidates.  If adding polls shifts the forecast, then we can investigate what the polls might be capturing that the model isn’t — especially factors unique to a race.  We’ll also be able to identify whether the polls actually move toward our earlier forecasts, as often happens in elections.
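Here is a hedged sketch of that sequential logic, using simulated data rather than the real model: fit nested specifications, add one block of predictors at a time, and see how the forecast for a single race moves.

```python
# A sketch of the sequential-rollout idea with simulated data: fit nested
# specifications, adding one block of predictors at a time, and watch how
# the forecast for a single race moves. Everything here is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
fundamentals = rng.normal(size=(n, 2))   # e.g., Obama 2012 share, approval
experience = rng.normal(size=(n, 1))     # candidate prior-office score
fundraising = rng.normal(size=(n, 1))    # relative fundraising
y = (fundamentals[:, 0] + 0.5 * experience[:, 0]
     + 0.5 * fundraising[:, 0] + rng.normal(size=n) > 0).astype(int)

X = np.hstack([fundamentals, experience, fundraising])
race = np.array([[0.3, -0.2, 1.0, -0.5]])  # one hypothetical race's inputs

specs = {
    "fundamentals only": slice(0, 2),
    "+ candidate experience": slice(0, 3),
    "+ fundraising": slice(0, 4),
}
for label, cols in specs.items():
    m = LogisticRegression().fit(X[:, cols], y)
    p = m.predict_proba(race[:, cols])[0, 1]
    print(f"{label:23s} P(Dem win) = {p:.2f}")
```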
In short, there are lots of ways to extract useful knowledge from a forecasting model (even when the model proves to be wrong!).  We’ll have more to say about what we’re learning in the coming months.