Data and analytics have changed campaigns. Now what's next?

January 23, 2014

Lynn Vavreck and I have a new piece at Pacific Standard that looks at the state of data and analytics in campaigns.  Part of the argument is that data and analytics have made huge strides, most visible in the impressive work of the Obama campaign.  We cite several examples of the Obama campaign’s innovations in the piece.  My favorite is a cool February 2012 experiment in persuasion that has, to date, gone mostly unreported (with the natural exception of Sasha Issenberg, of course).

That February, the Obama campaign randomly assigned a sample of registered voters, regardless of party, to a control group or a treatment group. It then invested a massive amount of volunteer effort—approximately 300,000 phone calls—to reach the treatment group with a message designed to persuade them to vote for Obama. Two days later, the campaign was able to poll about 20,000 of these voters and ask how they planned to vote. The phone calls worked: People who were called were about four points more likely to support Obama. The effect was so large that Obama’s director of experiments, Notre Dame political scientist David Nickerson, calls it “stunning.”
Of course, the Obama campaign did not assume that a single phone call in February had locked up these voters. The main contribution of the experiment—what Nickerson now calls “one of the big innovations”—was the model the Obama analytics team then built. This model was not like the models that both Democratic and Republican campaigns typically construct, which generate scores representing how likely you are to vote for the Republican or the Democrat. The February experiment generated a statistical model of “persuadability”—how likely you were to change your mind when contacted by the campaign.
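To make the excerpt’s logic concrete, here is a minimal sketch of both steps: estimating the average effect of a persuasion call from the random assignment, and then building a “persuadability” score from the same data.  Everything here is illustrative: the voter data are synthetic, the numbers are invented, and the two-model (“T-learner”) approach is one generic uplift-modeling technique, not the Obama team’s actual code or model.

```python
# Illustrative sketch only: synthetic data, invented numbers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# A made-up voter file: four generic covariates (think age, turnout
# history, partisanship score); treated = 1 if the voter got a call.
n = 20_000
X = rng.normal(size=(n, 4))
treated = rng.integers(0, 2, size=n)

# Simulate follow-up poll answers in which the call moves only a subgroup.
base_support = 1 / (1 + np.exp(-X[:, 0]))   # support with no call
lift = 0.08 * (X[:, 1] > 0)                 # the "persuadable" voters
supports = (rng.random(n) < base_support + treated * lift).astype(int)

# Step 1: because assignment was random, the simple difference in support
# rates between called and uncalled voters estimates the average effect.
effect = supports[treated == 1].mean() - supports[treated == 0].mean()
print(f"Average persuasion effect: {effect:+.1%}")

# Step 2: fit separate support models for called and uncalled voters;
# a voter's persuadability score is the gap between the two predictions.
m_treat = LogisticRegression(max_iter=1000).fit(X[treated == 1], supports[treated == 1])
m_ctrl = LogisticRegression(max_iter=1000).fit(X[treated == 0], supports[treated == 0])
persuadability = m_treat.predict_proba(X)[:, 1] - m_ctrl.predict_proba(X)[:, 1]
print("Mean score in the top decile:", np.sort(persuadability)[-n // 10:].mean().round(3))
```

The division of labor is the point: randomization answers “did the calls work on average?” with a simple difference in means, while the model takes on the harder question of which individual voters are worth calling again.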

At the same time, we also argue that it’s important to acknowledge the limits of data and analytics and the challenges they face.  In talking about these limits, our argument isn’t that these tools or the efforts of the Obama campaign or campaigns generally “don’t matter.”  We are simply trying to get past some of the media hyperventilation and put these tools in proper perspective.
For one, although the “big data” about voters that many campaigns use are actually not that big (hence the title that Pacific Standard gave to the online version of the piece), these data have greatly improved the ability of campaigns to target voters.  That much is known and occasions a lot of gee-whiz stories in the media.  But at the same time, campaigns cannot target with perfect accuracy.  And even more importantly: there is no systematic evidence that micro-targeted messages are more effective than broadly targeted messages.  Check out this research by political scientists Eitan Hersh and Brian Schaffner, for example.  There are some related quotes from Obama campaign staff in our piece.
Second, although the Obama campaign took “testing” to new heights, it couldn’t test everything.  No campaign ever could, given the sheer number of decisions it must make and given limitations in time, money, and staff.  As the Obama campaign’s Chief Analytics Officer, Dan Wagner, told us: “You can’t test every single thing as it happens.”  Key decisions — the major themes of the campaign as well as television ad buys — were tested with traditional methods like polls and focus groups, not the kinds of randomized controlled experiments that constitute the “lab” of The Victory Lab.  We should see randomized experiments extend to new areas in the future — like ad buys, as promised by this new venture by Wagner and Jim Margolis of the Obama campaign — but such experiments didn’t figure in the 2012 campaign.
Third, the growing use of social media is important and innovative, but its impact is poorly understood.  For example, the Obama campaign used the Facebook networks of supporters to identify targeted voters who were difficult to reach, and then asked these supporters to contact their friends.  Vavreck and I consider this an important innovation and a harbinger of what is to come.  But no one really knows what the effects of the program were.  That is, how many actual votes did targeted sharing generate?  Likely we’ll know more about these tools soon.  For now, however, it’s important not to let the hype get ahead of the data.
Finally, when we asked key practitioners of campaign analytics to talk not just about the upside but about the limits of data, experiments, and analytics, we were surprised at how forthcoming they were.  Wagner cited “time, talent, and money.”  The “talent” factor is key: people need to be trained in statistics, political science, or cognate fields to do this work.  Money might be even more important.  The Obama campaign could afford these tools, but many smaller campaigns cannot — although that too may change, as firms like BlueLabs are trying to make these tools scalable and affordable in down-ballot races.
The key is keeping these tools in perspective.  Here is a passage from the piece with quotes from Jennifer Green, the Executive Director of the Analyst Institute, the group most responsible for bringing randomized experiments to campaigns:

After the election, reporting and commentary were full of what Jennifer Green calls “guru-ism” — hagiographic features like Rolling Stone’s profiles of “The Obama Campaign’s Real Heroes,” a.k.a. the “10 key operatives who got the president reelected.” But many gurus reject such characterizations. “What’s next,” Green told us, “is reining in the guru-ism.” Later she said, “I think the media articles about how this stuff is the ‘magic everything’ are creating a certain unrealistic optimism about what these strategies can do anytime in the near future.”

All of the Obama staff we talked to knew better than to make bold claims about how “data” or anything else “won” Obama the election.  The election itself was not a giant experiment, and it is impossible to know how much of Obama’s four-point margin of victory was “because of” these tools or even the campaign itself.  Again, this does not mean that the work of Obama’s campaign — or Romney’s, for that matter — wasn’t important.  Just that estimating its effect is a lot harder than the headlines would sometimes lead you to think.
For more, see our piece.