
Sometimes political fact-checking works. Sometimes it doesn’t. Here’s what can make the difference.

June 3, 2015

True, false, and otherwise: how has the rise in political fact-checking affected politics and political beliefs?
Political fact-checking has grown tremendously in the past decade, both in the United States and globally. In the United States, FactCheck.org launched in 2003; PolitiFact and The Washington Post’s Fact Checker followed in 2007.
Their prominence has grown quickly. Between 2001 and 2012, as I found while compiling a report for the New America Foundation (NAF), newspapers increased their mentions of fact-checking by more than 900 percent while the broadcast news media did so by more than 2,000 percent. More recently, an American Press Institute (API) report by Lucas Graves, Brendan Nyhan and Jason Reifler estimated that between 2004 and 2008, newspaper mentions of fact-checking increased by more than 50 percent and between 2008 and 2012 by more than 300 percent.
So what? Does political fact-checking actually accomplish anything? Just two years ago, there were few empirical measures to tell us whether fact-checking was effective either in deterring politicians from lying or in changing people’s beliefs. Nor were there measurements of what kinds of fact-checks worked best.
Now there are. In the months since, research has begun to proliferate. Here’s what social science has learned about the growing enterprise of fact-checking.
Do fact-checks actually deter politicians from lying?
Yes. Evidence indicates fact-checking can improve political behavior. For instance, a frequent Democratic refrain in 2011 was that by supporting Rep. Paul Ryan’s proposed budget, Republicans voted to end Medicare – an inaccurate claim according to many leading fact-checkers.
As I documented in the NAF report, a search of the Congressional Record revealed 50 references to some variant of “voted to end Medicare” during the first session of the 112th Congress. But after PolitiFact named this claim its “Lie of the Year,” the number of references during the second session was cut nearly in half.
Even more compelling evidence comes from a field experiment. Nyhan and Reifler sent a letter to a randomly selected group of state lawmakers – but not to others – reminding them of the reputational risks of being called out by a fact-checker. Compared with lawmakers who did not receive a letter, those who did were significantly less likely to have made claims that fact-checkers challenged. Just knowing they might be checked, in other words, makes politicians more careful about their claims.
Do fact-checks change people’s beliefs?
All the fact-checkers I interviewed for my NAF research said their first priority was to inform voters. Is that working? The evidence is mixed. A number of studies have shown that, under certain conditions, misinformation can be successfully corrected with fact-checking.
For instance, journalistic fact-checking was superior to “he-said, she-said” reporting at correcting people’s beliefs, according to research by Ray Pingree, Dominique Brossard, and Doug McLeod. But that study involved checking misinformation about a company, with no explicit partisan cues.
Similarly, an API study I co-authored with Emily Thorson, Ashley Muddiman and Lucas Graves successfully corrected beliefs about a breakfast cereal’s health benefits – but that, too, was a non-political topic. Political attitudes, of course, can be particularly entrenched and hard to change.
Meanwhile, other researchers have found that fact-checking does not change beliefs when extraneous details in the fact-check itself, such as noting the religious affiliation of the public official, reinforce previously held partisan attitudes. Similarly, the API study I co-authored showed that fact-checking did change minds – but only among the people who were in the same party as the politician whose facts had been challenged.
For example, Republicans were more likely to change their minds when a Republican politician’s claim had been disproven, and vice versa. But a fact-check did not change the minds of those aligned with the other party (say, Democrats when a Republican had been called out). That’s because those in the opposing party were less likely to believe the politician to begin with, so the fact-check had no substantial effect. That was true even when the claim itself was nonpartisan – in our case, about the amount of negative advertising in a campaign.
Generally, people assess a message based on whether the messenger is “one of them.”
Beyond being ineffective, correcting claims about a highly controversial issue can actually backfire. People who are diehard believers hold their beliefs even more firmly when those beliefs are challenged.
For instance, Nyhan and Reifler found that trying to correct conservatives’ beliefs about weapons of mass destruction in Iraq led those conservatives to reject the facts and hold more firmly to their beliefs. Trying to correct liberals’ beliefs about stem-cell research was also ineffective. The moral: partisanship is stronger than facts.
What kinds of fact-checks are more or less effective?
According to a study by Kim Fridkin, Patrick Kenney and Amanda Wintersieck, people find fact-checks most convincing when they challenge claims made in negative advertisements – and least convincing when fact-checks show those claims were correct. Fact-checkers seem to have figured this out on their own.
When I analyzed ads from the 2008 presidential election, I found that fact-checkers were most likely to scrutinize attack ads. Given the rise in negative campaign advertising, they’ve got plenty to work with for a while.
Most people prefer fact-checks that use visual rating scales – such as PolitiFact’s Truth-O-Meter or The Washington Post Fact Checker’s Pinocchios – according to an API study that I co-authored. But a substantial minority of people (44 percent) prefers textual corrections, such as those from FactCheck.org or TruthInAdvertising.org, that explain both the original claim and the facts.
Both formats were equally effective at correcting political misinformation. When the misinformation was non-political, however – like a breakfast cereal’s health claims – people were more convinced by the rating scale. Furthermore, despite some criticisms of rating scales, we found they don’t harm people’s attitudes toward a news organization.
But wait, there’s more!
It’s worth pointing out that the journalistic enterprise of fact-checking is spreading beyond politics. Fact-checkers are now evaluating science-based claims and consumer product and service claims. Future research is needed to measure the success of non-political fact-checkers, such as TruthInAdvertising.org, at deterring the spread of other forms of misinformation – like claims that a product was “Made in America” when it was not.
If John Bohannon’s story is any indication, many media outlets still completely fail to scrutinize the validity of “scientific” claims – such as that chocolate accelerates weight loss.
What fact-checking does best is reduce or prevent inaccurate political rhetoric, and it may be most effective during primary races. The growing effort is helping to shape what politicians say and whether their partisans take those statements on faith or weigh them against the facts.
Political fact-checking can’t do everything. The old line “I know what I believe, don’t confuse me with the facts” accurately sums up how stubbornly people can hold onto their most cherished convictions. But whether via visual scales or explainer posts or both, the new genre of fact-checking is making a difference.
Michelle A. Amazeen is an assistant professor of advertising at Rider University in Lawrenceville, N.J.