
Controversial claims about marriage promotion break the statistical rules of evidence

- January 16, 2014

(The Washington Post)
Sociologist Philip Cohen points us to a shaky study on the effects of government-supported Healthy Marriage Initiatives. Here’s Cohen:

In a (paywalled) article in the journal Family Relations, Alan Hawkins, Paul Amato, and Andrea Kinghorn, attempt to show that $600 million in marriage promotion money (taken from the welfare program!) has had beneficial effects at the population level. . . .
After a literature review that is a model of selective and skewed reading of previous research (worth reading just for that), they use state marriage promotion funding levels* in a year- and state-fixed effects model to predict the percentage of the population that is married, divorced, children living with two parents, one parent, nonmarital births, poverty and near-poverty . . .
To find beneficial effects — no easy task, apparently — they first arbitrarily divided the years into two periods. . . .

I agree with Cohen that the division seems arbitrary: it creates more ways of reaching statistical significance, but it doesn’t make much sense if the goal is to understand what’s really going on.
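To see why trying out different period divisions inflates apparent significance, here is a minimal sketch, not the authors’ actual analysis: it simulates a pure-noise outcome series (no true effect at all), lets a hypothetical analyst try several candidate split years, and counts how often at least one comparison comes out “significant.” All numbers are made up for illustration.

```python
import numpy as np
from math import sqrt

rng = np.random.default_rng(42)

def split_significant(series, split, crit=1.96):
    """Welch-style two-sample test comparing the early vs. late period,
    using a rough normal cutoff in place of exact t quantiles."""
    early, late = series[:split], series[split:]
    se = sqrt(early.var(ddof=1) / len(early) + late.var(ddof=1) / len(late))
    t = (late.mean() - early.mean()) / se
    return abs(t) > crit  # nominally a ~5% test

n_sims, n_years = 10_000, 20
candidate_splits = [6, 8, 10, 12, 14]  # several "arbitrary" period divisions

any_hit = 0
for _ in range(n_sims):
    y = rng.standard_normal(n_years)  # no true period effect by construction
    if any(split_significant(y, s) for s in candidate_splits):
        any_hit += 1

print(f"Nominal level: 5%; rate with {len(candidate_splits)} tries: "
      f"{any_hit / n_sims:.1%}")
```

The tries are correlated (each split reuses the same data), so the family-wise rate is below the independent-tests bound, but it still lands well above the nominal 5%. Reporting whichever split “works” is exactly the multiple-comparisons problem Cohen is pointing at.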
Cohen goes into more detail about a dubious outlier-removing exercise, and then concludes:

Finally, their own attempt at a self-serving conclusion is the most damning:

Despite the limitations, the current study is the most extensive and rigorous investigation to date of the implications of government-supported HMIs for family change at the population level.

Anyway, please keep giving the programs money, and us money for studying them**:

In sum, the evidence from a variety of studies with different approaches targeting different populations suggests a potential for positive demographic change resulting from funding of [Marriage and Relationship Education] programs, but considerable uncertainty still remains. Given this uncertainty, more research is needed to determine whether these programs are accomplishing their goals and worthy of continued support.

*The link to their data source is broken. They say they got other data by calling around.
**The lead author, Alan Hawkins, has received about $120,000 in funding from various marriage promotion sources.

I’ve received lots of research funding myself, including from corporate sources, so I can hardly criticize Hawkins for getting research support where he can. But, given the many specific problems that Cohen points out in this study, it appears that this research team had a goal in mind and then went through the data looking for confirming evidence. From their perspective, I imagine that complaints about multiple comparisons and selection bias would fall into the category of “technicalities.” But, y’know, if you want to play the game you gotta follow the rules. Hawkins et al. should feel free to write advocacy pieces and support their arguments with stories and qualitative data, but if they want to get quantitative, they should follow what the data tell them, rather than leading the data around by the nose to get the conclusions they want.
And all the above statistical issues are over and above the big problem, which is the near-impossibility of learning much about the effects of such a program from aggregate state-level statistics in an observational study.
Before I get a bunch of angry comments from supporters of government-supported Healthy Marriage Initiatives, let me be clear: the effects of such a program are just really hard to learn, no matter what. That doesn’t mean the program is a bad idea (or a good idea); it just means that you’re only going to learn the crudest things by looking at before-after aggregate data. The fact that a study didn’t really show the effects it claimed (because of the statistical problems that Cohen pointed out) does not mean it’s a bad program. You just can’t take the Hawkins et al. paper as any kind of real evidence. If you want to support (or oppose) the program, it shouldn’t be based on the numbers presented in this paper, which tell us little more than that there is large state-to-state variation in the numbers they are crunching.