
The “Funding Effect” in Sponsored Research

- July 23, 2008

The idea that how a survey question is asked affects – sometimes decisively – the way it is answered is nothing new. (The best source on issues of question-framing is still Schuman and Presser’s Questions and Answers in Attitude Surveys.) The very same dynamic holds in the conduct of research itself. Here’s a good case in point.

bq. Wal-Mart and Toys R Us announced this spring that they will stop selling plastic baby bottles, food containers and other products that contain a chemical that can leach into foods and beverages. Even low doses of the chemical (bisphenol A, or BPA) are linked to prostate and mammary-gland changes in laboratory animals that were exposed as fetuses and infants. The big retailers are responding to the fears of parents, and Congress is considering measures to ban the chemical.

bq. But is there enough evidence of harmful health effects on humans? One of the eyebrow-raising statistics about the BPA studies is the stark divergence in results, depending on who funded them. More than 90 percent of the 100-plus government-funded studies performed by independent scientists found health effects from low doses of BPA, while none of the fewer than two dozen chemical-industry-funded studies did.
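The divergence Michaels cites is stark enough that a back-of-the-envelope calculation makes the point. The exact counts below are hypothetical stand-ins for his rough figures ("more than 90 percent" of "100-plus" independent studies, "fewer than two dozen" industry studies):

```python
import math  # not strictly needed; exponentiation suffices

# Hypothetical stand-ins for Michaels's rough figures:
positive_rate = 0.92   # assumed share of independent studies finding effects
n_industry = 20        # assumed count of industry-funded studies

# If industry-funded studies were drawing from the same underlying
# positive rate, the probability that all of them came back negative
# would be (1 - positive_rate) raised to their count.
p_all_negative = (1 - positive_rate) ** n_industry
print(f"{p_all_negative:.2e}")  # on the order of 1e-22
```

Even with generous rounding of the figures, the chance that the zero-for-twenty industry record is a statistical fluke is vanishingly small, which is why attention turns to how the studies were designed rather than to sampling noise.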

That’s from an article by David Michaels, an epidemiologist who teaches environmental health policy at, ahem, the George Washington University School of Public Health, in the Washington Post (July 15, 2008).

So why does this “funding effect” consistently emerge? According to Michaels, “Within the scientific community, there is little debate about the existence of the funding effect, but the mechanism through which it plays out has been a surprise.”

The simplest explanation, of course, is that companies hire researchers to produce shoddy studies that whitewash their products. But as Michaels notes, “Such scientific malpractice does happen, but close examination of the manufacturers’ studies showed that their quality was usually at least as good as, and often better than, studies that were not funded by drug companies.”

Rather, it turns out that industry researchers rely on various “tricks of the trade,” including “testing your drug against a treatment that either does not work or does not work very well; testing your drug against too low or too high a dose of the comparison drug because this will make your drug appear more effective or less toxic; publishing the results of a single trial many times in different forms to make it appear that multiple studies reached the same conclusions; and publishing only those studies, or even parts of studies, that are favorable to your drug, and burying the rest.”

In short, the problem lies more in the framing of the research question and the presentation of the results than it does in the conduct of the research per se. “As long as sponsors of a study have a stake in the conclusions,” Michaels argues, “these conclusions are inevitably suspect, no matter how distinguished the scientist.” And the solution, he continues, lies in de-linking sponsorship and research.

bq. One model is the Health Effects Institute, a research group set up by the Environmental Protection Agency and manufacturers. HEI has an independent governing structure … [and] conducts studies paid for by corporations, but its researchers are sufficiently insulated from the sponsors that their results are credible.