I was intending to post about Andrew Oswald’s recent analysis of whether journal editors discriminate against certain types of authors or articles. Andy Eggers over at the Social Science Statistics Blog beat me to the punch, here, but I still have some comments of my own, based on several decades of submitting manuscripts and having them accepted or rejected, being a reviewer for more journals than I care to remember, being on the editorial board of several journals, being deputy editor of Social Science Quarterly, and being the editor of American Politics Quarterly and later the editor of the American Political Science Review.
Drawing on that experience, my point isn’t that Oswald’s conclusion should be rejected out of hand. Maybe he’s right and maybe he’s wrong. But nothing in his analysis persuades me that he’s right, nor could anything in his analysis have persuaded me that he’s wrong.
Without going into all the gory details, Oswald’s analysis basically consists of juxtaposing a measure of the true quality of an article as determined in the marketplace of ideas and a measure of its quality as judged by the editor of the journal in which it got published. When Oswald spots a discrepancy between the two measures, he — gotcha! — concludes that the editor (to use a George W. Bush-type construction) misunderestimated or misoverestimated an article’s true quality. If the two measurements are closely aligned, well, then the editor must not be biased.
Fine, as a general approach. But there’s way too much slippage between Oswald’s two focal concepts and their empirical indicators to warrant any such conclusion.
The basic problem is that where an article stands on both of these dimensions — its true quality and the editor’s assessment of its quality — is unknown and unknowable except by God. Oswald’s surrogate indicators aren’t heaven-sent, and they fall well short of the mark.
As a measure of an article’s true merit, Oswald uses the number of times it subsequently was cited in the research literature. That’s sort of okay, but there’s a large scholarly literature on using citation frequencies as measures of research quality/importance/whatever, and the general lesson I’ve taken away from that literature is that this is a pretty iffy thing to do.
But it’s done all the time, so I won’t hold Oswald personally responsible for using an indicator about which I have qualms; in this regard he was just following a standard (though questionable) practice in the research literature.
What I will hold Oswald responsible for is introducing a flawed new indicator into the public domain. That’s his measure of the editor’s assessment of an article’s worthiness, which consists of the order in which it appeared in the journal. So if an article were, say, the second one in the table of contents, it must have been because the editor considered it superior to the article that he or she slotted into, say, the fifth position.
That’s a novel measurement approach. Perhaps it’s novel because Oswald was the first one to think of it. Or perhaps it’s novel because others had thought of it before but discarded it.
In any event, I see two flaws in this measurement approach. The first one is right off the shelf of pre-cooked reviews of any manuscript that focuses solely on accepted articles: In terms of what Oswald is trying to find out — whether editors are predisposed in favor of certain types of submissions and against others — he has a built-in sample selection problem because his analysis is confined to papers that the editor, after all, did accept. What remains for analysis, then, are articles whose merit, from the editor’s perspective, must have varied within narrow limits, for these are papers that the editor decided to accept. Acceptance rates at our leading journals run 10% or less, and I don’t know of any editors (certainly I was never blessed with finding one for any paper I submitted to a journal!) who are willing to publish papers that they consider real stinkeroos. (And I’ve known very few who are brave enough to override rave reviews.)
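To see how much damage that selection can do, here is a toy simulation — my own illustration, not anything in Oswald’s paper. It assumes a simple hypothetical model in which the editor sees a noisy signal of a paper’s true quality and accepts only the top 10% of submissions by that signal; the numbers (quality distributions, noise levels, the 10% cutoff) are all made up for the sketch.

```python
# Toy simulation of the selection problem: conditioning on acceptance
# restricts the range of editor-perceived quality and attenuates the
# observed relationship between the editor's assessment and later citations.
# (Hypothetical model for illustration only.)
import random

random.seed(42)

N = 100_000
ACCEPT_RATE = 0.10  # roughly the acceptance rate at leading journals

# Each paper has a true quality; the editor sees a noisy version of it;
# later citations are another noisy function of the same true quality.
papers = []
for _ in range(N):
    true_quality = random.gauss(0, 1)
    editor_view = true_quality + random.gauss(0, 1)
    citations = true_quality + random.gauss(0, 1)
    papers.append((true_quality, editor_view, citations))

def corr(xs, ys):
    """Pearson correlation, computed by hand to keep this self-contained."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# The editor accepts the top 10% by his or her own (noisy) assessment.
papers.sort(key=lambda p: p[1], reverse=True)
accepted = papers[: int(N * ACCEPT_RATE)]

r_all = corr([p[1] for p in papers], [p[2] for p in papers])
r_accepted = corr([p[1] for p in accepted], [p[2] for p in accepted])

print(f"editor assessment vs. citations, all submissions: r = {r_all:.2f}")
print(f"editor assessment vs. citations, accepted only:   r = {r_accepted:.2f}")
```

Among all submissions the editor’s assessment tracks citations reasonably well, but among the accepted 10% the correlation shrinks sharply, simply because the accepted papers’ merit, in the editor’s eyes, varies within narrow limits. Any “editor vs. citations” comparison run only on published articles starts from that handicap.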
Much more importantly, I’ve never known an editor to move articles up or down in the order in which they appear in an issue according to his or her estimate of their relative worth. One major exception (being social scientists, we’re entitled to exceptions): There was a time, at least in political science, when being the “lead” article probably did mean that the editor considered it the best one on offer in that issue; that may still be true for some journals. Allowing for that exception, though, I will simply assert that the practice on which Oswald’s measure is based is unknown to me and that if it were at all widespread in the journals that I know something about, I think I would know something about it. Some of my own most favorably reviewed articles have ended up last in line in a table of contents. Some editors I’ve known just had the articles printed in the chronological order in which they were accepted — first in, first out.

At the APSR, I cared about what our cover looked like and wanted to have a visually striking cover on each issue. (Some readers told me that they hated this; others said they liked it, but maybe they were just telling me what they thought I wanted to hear.) So I gave consideration, in deciding which article to run as our “lead,” to whether we would be able to come up with a nice cover graphic to match its content. More importantly, I tried to group articles thematically, in order to lend coherence to my “From the Editor” introduction to the articles in each issue: If, say, “decision making” was an underlying theme in several articles that were slated for a particular issue, I tried to place those articles adjacently. At that point, all the articles were in the same category for me: awaiting publication, and not differentiated according to some estimate of my own about how good they were.
(I should also note that Andy Eggers expresses some qualms about Andrew Oswald’s reliance on citation data to get at an article’s underlying quality, just as I have done. However, Eggers characterizes Oswald’s use of the article-order measure as “neat.” That’s not the adjective I would have chosen.)
I’m not suggesting that my editorial idiosyncrasies were the same as other editors’ idiosyncrasies, and I very much doubt that they were. But I’m certainly suggesting that articles get put in the order they get put in for many, many reasons that have nothing at all to do with an editor’s assessment of their merit, relative to the merit of the other articles in an issue. And thus I’m willing to award Oswald a merit badge for trying something different, but I think that in this instance, to cite an old saw, what’s new isn’t good and what’s good isn’t new.