A researcher created a set of deliberately bogus papers and sent them to a number of journals. The majority of the journals approved the papers despite their obvious flaws.
The paper can be freely accessed and downloaded: Who’s Afraid of Peer Review?
Ever so subtly, open access journals are made to look like the villains of the story. The problem is that only open access journals were tested, so no comparison is possible. That does not mean there aren't OA journals that are basically scams, many of them linked to "predatory publishers" as defined by Jeffrey Beall, who is cited in the article, though without links to Beall's list.
A commenter wrote a tongue-in-cheek blog post titled I confess, I wrote the Arsenic DNA paper to expose flaws in peer-review at subscription based journals, in which he points to the egg on the face of Science (which published the results of the sting operation), given that it had previously published equally questionable, though not deliberately bogus, articles. He points out (correctly, in my opinion) that the problem does not lie in open access journals; he blames the peer review system itself, a diagnosis with which I disagree.
Finally, PZ Myers, often cited here, also commented on the affair in his blog: Stones, glass houses, etc., likewise pointing to deeply flawed articles previously published in Science (coincidentally, mentioning the (in)famous "arsenic DNA" blunder that also appears in the title of the previously cited blog post). Myers ends his post with a conclusion with which I mostly agree: "I agree that there is a serious problem in science publishing. But the problem isn't open-access: it's an overproliferation of science journals, a too-frequent lack of rigor in review, and a science community that generates least-publishable-units by the machine-like application of routine protocols in boring experiments."
I don’t deny that predatory open access publishers, flawed peer review and “me-too” science are all important problems. What all of the above misses, in my opinion, is the bigger picture: the economic forces that shape scientific publishing in many ways, and which are slowly degrading it. The ever-increasing “publish or perish” pressure, on the one hand, and the economic interests of the large commercial publishers that dominate the market, on the other, are the distal determinants of the situation pointed out by Myers. This is becoming increasingly problematic for science itself; it degrades the signal-to-noise ratio of scientific communication and incentivizes all sorts of unethical shortcuts (salami publishing, self-plagiarism, “honorary” authorship) on the part of authors, not to mention the publication of growing numbers of reiterative, mediocre papers that are basically more of the same.
I’ve written a number of things about this:
– Public health and the knowledge industry
If we don’t correctly understand what the issues are, the interventions may be ineffective, or may even worsen the problem. In the collection of papers published in the special issue of Science that included the report on the sting operation, there is a paper proposing yet another bibliometric indicator as a solution to some unnamed problem. After the San Francisco Declaration on Research Assessment (DORA), I find it ludicrous that citation counts are still being pushed as a way to evaluate science.
Maybe this is the time to seriously consider the slow science alternative.