Academic Misconduct on the Rise

May 8, 2012

I was completely unaware of this, but according to an article I came across recently, cases of academic misconduct, as evidenced by the retraction of papers from journals and other publication venues, are on the rise.

According to the article, retractions from journals in the PubMed database have increased by a factor of 60 over ten years, from 3 in 2000 to 180 in 2009. That’s insane!
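
Just to put that jump in perspective, here’s a quick back-of-the-envelope calculation using only the two figures quoted above; the compound-growth framing is mine, not the article’s:

```python
# Back-of-the-envelope check of the retraction figures quoted above.
start, end = 3, 180        # PubMed retractions in 2000 and 2009
years = 2009 - 2000        # nine yearly steps between the two counts

factor = end / start                       # 60x overall
annual_growth = factor ** (1 / years) - 1  # compound growth per year

print(f"Overall increase: {factor:.0f}x")             # "Overall increase: 60x"
print(f"Implied annual growth: {annual_growth:.0%}")  # roughly 58% per year
```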

What’s going on, then? I suspect one or more of the following:

  • Worsening of the academic rat-race – the ever-increasing focus on publishing metrics in academia pressures researchers to publish, ideally in high-impact journals. Some may be willing to make up data in order to do so.
  • The rush to compete – Given the prestige attached to publishing first and the role of this prestige in securing grant funding, researchers may be taking shortcuts, overlooking shortcomings in their study designs, or failing to spend enough time verifying their results and data.
  • Commercial involvement – I can’t cite numbers, but my impression is that commercial research funding has increased over the last decade or so, particularly in high-stakes fields such as pharmaceuticals. Commercial funding has been linked to bias and poor research practice.
  • Increased detection – It seems likely that today’s increased reliance on information technologies and shared repositories of data and publications would make it easier to detect fraudulent papers. Similarly, since communication is much easier today than it was even 10 years ago, it may be easier for editors to unearth patterns of fraudulent work.

One caveat: this result derives from PubMed, which primarily indexes medical and pharmaceutical research, along with some related technology and basic science. Does this pattern of misconduct hold in other fields, or is it particular to medicine?

Improved review processes are necessary, but it’s not clear how quickly change will come. Problems with peer review have been acknowledged for more than 20 years, with a report from 1990 showing that only 8% of members of the Scientific Research Society considered it effective as is. Despite this, in most venues, peer review functions the same way it always has.

There may be some movement, however. CHI, for example, includes the alt.chi track, in which research is reviewed in a public forum before being selected by a jury; this seems a good compromise between open, free-form criticism and peer-driven moderation. There’s also a special conference coming up entitled “Evaluating Research and Peer Review – Problems and Possible Solutions” – it was the Call for Papers for that event that got me writing this post.

From my perspective, an ideal research review system would at least:

  • Expose all research data and methodology to unlimited, non-anonymous, public scrutiny. Special rules might be employed to protect commercially sensitive material, but there needs to be a balance between protection and transparency.
  • Allow meta-moderation, that is, the critique of critiques. To do this, reviewers need persistent identities, and signifiers such as each user’s credentials and review history need to be available. (A rough sketch of what such a data model might look like follows this list.)
  • Integrate review work into the research contribution of academics. As it is, peer review work is primarily voluntary, and the level of commitment of reviewers is thus presumably highly variable.

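To make the meta-moderation requirement concrete, here’s a minimal sketch of the entities involved. All the names here (Reviewer, Review, MetaReview) are hypothetical, and this is just one way they could hang together; the point is that critiques and critiques-of-critiques both attach to persistent reviewer identities:

```python
from __future__ import annotations

from dataclasses import dataclass, field

@dataclass
class Reviewer:
    """A persistent, non-anonymous identity with visible credentials."""
    name: str
    credentials: list[str] = field(default_factory=list)
    review_history: list[Review] = field(default_factory=list)

@dataclass
class Review:
    """A public critique of a submission, attributed to its author."""
    author: Reviewer
    submission_id: str
    body: str
    meta_reviews: list[MetaReview] = field(default_factory=list)

@dataclass
class MetaReview:
    """A critique of a critique: the meta-moderation layer."""
    author: Reviewer
    target: Review
    body: str
```

Because every Review and MetaReview carries an author, a reader can weigh any critique against that reviewer’s credentials and track record.
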
What else should a review system incorporate? How could such a system fail? Why might it not be adopted?

Update 2012-05-09: It’s not clear whether the aforementioned study relied on the same set of journals each year or used the full PubMed database each year. It’s probable that the PubMed mix has changed over the decade; for example, the NIH’s public access policy, which requires that publicly funded research be deposited in PubMed Central, was introduced as a voluntary policy in 2005 and made mandatory in 2008.
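
If the database itself grew over that period, raw counts will overstate the trend. A toy calculation shows how normalizing against the number of indexed articles changes the picture; the retraction counts are the ones quoted above, but the database sizes here are entirely made up for illustration:

```python
# Retraction counts are from the article quoted above; the indexed-article
# totals are invented purely to illustrate the normalization step.
retractions = {2000: 3, 2009: 180}
indexed_articles = {2000: 500_000, 2009: 900_000}  # hypothetical sizes

for year in sorted(retractions):
    rate = retractions[year] / indexed_articles[year] * 100_000
    print(f"{year}: {rate:.1f} retractions per 100,000 indexed articles")

# Even against an 80% larger database, the rate still rises ~33x here,
# so growth in the base alone seems unlikely to explain a 60x jump.
```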