I was completely unaware of this, but apparently cases of academic misconduct, as evidenced by the retraction of papers from journals and other publication venues, have been on the rise.

According to the article, retractions from journals in the PubMed database have increased by a factor of 60 over ten years, from 3 in 2000 to 180 in 2009. That’s insane!

What’s going on, then? I suspect one or more of the following:

  • Worsening of the academic rat-race – the ever-increasing focus on publishing metrics in academia pressures researchers to publish, ideally in high-impact journals. Some may be willing to make up data in order to do so.
  • The rush to compete – Given the prestige attached to publishing first and the role of this prestige in securing grant funding, researchers may be taking shortcuts, overlooking shortcomings in their study designs, or failing to spend enough time verifying their results and data.
  • Commercial involvement – I can’t cite numbers, but my impression is that commercial research funding has increased over the last decade or so, particularly in high-stakes fields such as pharmaceuticals. Commercial funding is associated with bias and poor research practice.
  • Increased detection – It seems likely that today’s increased reliance on information technologies and shared repositories of data and publications would make it easier to detect fraudulent papers. Similarly, since communication is much easier today than it was even 10 years ago, it may be easier for editors to unearth patterns of fraudulent work.

One caveat: this result derives from PubMed, which primarily includes medical and pharmaceutical research, as well as some auxiliary technology and basic science. Does this pattern of misconduct apply in other fields, or is it particular to medicine?

Improved review processes are necessary, but it’s not clear how quickly change will come. Problems with peer review have been acknowledged for more than 20 years, with a report from 1990 showing that only 8% of members of the Scientific Research Society considered it effective as is. Despite this, in most venues, peer review functions the same way it always has.

There may be some movement, however. CHI, for example, includes the alt.chi track in which research is reviewed in a public forum before selection by a jury, which seems to offer a good compromise between open and free criticism, and peer-driven moderation. There’s also a special conference coming up entitled “Evaluating Research and Peer Review – Problems and Possible Solutions” – it was the Call for Papers for this that got me writing this post.

From my perspective, an ideal research review system would at least:

  • Expose all research data and methodology to unlimited, non-anonymous, public scrutiny. Special rules might be employed to protect commercially sensitive material, but there needs to be a balance.
  • Allow meta-moderation. That is, allow the critique of critiques. To do this, reviewers need to have persistent identities, and signifiers such as the credentials and review history of each user need to be available.
  • Integrate review work into the research contribution of academics. As it is, peer review work is primarily voluntary, and the level of commitment of reviewers is thus presumably highly variable.
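To make the meta-moderation idea concrete, here's a minimal Python sketch of what the underlying data model might look like. Everything here is hypothetical illustration, not any existing system's API: reviewers have persistent identities with visible credentials and review histories, and a critique can target either a paper or another critique.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Reviewer:
    """A persistent, non-anonymous reviewer identity."""
    name: str
    credentials: str
    reviews_written: int = 0

@dataclass
class Review:
    """A public critique of a paper -- or, when parent is set, of another review."""
    author: Reviewer
    target: str                          # paper ID being critiqued
    body: str
    parent: Optional["Review"] = None    # set for meta-reviews (critiques of critiques)
    replies: list = field(default_factory=list)

    def __post_init__(self):
        # Maintain the signifiers the post calls for: a visible review history.
        self.author.reviews_written += 1
        if self.parent is not None:
            self.parent.replies.append(self)

def review_history(reviewer, reviews):
    """All critiques (including meta-critiques) a reviewer has posted."""
    return [r for r in reviews if r.author is reviewer]

# Usage: one review of a paper, and a meta-review critiquing it.
alice = Reviewer("Alice", "Prof. of Pharmacology")
bob = Reviewer("Bob", "Postdoc, Biostatistics")
r1 = Review(alice, "paper-42", "The control group is too small.")
r2 = Review(bob, "paper-42", "This criticism ignores the power analysis.", parent=r1)

assert r1.replies == [r2]
assert alice.reviews_written == 1
```

The point of the sketch is that meta-moderation falls out almost for free once reviews are first-class objects tied to persistent identities; the hard problems are social (incentives, protecting junior reviewers), not structural.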

What else should a review system incorporate? How could such a system fail? Why might it not be adopted?

Update 2012-05-09: It’s not clear whether the aforementioned study relied on the same set of journals each year, or whether they used the full PubMed database each year. It’s probable that the PubMed mix has changed over the decade; for example, the NIH’s public access policy, which requires that publicly funded research be deposited in PubMed Central, was trialed in 2005 and made mandatory in 2008.



I spent Saturday at the HCI for Peace workshop representing the Voices from the Rwandan Tribunal project. It was fairly informal, with only 10 participants, which made it easy for everyone to participate in the discussion. Several participants presented projects they’ve worked on, including:

  • Lahiru Jayatilaka, a Sri Lankan PhD student from Stanford, who presented his work on improving land mine detection systems. His tool tracks the detector tip and lets the operator mark detection points, which are then displayed along with the detector’s path, making it easier to determine the shape of a detected object. In trials with the US Army, he also found that the tool significantly aided training by making it easier for trainers to see the patterns used by students. He’s looking for funding and collaborators to help him bring the tool to maturity so he can start to spread it to NGOs working in land mine detection and removal around the world.
  • Janak Bhimani, a TV director and producer pursuing a PhD at the Keio Media Design lab, who presented “Lenses + Landscapes”, a documentary he produced collaboratively with a small group of online volunteers about the aftermath of last year’s Tohoku earthquake in Japan. Based on that experience, he’s become interested in tools for greater online collaboration in documentary making and, in particular, in documentaries that evolve over time: what he calls the ‘growing documentary’.
  • John Thomas, a CHI veteran from IBM Research, who presented his work on building a library of patterns for socio-technical systems that can avoid, de-escalate, or assist in the resolution of conflicts. These focused more on a personal level than a societal one, but the general ideas hold at larger scales, and furthermore, large conflicts often emerge from small disagreements. He ran through several examples; here are a couple that struck me:
    • Who speaks for Wolf? – Based on a Native American story, this pattern suggests that in any decision making activity where one or more stake-holders are absent, it is important to identify that fact, and determine whether someone else at the meeting is able to speak with sufficient authority and knowledge on behalf of that stake-holder. By doing this, misunderstandings and conflicts can be avoided.
    • The Rule of Six – Whenever one is forced to make an assumption or interpretation because of limited or biased knowledge, one should attempt to come up with at least five other possible explanations before accepting the first (and probably easiest) one. This is particularly true with regard to negative assumptions, and is basically a method for giving the benefit of the doubt.
  • Evangelos Kapros, a Greek PhD student at Trinity College Dublin, who presented and discussed challenges in information visualization and data management with regard to understanding flows of immigration and other critical demographic processes that sometimes lead to conflict.

Also in attendance were Juan Pablo Hourcade, an Assistant Professor at the University of Iowa and organizer of the event; Lisa Nathan, an Assistant Professor at the University of British Columbia, co-PI on the Rwandan project, and a former student at UW; Daniela Busse, from Samsung Research; Daisy Yoo, a student and colleague of mine at UW also working on the Rwandan project; and Kelsey Huebner, an undergraduate assisting Juan Pablo with running the workshop. Neema Moraveji, director of the Calming Technology Lab at Stanford, was not present, but gave a short presentation on his work in ‘calming technology’ via Skype.

As well as individual project presentations, we also discussed the place of HCI in peace-making, peace-keeping, and harmony. A number of points and questions were salient:

  • The complexity of the term ‘peace’ is challenging, and requires much thought. We seemed to be conceptualizing peace as more than just the absence of war, but as a general promotion of peacefulness, including the avoidance of conflict, the promotion of harmony and calmness in life, and efforts to restore peace and order after events such as natural disasters.
  • The term peace may be over-broad to the point of being meaningless – by attempting to create a movement of HCI for Peace, are we mirroring the beauty queen who naively says she wants to bring about World Peace with her reign?
  • What should the research agenda of ‘HCI for Peace’ look like? Suggested approaches included creating tools like Ushahidi that aid others in peace-seeking efforts, working in the field to create new technical solutions that directly foster peace, and observing and understanding the use of technology by others in working for peace.
  • Who are logical ‘allies’ in this work – what other academics and disciplines should we look to for collaboration?

In the time available, it was impossible to come to any detailed consensus on these issues, and it was generally agreed that further thought and development would be necessary. Interactions magazine has offered us a spot as the cover article in an issue later this year, and we’re hoping that this will give us an opportunity to address these concerns in more depth.

All up, a fascinating and rewarding way to spend a day. Not to mention an excellent lunch and tasty pizza and conversation at the end of the day!

