I was completely unaware of this, but apparently cases of academic misconduct, as evidenced by the retraction of papers from journals and other publication venues, have been on the rise.
According to the article, retractions from journals in the PubMed database have increased by a factor of 60 over ten years, from 3 in 2000 to 180 in 2009. That’s insane!
What’s going on, then? I suspect one or more of the following:
One caveat: this result derives from PubMed, which primarily includes medical and pharmaceutical research, as well as some auxiliary technology and basic science. Does this pattern of misconduct apply in other fields, or is it particular to medicine?
Improved review processes are necessary, but it’s not clear how quickly change will come. Problems with peer review have been acknowledged for more than 20 years, with a report from 1990 showing that only 8% of members of the Scientific Research Society considered it effective as is. Despite this, in most venues, peer review functions the same way it always has.
There may be some movement, however. CHI, for example, includes the alt.chi track in which research is reviewed in a public forum before selection by a jury, which seems to offer a good compromise between open and free criticism, and peer-driven moderation. There’s also a special conference coming up entitled “Evaluating Research and Peer Review – Problems and Possible Solutions” – it was the Call for Papers for this that got me writing this post.
From my perspective, an ideal research review system would at least:
What else should a review system incorporate? How could such a system fail? Why might it not be adopted?
Update 2012-05-09: It’s not clear whether the aforementioned study relied on the same set of journals each year, or whether they used the full PubMed database each year. It’s probable that the PubMed mix has changed over the decade; for example, the NIH’s public access policy requiring publicly funded research be placed into PubMed was trialed in 2005, and made mandatory in 2008.
I spent Saturday at the HCI for Peace workshop representing the Voices from the Rwandan Tribunal project. It was fairly informal, with only 10 participants, which made it easy for everyone to participate in the discussion. Several participants presented projects they’ve worked on, including:
Also in attendance were Juan Pablo Hourcade, an Assistant Professor at the University of Iowa and organizer of the event; Lisa Nathan, an Assistant Professor at the University of British Columbia, co-PI on the Rwandan project, and a former student at UW; Daniela Busse, from Samsung Research; Daisy Yoo, a student and colleague of mine at UW also working on the Rwandan project; and Kelsey Huebner, an undergraduate assisting Juan Pablo with running the workshop. Neema Moraveji, director of the Calming Technology Lab at Stanford, was not present, but gave a short presentation on his work in ‘calming technology’ via Skype.
As well as individual project presentations, we also discussed the place of HCI in peace-making, peace-keeping, and harmony. A number of points and questions were salient:
In the time available, it was impossible to come to any detailed consensus on these issues, and it was generally agreed that further thought and development would be necessary. Interactions magazine has offered us a spot as the cover article in an issue later this year, and we’re hoping that this will give us an opportunity to address these concerns in more depth.
All up, a fascinating and rewarding way to spend a day. Not to mention an excellent lunch and tasty pizza and conversation at the end of the day!
Another quick observation about motivation:
I was lying in bed just now, tired and thinking of sleep. I picked up the book I’m currently reading (the 2008 Year’s Best Fantasy & Horror anthology, if anyone cares), thumbed to the next story, but barely made it to the end of the second paragraph before giving up.
I was about to get up and turn off the light, but for no apparent reason, I picked up a random paper that was sitting on the pile of books next to my bed. I didn’t intend to actually read it – after all, if high quality short fiction can’t hold my attention, dry academic writing certainly won’t do any better – but for some reason I ended up flicking my eyes through the abstract. I wasn’t taking much in, but for some reason I stuck with it long enough to encounter some survey results that were quite interesting, enough so that I felt like writing some quick notes about them in Evernote. So I did.
But then something interesting happened. The motivation I had for my original note-taking gave me a little momentum, and instead of turning the laptop off and going to bed like I originally intended, I suddenly felt like doing something useful, in this case, writing a paper review that I’ve been putting off for about a week, and then writing this.
Why’s this interesting? I’m intrigued by the idea of motivation having momentum. That by getting excited about some small, easy task, I can carry that over to feeling motivated about some larger task. Thinking back, this is definitely something that’s happened before, and that I think I sometimes take advantage of, but never explicitly. I should try to employ this more often.
Actually, thinking on it, this relates to some great advice I got once about writing. If you’ve got some big writing task that’s really hard to start on, just commit to sitting down in front of the computer, loading up the word processor, and looking at it. You don’t have to do anything more than that. More often than not, you’ll want to write a few words down, and sometimes, you’ll get sucked in. Here, like above, the momentum of defeating a small task carries you into defeating a larger one. I think I’ll have to write more about this in the context of writing some other time.
Oh, and by the way, the paper was about the sociology of strategy board games. You can bet I’ll write something about it some other time.
Came across this in an issue of IEEE Computer today. It’s a simple conceptual model from the 1960s by a guy called Bruce Tuckman of the stages small groups go through; groups such as committees, work groups, and project teams. The basic stages seem obvious, but, as with many models of human behaviour, the value comes from their being made explicit such that they can be recognized, acknowledged, and facilitated appropriately.
Here’s what the original article says (my emphasis):
Groups initially concern themselves with orientation accomplished primarily through testing. Such testing serves to identify the boundaries of both interpersonal and task behaviors. Coincident with testing in the interpersonal realm is the establishment of dependency relationships with leaders, other group members, or pre‑existing standards. It may be said that orientation, testing and dependence constitute the group process of forming.
The second point in the sequence is characterized by conflict and polarization around interpersonal issues, with concomitant emotional responding in the task sphere. These behaviors serve as resistance to group influence and task requirements and may be labeled as storming.
Resistance is overcome in the third stage in which in-group feeling and cohesiveness develop, new standards evolve, and new roles are adopted. In the task realm, intimate, personal opinions are expressed. Thus, we have the stage of norming.
Finally, the group attains the fourth and final stage in which interpersonal structure becomes the tool of task activities. Roles become flexible and functional, and group energy is channeled into the task. Structural issues have been resolved, and structure can now become supportive of task performance. This stage can be labeled as performing.
So, basically, small groups go through the following phases:
I don’t know about you, but these stages certainly feel familiar. I don’t think it’s useful to claim that these are distinct and clear stages, however. Rather, I think they’re best thought of as overlapping phases describing a ‘natural’ progression. With that in mind, then, here’s a bunch of ways you could use this model:
Firstly, with a model at hand, it’s easy to see when behaviour deviates from a ‘normal’ pattern. This isn’t intrinsically bad, but, if unexplained, may be indicative of certain problems within a group.
Secondly, individuals and subgroups might not necessarily move through this progression at a uniform rate – if part of the group is still stuck storming, it makes it hard for the rest of the group to begin norming. In such situations, a skilled group leader might be able to gently nudge such individuals by, for example, allowing them other outlets to express themselves.
Thirdly, it seems like these stages aren’t just what normally occurs, but also what needs to occur for a group to function. It’s probably important to be aware of this when forming expectations of a group’s performance.
Fourthly, it’s always nice to have a vocabulary to describe things like this, particularly given that the elements of group behaviour are normally quite implicit.
Edit: Lastly, it’s interesting to think about the emotional conflicts and outbursts that sometimes occur and realize that they’re actually just part of the process rather than some intrinsically negative distraction.
Imagine a search engine that, instead of just doing text matching, attempts to parse your statement into questions it can answer, then provides you with as many of those answers as it can. Imagine a search engine that can deal with numerical relationships and analysis. Imagine a search engine that’s tailored towards returning facts and knowledge instead of websites.
Now, go watch the Wolfram Alpha demo video.
Next, imagine if you had analytical tools of this nature at your fingertips at all times, and were able to project and share them on surfaces using some form of augmented reality. Finally, imagine what this could do to intelligent argument, discussion, design, and political discourse.
Quite a step, huh?
Today I read a paper from 1973 by Horst Rittel and Melvin Webber about “wicked problems” – problems that are intrinsically difficult or impossible to solve in the sense that one can solve a crossword or mathematical proof, or win a game of chess. Wicked problems abound in policy questions and design, and it’s interesting to think about what differentiates them from these other “tame problems”.
The paper defines a wicked problem as one with most of the characteristics in the list below. Bear with me, because being able to spot a wicked problem and thus infer the consequences of that fact is quite a powerful tool for thinking about decision making in pretty much any context. Once you’ve got a clear idea of the concept, you can start seeing them everywhere – in policy such as city planning, in international conflict, project management, personal time management, and even in family Christmases. They’re everywhere, and, unlike tame problems, they’re impossible to solve absolutely, though sometimes they can be resolved partially with relative ease.
This list constitutes a polythetic or cluster definition; that is, problems must exhibit some, but not necessarily all, of the criteria to be considered wicked. Furthermore, different problems possess these characteristics to greater or lesser extents, implying a continuum of problem wickedness. Polythetic definitions are normally used for complex concepts in philosophy, and the fact that one is needed here suggests that wicked problems are not as clear or natural a category as the paper implies.
That said, however, the category of tame problems is much clearer: it consists of problems with stopping criteria, clear correctness of outcomes, and limited sets of solution actions with clear results. It seems, then, that wicked problems are perhaps best understood as the set of all problems that are not tame.
One implication of the wickedness continuum is that wicked problems could be made less wicked if we understood the factors that make them wicked. Unfortunately, however, the list of criteria above is primarily descriptive, not explanatory, and so only of use as a starting point. On Tuesday, I’ll be participating in a further discussion on this topic in which I’d like to explore explanations of what makes a problem wicked. This would, I think, give a better definition as well as some ideas for how wickedness might be reduced. Below are some candidate explanations:
There’s one last point I want to make. Being written in 1973, the paper gives the impression that the difference between wicked and tame problems maps fairly clearly to the difference between abstract, mathematical or game problems and real political and social problems. It’s interesting to note that in recent years, the term wicked problem has been used to describe problems in software engineering and design that exhibit many of the same properties of the social problems outlined in the paper, demonstrating that it is not abstraction itself that makes a problem tame. It would also be interesting, I think, to look at some of the strategies employed by engineering teams to deal with wicked software problems, and work out if they could be applied to wicked problems in a social or political context. Food for thought, anyway.
This week, I decided to make sure that Sunday was a day where I wouldn’t have to do any work so that I could actually relax.
Normally, I’m really bad at relaxing. I’ll spend most days stressing about work, and this saps my motivation such that I get less work done. In turn, I stress more, and my motivation lowers even further. And so on. This is neither pleasant nor productive, and is one of the central stupid things about the way in which I function. So, I’m trying to change it.
The plan is to take one day a week where I don’t stress about work. What’s important is not that I don’t work, but that I don’t feel I have to work. Nor is it even specific to work – I want it to be a day where I don’t feel I have to do anything – where “I just don’t feel like it” is a valid excuse.
The idea is that, by doing this, I’ll be less stressed through the rest of the week, have a little time for reflection, and generally be happier and more motivated. It’s really just the third (or fourth) commandment taken as practical advice – “Remember the Sabbath and keep it holy”.
This morning, then, I woke up, had breakfast, listened to music for a while, and generally just chilled out. But then, something interesting happened – I thought to myself, “Y’know, maybe I might just read that paper I’ve been avoiding”, then ended up reading it through thoroughly, taking notes and everything. By way of contrast, during every previous attempt to read it, my mind was constantly looking for a way out, and no distraction was too small. Today, though, it didn’t even feel like work.
The conclusion I want to draw, then, is that, for me at least, motivation has a lot to do with choice. If I choose to do something without coercion, then I feel better about it, do better at it, and generally get it done faster. I’ve heard this idea before and I vaguely recall seeing it mentioned in a paper on motivational theory I read a while back, but I’ve never seriously applied it to myself before.
So, for the next month or so, I’m going to try to treat Sunday as my own personal holy day, even though I’m not religious.
As an aside, I wonder if this is partly why I find task lists so useful for motivating myself. That is, I wonder if part of their value is the fact that they present me with choices of what task I should do, improving my sense of ownership over the decision to do them, and therefore my motivation.
In my last post, I talked about the meaninglessness of taste as a way of describing our preferences. In this post, I’m going to sketch out a scheme that I tend to use in describing preferences.
There are many ways in which a given piece of music might be appreciated. Examples include virtuosity, technical characteristics, emotional reaction, nostalgia, or even lyrics. Some of our appreciation is based on the objective properties of the music; other times on our subjective interpretation, and still others on partially objective criteria; that is, criteria that are objective within a group or other frame of reference such as a particular aesthetic.
I like to think of these as distinct modes of appreciation and, for the purposes of this discussion, will call them facets. Thinking about appreciation in terms of facets brings out the following observations:
That’s all for now – in my next post on this topic, I’ll build a rough taxonomy of different facet types.
You might have heard about the recent kerfuffle over the Facebook terms of service. If you didn’t, this brief summary from Rocketboom will get you up to speed.
Mostly, it was about whether or not you could revoke their license to use and distribute your material by deleting your account. Their argument was that they couldn’t practically delete material from their backups and, if you’d sent things to someone else, they weren’t willing to delete that material if you deleted your account. These aren’t unreasonable concerns, but their approach was to require perpetual licenses for all material and all uses. The change was far broader than needed to achieve those goals – more nuance was required in their terms. After lots of wailing and gnashing of teeth, Facebook withdrew their initial set of changes, then, a few days ago, released a re-written set of terms that appear to be much less contentious. In particular, they explicitly state ‘People should own their information’. Hear, hear.
But, that’s not the point of this post. I’m interested in the fact that they’ve chosen to release two documents; one a high-level statement of principles, the other a statement of user rights and responsibilities. Compared to the old terms, which were legalistic and dense, these documents are quite readable. This, I applaud.
It’s not entirely clear, however, which one of them represents the real terms and conditions. Which is legally binding? If there’s a conflict, which takes priority? If they’re not binding, then where are the real terms of service? Most likely, the statement of user rights and responsibilities is meant to be the binding terms and conditions.
Generally, I really like the idea of providing a human-readable license alongside a legally rigorous version, because no one really ever reads terms of service, even though they should, and at least part of the reason is that they’re generally impenetrable. If the relationship between the two is clear and there are no incongruities, then great! Of course, language often isn’t that precise, and you can see how problems might arise.
A great example of this approach is the Creative Commons licenses. When they were launched, much was said about licenses being written in ‘legal code’, in the sense that we already have trained engineers and machines to read and use them, namely lawyers and courts respectively. Let’s run with this, and see how a few concepts from software can be applied.
Software design patterns are just common ways of thinking about and solving particular problems that crop up again and again in various contexts. They might be abstract, and pertain to the way code is written (such as the decorator and singleton patterns), or they might be more concrete features, such as common interface widgets like menus, scroll bars, and drop down boxes. In some form, design patterns probably appear in everything that people design. However, in software, these patterns are explicitly sought out, studied and re-applied. I’m not aware of this being a common practice in law, but I would expect the benefits of clarity, scalability, and re-usability that this brings to software engineering would be really useful in legal engineering.
Like software, legal systems can become horribly complex. In software, a major means of reducing this complexity is to employ modularity – problems are defeated by dividing and conquering. Where possible, software consists not of a single monolith of tightly coupled code, but of hierarchically organized components that interact cohesively. Benefits of this approach are a reduction in complexity, re-usability and portability of parts, and conceptual tools for analyzing and engineering models of complex systems. Various coding paradigms exist, the best known of which is object oriented programming; aspect-oriented programming and programming by contract are other paradigms that facilitate modularity. In law, there’s obviously some modularity (law is broken down into individual acts and codes, which are broken into articles, sections, clauses and so forth). Unlike well-engineered software, however, these components are strictly hierarchical and cannot be taken out of context.
Wrappers are an example of a pattern that allows software engineers to insulate themselves from the idiosyncrasies of a messy component, a third-party driver, or a piece of hardware. Basically, an engineer writes a piece of code that knows all about how to handle the mess, then presents a nice clean interface that other engineers can work with without having to learn about the details of the mess themselves. Imagine if, instead of having to read all of the messy details of a complex license, you could just inquire, through a simple, well-defined interface, if certain conditions were true.
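To make the analogy concrete, here’s a minimal sketch of what a license “wrapper” might look like. Everything here is hypothetical – the class name, the clause names, and the idea that someone has already parsed the legal text into yes/no answers are all assumptions for illustration:

```python
class LicenseWrapper:
    """Presents a clean interface over a messy underlying license.

    The messy legal text is handled once, by whoever builds the clause
    table; everyone else just asks simple questions.
    """

    def __init__(self, clauses):
        # clauses: a dict mapping condition names to booleans, produced
        # (in this fantasy) by someone who actually parsed the license.
        self._clauses = clauses

    def allows(self, condition):
        """Ask whether the license permits a given activity.

        Unknown conditions default to False: the cautious answer.
        """
        return self._clauses.get(condition, False)


# Usage: instead of reading the license, you just query it.
cc_by_nc = LicenseWrapper({
    "redistribute": True,
    "commercial_use": False,
    "derivative_works": True,
})

print(cc_by_nc.allows("commercial_use"))  # False
```

The point isn’t the ten lines of Python; it’s that the cost of understanding the mess is paid once, behind the interface, rather than by every reader.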
Before any of this makes sense, it’s important to consider the difference between informal language (that which we use every day), and formal language, where the meaning of all symbols and elements is defined within a particular lexicon, much as all software languages are. That is, legal writing would need to follow formal rules. One obvious problem here is that law is required to be able to address pretty much any conceivable situation; this is effectively impossible to do with formal language, as you quickly end up with self-referentiality (which then allows self-contradiction à la the Epimenides paradox). If you don’t believe me, read Hofstadter’s ‘Gödel, Escher, Bach’ first, then argue with me. To overcome this problem, then, we need some way of insulating the parts that can be modeled formally (effectively, the parts that are most clear and logical) from the parts that cannot (effectively, everything that’s subjective in some way). The wrapper pattern mentioned above allows for this – tokens can be used to represent subjective elements; these tokens are treated as simple propositions within the formal part of the system, then spat out at the end. Incidentally, this is how propositional logic, and almost all written reasoning, works. However, lest I make this sound easy, I should mention that while, hypothetically, this is possible, it’s unclear whether or not the resulting system of formal law and subjective tokens would be workable.
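The token idea can also be sketched in a few lines. In this toy example (entirely made up, not drawn from any real license), the formal machinery is just boolean logic over propositions, while the truth of each proposition is a subjective judgment supplied from outside the system:

```python
# Formal part: a rule combining propositional tokens with boolean logic.
# Rule (invented for illustration): use is permitted if the work is
# attributed AND (the use is non-commercial OR a fee was paid).
def use_permitted(attributed, non_commercial, fee_paid):
    return attributed and (non_commercial or fee_paid)


# Subjective part: the tokens' truth values are decided outside the
# formal system, by a human judgment call.
tokens = {"attributed": True, "non_commercial": False, "fee_paid": True}

print(use_permitted(**tokens))  # True
```

The formal layer never needs to know what “attributed” actually means in the messy world; it only manipulates the token. All the genuine difficulty hides in deciding the tokens’ values, which is exactly the worry about whether such a system would be workable.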
If, hypothetically, enough of the mechanics of law could be formalized in such a way that it can be treated computationally, all sorts of things become possible. Firstly, there no longer needs to be a legal priesthood whose job it is to parse the complexities of legal argument and language and explain them to the masses – this can be done by software, and learned systematically. Imagine if legal code could be translated through some filter into a human-readable form. Imagine if you could query, using a well-defined interface, whether a body of law has certain properties, or whether certain activities are permitted. Imagine if law was extensible and modular. Imagine if the legal system was simple, accessible, and thin enough that legal disputes could be resolved in a matter of seconds rather than years, through software interfaces rather than the courts.
I don’t know which parts of this are actually plausible, or if it’s even possible. However, it would make a damned interesting research project for someone. I wonder if someone’s already tried…
A while back, there was a post on Coming Anarchy that referenced this fatwa concerning the question of whether a woman could, under Shariah, lawfully refuse her husband’s request for sex if she is tired from having performed her other Islamic duties (such as nightly prayers).
I’m not at all impressed by the conclusion reached – at best, it’s medieval sophistry, at worst, it’s little more than institutionalized rape. That’s not what motivated me to write this post, though.
For those not in the know, a fatwa is basically a ruling on Islamic religious law issued by an imam or other Islamic authority. Since they’re issued by a wide range of individuals and institutions distributed throughout the Ummah, they often disagree with one another, sometimes violently. Taken together, though, they’re an organic body of law quite different to what we have in the West – probably the closest parallel is English common law – Shariah, however, is much more diverse and, it seems, much less structured. As a method of making and applying law, its distributed nature is actually somewhat attractive; however, as it’s based on literal interpretations of a religious text, it is, by definition, fundamentalist, and thus thoroughly unattractive.
There are a number of online repositories containing fatwas, some with comparatively liberal outlooks, others extremely conservative. I find them interesting because they offer a window into Islamic law and culture that I’ve not had before. While I’m sure there’s a selection bias based on which groups are willing to put their fatwas online and in English, they still contain a diversity of opinion, and they’re really interesting to browse through.
Bias time – I’m a filthy materialist, looking with a perspective similar to someone visiting the zoo. Some fatwas repulse me, others vaguely disturb me, and still others make a certain amount of sense.
It’s really important not to judge Islamic culture in its entirety by these; many Islamic cultures do not rely solely on law derived from fundamentalist interpretations of a religious text. Even so, it’s hard not to be dumbfounded by the quaintness of it all. Take, for example, the particularly convoluted line of reasoning in the first link below, in which video recordings bypass restrictions on images by virtue of the fact that you can’t actually see little sports people when you look at the tape. It’s a good demonstration of how literally applying 1400 year old writings to modern situations leads to absurdity.
So, in the interests of learning, here’s a few that I’ve dug up: