Since arriving at UW almost four years ago, I’ve been involved in student politics. I’ve not talked about it here as most of it is rather prosaic and, for those not at UW, largely irrelevant. Every now and then, though, something comes up which might be of wider interest.
In early May, GPSS, the Graduate and Professional Student Senate, ran its second Science and Policy Summit, an event for academics, policy-makers, and the general public that looks at the interactions between scientific research and policy development. This year, we ran two panels, one on the impact of bioinformatics on preventive medicine, and the other on the role of science in public political discussion, focusing on the US Presidential debates. As well as the panels, we also ran a series of short talks, inspired by the TED model.
There were about 10 talks, 10 minutes each, delivered by a mix of graduate students, post-docs, and faculty, all of very high quality. Here are a few topics to whet your interest:
We recorded all of the talks, and they’ve now been posted to YouTube. They’re all rather interesting, and worth at least a look, even if the film quality isn’t all I’d hoped for when I filmed them.
The idea is that if someone is warned of an imminent seizure in advance, say 20 minutes out, they can remove themselves from unsafe or embarrassing situations, take other precautions (lying down, perhaps), or take fast-acting drugs that might stop it from happening. This is a big deal, as it helps the 30% or so of those suffering from the disease for whom conventional drug treatment doesn’t work. It may also allow some of those receiving drugs to come off them, reduce their dose, or shift to less effective drugs with fewer side effects. It might also help reduce the number of deaths from epilepsy-related accidents – 50,000 annually in the US, apparently.
The technology’s actually fairly simple. There are three main parts, none of which appears to be particularly magical.
It’s in trials at the moment in Australia, and is apparently performing well, with no known associated adverse events.
Apparently, this all started out with Jaideep’s PhD research into the sensorimotor function of moths – he basically designed chips and implants small enough to put in a moth, then studied the different nerve signals associated with its wings as it flew. They also did trials of the epilepsy detection technology on dogs, as they also suffer from epilepsy. Unfortunately, I was unable to find copies of the cute pictures he showed in the talk.
If you’re interested in hearing more, there’s a 5-minute video article from ABC News in Australia talking about it. It’s formatted a bit weirdly, so you might need to download it and switch to the second audio track in VLC.
Edit: Apparently the electrodes are implanted beneath the dura mater, but outside the arachnoid mater. So, between the second and third membranes that encase the brain.
I was completely unaware of this, but apparently cases of academic misconduct, as evidenced by the retraction of papers from journals and other publication venues, have been on the rise.
According to the article, retractions from journals in the PubMed database have increased by a factor of 60 in under a decade, from 3 in 2000 to 180 in 2009. That’s insane!
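For a sense of how steep that curve is, here’s the back-of-envelope arithmetic; the 3 and 180 figures are from the article, but the implied per-year rate is my own rough calculation:

```python
# Growth arithmetic for the retraction figures quoted above.
retractions_2000 = 3
retractions_2009 = 180
years = 2009 - 2000  # a nine-year span

factor = retractions_2009 / retractions_2000
# Compound annual growth rate implied by that factor.
annual_growth = factor ** (1 / years) - 1

print(f"overall factor: {factor:.0f}x")               # 60x
print(f"implied annual growth: {annual_growth:.0%}")  # roughly 58% per year
```

In other words, sustaining that trend means retractions growing by more than half again every single year – which is why the question of whether the journal mix itself changed (see the update below) matters so much.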
What’s going on, then? I suspect one or more of the following:
One caveat: this result derives from PubMed, which primarily includes medical and pharmaceutical research, as well as some auxiliary technology and basic science. Does this pattern of misconduct apply in other fields, or is it particular to medicine?
Improved review processes are necessary, but it’s not clear how quickly change will come. Problems with peer review have been acknowledged for more than 20 years, with a report from 1990 showing that only 8% of members of the Scientific Research Society considered it effective as is. Despite this, in most venues, peer review functions the same way it always has.
There may be some movement, however. CHI, for example, includes the alt.chi track in which research is reviewed in a public forum before selection by a jury, which seems to offer a good compromise between open and free criticism, and peer-driven moderation. There’s also a special conference coming up entitled “Evaluating Research and Peer Review – Problems and Possible Solutions” – it was the Call for Papers for this that got me writing this post.
From my perspective, an ideal research review system would at least:
What else should a review system incorporate? How could such a system fail? Why might it not be adopted?
Update 2012-05-09: It’s not clear whether the aforementioned study relied on the same set of journals each year, or whether they used the full PubMed database each year. It’s probable that the PubMed mix has changed over the decade; for example, the NIH’s public access policy requiring publicly funded research be placed into PubMed was trialed in 2005, and made mandatory in 2008.
I spent Saturday at the HCI for Peace workshop representing the Voices from the Rwandan Tribunal project. It was fairly informal, with only 10 participants, which made it easy for everyone to participate in the discussion. Several participants presented projects they’ve worked on, including:
Also in attendance were Juan Pablo Hourcade, an Assistant Professor at the University of Iowa and organizer of the event; Lisa Nathan, an Assistant Professor at the University of British Columbia, co-PI on the Rwandan project, and a former student at UW; Daniela Busse, from Samsung Research; Daisy Yoo, a student and colleague of mine at UW also working on the Rwandan project; and Kelsey Huebner, an undergraduate assisting Juan Pablo with running the workshop. Neema Moraveji, director of the Calming Technology Lab at Stanford, was not present, but gave a short presentation on his work in ‘calming technology’ via Skype.
As well as individual project presentations, we also discussed the place of HCI in peace-making, peace-keeping, and harmony. A number of points and questions were salient:
In the time available, it was impossible to come to any detailed consensus on these issues, and it was generally agreed that further thought and development would be necessary. Interactions magazine has offered us a spot as the cover article in an issue later this year, and we’re hoping that this will give us an opportunity to address these concerns in more depth.
All up, a fascinating and rewarding way to spend a day. Not to mention an excellent lunch and tasty pizza and conversation at the end of the day!
I’m feeling pretty jazzed at the moment about patronage as a funding model for creative endeavours.
It’s a pretty simple idea: instead of today’s dominant practice, where creative works are funded and owned by someone expecting to make money back from advertising or sales through a limited distribution channel, under patronage, creators fund their work by appealing directly to potential fans, asking them to put up funds in advance in return for various rewards and input into the work. Historically, patronage was widespread, and meant that artists, musicians, and philosophers gathered in the courts of sympathetic nobles to seek funding, lending their creativity to the glory of kings and emperors. In return, nobles gained prestige as patrons of the arts as well as substantial influence over the works created.
Today’s patronage models are a little different, in that they rely on a much broader base of patrons. Instead of seeking out extremely wealthy individuals to fund entire works, creators can appeal to a worldwide audience through the internet, collecting many small contributions directly from the people who care most about their work. This is a good thing for creators and patrons alike:
It might be that patronage isn’t the best funding model for all creative works, but here are a few examples where it’s been successful:
There are many more – these are just the few I’ve paid close attention to.
As traditional publishing industries that rely on firmly controlled distribution of hard-copy works continue to erode, it’ll be very interesting to see how patronage evolves. The fact that big box book stores are dying doesn’t mean people don’t want to read, and the collapse of newspapers has little to do with the public’s interest in the news. It’s just that the old business models are increasingly being undermined. I don’t foresee corporate creative endeavours going away, but I do expect them to become less dominant in the long term, and patronage seems a likely means of that happening.
Questions for comments:
Clearly I’ve got work to do, because I’m procrastinating with blog posts.
#include<speculative comments about motivation>
Interesting piece about the futurist implications of promising new technologies on the horizon becoming corporate-controlled walled gardens, much as everything is now. It’s clear that some level of profit-driven development is good, as it spurs innovation, but it’s also clear that too much stifles it. To me, it seems that the iPhone is an example that’s swinging to the stifling end of the spectrum.
I have an iPhone, and I like it, but in some ways I regret buying it – had I known about the imminent release of Android phones back in Sept last year, I would have waited. Aside from the overly optimistic prospect of me writing apps for Android, owning the iPhone makes me feel slightly dirty, like I’ve just been sent a particularly glossy membership card to the NZ National party or some other vaguely nefarious organization. Despite their clear skill at aesthetics and design, Apple just seem sinister to me. It must be all the fanboys. Organizations that have and encourage a cult-like following always disturb me.
From the article:
I say that the iPhone is not the future, but what I mean by that is that the iPhone is not representative of a future I want to see. The future is not just a retail opportunity and a finer world is not built entirely of consumer goods. I’m not keen on a future where the major technologies of environmental and social mediation are owned and controlled by corporate ideology. As AR creeps closer and closer, the question of who gets to plant a flag in the liminal space of a technologically re-mediated environment becomes a more pressing concern – with new horizons there are always new forms of colonialism.
Interesting comments and discussions. Here’s mine:
Let’s assume we’re talking about the actions that a certain group or subculture can take to adapt these future unfriendly devices for themselves – aboniks is totally right that we can’t somehow convince the public at large that the abridgement of rights they are barely aware of in the first place is enough reason for them to give up their shiny toys and stop responding emotionally to well crafted marketing. That’s just human nature, and immutable, at least for now.
Granted, the principle of openness could be crafted into a compelling message that might slowly challenge these closed cultures, but that’s an eternal vigilance problem – we’d have to have the resources to push our message on a similar scale, push it hard, and keep pushing it. If we were really capable manipulators, we could try dressing it up in religious clothes, but again, that’s not something a small group of hackers can easily do (though I’m always for starting a cult of technology).
This is all just a paraphrase of the old maxim “show, don’t tell”. Open source and future friendly systems and devices need to beat closed systems at their own game. We have to design systems, devices, whatever it is we design to be more usable, more focused, more elegant, more aesthetically pleasing, and with not necessarily more features, but better and more applicable features.
So, what can we do? Design stuff. Make stuff. Publicize everything we do. Help each other make stuff. Get past ego – it’s not about designing things to make one person or one subgroup look awesome, it’s about designing things to help us all move forward. Hack things. Publish our hacks. Design our creations to work together. Establish open de facto standards before the big corporates come in and foist closed ones upon us. Put every good idea in the commons, and make that commons so visible that patent inspectors can’t help but notice it. Encourage our children.
Some of that’s really practical, some of that’s philosophical. I think both are necessary – ideology without designs is just pretentious pap, design without ideology is all too easily co-opted by the greedy.
Edit: Seems that, two years ago, when I posted this, I left out the link to the original article. How stupid of me.
Imagine a search engine that, instead of just doing text matching, attempts to parse your statement into questions it can answer, then provides you with as many of those answers as it can. Imagine a search engine that can deal with numerical relationships and analysis. Imagine a search engine that’s tailored towards returning facts and knowledge instead of websites.
Now, go watch the Wolfram Alpha demo video.
Next, imagine if you had analytical tools of this nature at your fingertips at all times, and were able to project and share them on surfaces using some form of augmented reality. Finally, imagine what this could do to intelligent argument, discussion, design, and political discourse.
Quite a step, huh?
Check out this rather impressive imagining of virtual world construction in a fully tangible VR / AR environment.
The interface used is quite cool and inspirational, but there’s a lot of funky interface videos out there, and the basic idea of creating worlds from within isn’t new; Snow Crash has this sort of thing, and, to some extent, it’s a logical extension and extrapolation of Wayne Piekarski’s PhD work in using AR to build 3D models of the world around us. That said, it’s a very polished imagining of this idea, and well worth the watch.
What I really liked, though, is the emotional context in which this is placed – the film’s not just a cool interface concept, but rather an example of how virtual worlds and technology might be able to provide emotional support of a sort. Effectively, the protagonist is creating worlds to embody and relive his memories. Once, our memories were limited to shared stories, then writing, then photos, then video – it seems logical that, if 3D environments and simulated experiences could be captured, then these too would be something that we collect, file away for posterity, and maybe share with our friends.
Imagine if, instead of showing wedding photos to friends who couldn’t make it, you could compellingly simulate the experience of being there.
found via Long Now
Why do I blog this?
I’ve always loved world building, and the idea of being able to easily create and experience worlds excites me. To really be compelling, though, one would need to be able to create believable simulated people and animals to populate the world; as it is, the world in this video seems somewhat lonely.
So, everyone knows what r stands for, right? What about p? Or v? Or f(x) and f’(x)? OK. How about x, y, and z?
If you’re not a math geek of some kind, you’re probably not reading anymore, but just in case you are, the point is that each of these letters has a common meaning in a lot of mathematical notation – r is a radius, p a probability, v some arbitrary vector, f(x) and f’(x) some arbitrary function and its derivative, and x, y, and z coordinates in 3-space.
The problem is that a lot of the time, this isn’t true, and even when it is true, it’s hard to tell exactly _which_ probability or set of coordinates you might be talking about.
Good math books typically get this – they define their notation, and use it consistently. If p means probability in chapter 1, it probably doesn’t mean ‘an arbitrary solution to the dual problem’ in chapter 2, unless it’s been explicitly re-defined. Each symbol should correspond to one particular value or concept at any given time. This makes the text easier and faster to read, and avoids all sorts of nasty confusion.
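If you write in LaTeX, one lightweight way to enforce this discipline on yourself is to define one semantic macro per concept and use it everywhere; the macro names below are purely illustrative, not from any particular style guide:

```latex
% Define each symbol once, by meaning, and use the macro everywhere.
\newcommand{\seizeprob}{p}      % probability (chapter 1 sense)
\newcommand{\dualsol}{\lambda}  % an arbitrary solution to the dual problem
\newcommand{\radius}{r}         % a radius, and only ever a radius

% In the text, write $\seizeprob$ rather than a bare $p$. If you later need
% to rename a symbol, you change one line here instead of hunting through
% the whole document -- and you can't accidentally reuse a letter without
% noticing the clash when you define the second macro.
```

It’s a small thing, but it makes symbol collisions a compile-time decision rather than something your reader discovers in chapter 2.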
So, why is it that people presenting mathematical results always assume that you know their notation? If they throw up a complicated expression using a bunch of different letters, why do they assume that you know that r doesn’t actually mean radius (even though it’s shown on a circular diagram), and that, today, we’re using g to refer to probability, not p (except for that slide near the end, because it’s from a different slide set).
You’d think this just happens in badly prepared and presented seminars. Unfortunately, either you’re wrong, or I have an uncanny ability to attend only seminars that meet that criterion.
So, if you’re ever in a position to be presenting mathematical notation to a bunch of people, please, please, do the following:
I could go on, but instead, I refer people to Polya’s lovely short rant on the subject in ‘How to Solve It’. There’s a free version online. It’s on page 134.
People seem to forget that the entire point of notation is the economical expression of an idea for the purpose of memory or communication. Furthermore, memory is really just a special case of communication – you’re communicating with your future self. Imagine how confused they’ll be if, in your notes, q means different things without clear distinction. Imagine how confused your audience will be, not having been you in the first place.
This all boils down to this general point about communicating – if you don’t value your idea enough to make sure your audience understands, don’t bother opening your mouth. Play Minesweeper instead.
X-posted to various places
While waiting for pizza this evening, I read an article by David Allan Grier in IEEE Computer about the ways in which technology has changed entertainment, particularly the theatre, over the last 40 years or so.
In particular, he discusses how automated lighting, sound and so forth can afford a stage manager the opportunity to calibrate the response of the audience by controlling the timing of cues much more closely, much in the same way a live television producer does. What this has meant is that show production, in addition to being a massive organizational exercise, is now a performance unto itself.
Later, he goes on to talk about ways in which producers of other media gauge audience reaction and adapt accordingly – focus groups for TV and movies, golden ears for music, and now, with technology, learning systems based on customer profiling and crowd-sourcing, that can supplement socially driven recommendations such as friends or local record store owners – last.fm being a prominent example.
So inspired, here’s an interesting extension that occurred to me:
What if specialized AI, running locally, could be injected into traditionally mass-produced media like music, TV, or movies to act as a kind of virtual stage manager? It could observe you, the audience, a focus group of one, then tweak the timing, the content, the tone, and even the script of media to better suit your current mood, your tastes, to stimulate you in ways to which you are more sensitive, or even to better fit your available time.