SD Conf. – VII – Moxnes – Do people follow advice w.r.t complex systems?

August 3, 2009

Notes from one of the closing plenaries at the 27th System Dynamics Conference:

Is advice adhered to? “Populist” versus “activist” or “systems analyst” advice by Erling Moxnes

Hypothetically, imagine you’re a reindeer herder. Weird, I know, but this research is from Norway.

You’ve got, say, 1850 reindeer on your island. Reindeer eat lichen, which grows back every year at a rate determined by its current height: slowly if there’s not much left, slowly again if it’s nearing its maximum, and swiftly about halfway in between. For the amount of lichen to remain constant, the annual consumption has to equal the annual growth. The maximum sustainable herd size is therefore the number of reindeer whose total consumption equals the lichen’s maximum growth rate. Your task: given your existing herd, manage it and the lichen on your island so as to attain that maximum sustainable herd size.
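To make those dynamics concrete, here’s a minimal Python sketch, assuming standard logistic growth for the lichen. The parameter values (growth rate r, maximum lichen K, consumption per head c) are my own placeholders, not from the paper, though I’ve picked them so that a herd of 1,850 sits just under the sustainable maximum.

```python
# A toy model of the island, assuming logistic lichen growth. The parameter
# values below are illustrative placeholders, not the paper's.

def lichen_growth(L, r=0.3, K=100.0):
    """Annual regrowth: slow when lichen L is scarce, slow again near the
    maximum K, and fastest at the halfway point L = K/2."""
    return r * L * (1 - L / K)

def step(L, herd, c=0.004, r=0.3, K=100.0):
    """One year: lichen regrows, then the herd eats c units per head."""
    return max(L + lichen_growth(L, r, K) - herd * c, 0.0)

r, K, c = 0.3, 100.0, 0.004
# Peak regrowth is r*K/4 (reached at L = K/2); the maximum sustainable herd
# is the number of reindeer whose total consumption matches that peak.
max_sustainable_herd = (r * K / 4) / c
print(f"maximum sustainable herd: {max_sustainable_herd:.0f} head")  # 1875

L, herd = K / 2, 1850  # lichen at its most productive height
for year in range(15):
    L = step(L, herd, c, r, K)
print(f"lichen after 15 years at 1850 head: {L:.1f}")  # holds near 50
```

With these made-up numbers, consumption sits just below peak regrowth, so the lichen holds steady near its most productive height; push the herd above the sustainable maximum and the same equations start eroding the lichen instead.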

Past research has shown that people are generally unable to find near-optimal strategies in situations like this; interestingly, that holds for experts as well as novices. This paper built on those results by testing what happens when people are given advice. The author considered three types: ‘populist’ advice reflecting herders’ normal behaviour, ‘activist’ advice that basically accused reindeer herders of being irresponsible, and ‘systems’ advice using reasoned language similar to that in my previous paragraph. The strategies advocated by the activist and systems advice were identical and optimal; only the justifications presented differed.

There were three groups of participants; one, the control, just heard the ‘populist’ advice, while the two experimental groups also heard either the activist or systems advice.

The results were pretty interesting. None of the groups followed the optimal strategy advocated by the systems or activist advice. That said, those who got the extra advice generally began to adapt their suboptimal strategy more swiftly than those who did not. Furthermore, those who got the emotive advice adapted more swiftly than those who got the well-reasoned advice.

Key point: People tend to build defective mental models of dynamic systems, and they also tend not to follow advice that runs contrary to those models. Incidentally, it’s not the case that the suboptimal strategies were only a little worse: had one followed the explicit advice given, one would have achieved the sustainable maximum herd within about three cycles; as it happened, none of the groups got close, even after 15 cycles. Taken with other research in this vein, it’s pretty clear that computational models give much better results than people do. Furthermore, people are just as vulnerable, if not more so, to all of the failings that models are accused of (say, by climate skeptics).
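To give a feel for why the gap matters, here’s a purely illustrative comparison using the same toy model as above (again, my policies and numbers, not the paper’s experimental results): when the lichen starts out overgrazed, a timid cut to the herd lets the lichen keep declining, while a drastic temporary cut lets it recover.

```python
# Purely illustrative, reusing the toy parameters from the sketch above:
# start with the lichen overgrazed and compare a timid herd cut with a
# drastic temporary one. Neither policy comes from the paper.

def simulate(policy, L=20.0, years=15, r=0.3, K=100.0, c=0.004):
    """Run the toy lichen model for `years`, with the herd size each year
    chosen by `policy(year, L)`."""
    for year in range(years):
        herd = policy(year, L)
        L = max(L + r * L * (1 - L / K) - herd * c, 0.0)
    return L

timid = lambda year, L: 1700                       # shave a little off 1850
drastic = lambda year, L: 200 if L < 50 else 1875  # cut hard until lichen recovers

print(f"lichen after 15 years, timid cut:   {simulate(timid):.1f}")    # collapses to 0
print(f"lichen after 15 years, drastic cut: {simulate(drastic):.1f}")  # back near 50
```

The particular numbers don’t matter; the point is that in a system like this, small corrections to a bad strategy can be qualitatively worse than the right one, which is consistent with how far all the groups stayed from optimal.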