The “Passive Surveillance” myth
May 18, 2010 · Posted by dissident93 in Iraq mortality.
Note: an extended version of this post has been published by the Comment Factory
Les Roberts, the epidemiologist (and one-time candidate for Congress), uses the term “passive surveillance” to describe media-based counts of war dead. The term has entered the Iraq war lexicon – commentators often compare survey estimates (eg Lancet 2006) to figures from so-called “passive surveillance” (eg Iraq Body Count).
But Roberts misuses the term, and the lexicon is poorer for it. Consider the Iraqi journalists hired by Reuters to get the facts by going out and talking to people. Are they more “passive” than survey teams or pollsters? (Also, as a reader of this blog points out, Iraq was the “world’s deadliest nation for journalists” for six years, 2003-2008. That’s a measure of actual fatalities and abductions, etc, not of “passive” sitting in an office – RS, 1/6/10).
What about the processing stage – is it more “passive” to process media-based data than it is to process survey results? Does it help if you type faster or do press-ups at regular intervals? In fact, the active/passive metaphor has little relevance here. A media-based count of war dead may be incomplete, an “undercount”; a survey estimate may be way off due to a bias in sampling, etc – such things have nothing to do with relative passivity/activity.
So what are the origins of the phrase “passive surveillance”, and why is it used in this context?
The term ‘passive surveillance’ seems to have originated in the medical literature to refer to data on medical ailments compiled by recording the number of people who present themselves to medical facilities for treatment. This is contrasted to ‘active surveillance’ methods by which data collectors proactively search the community and find ailing people. Applying the ‘passive surveillance’ term to conflict journalism is misleading since journalists actively seek out violent events, witnesses and informed sources in the field. (Note 44, Ethical and Data Integrity Problems in the second Lancet survey.., Defence and Peace Economics, Volume 21, Issue 1)
So, the term is misleading. Why use it? Well, if you’re trying to discredit media-based counts, it helps to use a word with derogatory connotations. Labelling something as “passive” is like saying “not good enough”, “should try harder”, etc. As an epidemiologist, Les Roberts can get away with using a phrase from the medical domain. The key to establishing a term is repetition, and that’s what Roberts has done.
Les Roberts’s latest attack on so-called “passive surveillance” makes some sweeping (and misinformed) statements. As a correspondent of mine points out:
1. Roberts writes: “Aside from the Human Security Report, whose conclusions are largely based on news media reports, a variety of other publications have been produced based on press reports, or worse, passive surveillance by governments involved in a war [5,6]”
This is an odd statement, as the Human Security Report’s main conclusions are not “based” on media reports. Roberts doesn’t even specify which of its conclusions he thinks are “based” on news reports. He’s equally vague about the “variety of other publications”, from which he mentions just two (in his footnotes) without specifying which of their findings, if any, he has a problem with, or why.
2. Roberts then writes: “This Journal has shown that news reports are in part a cultural construct. For example, the ratio of civilian to Coalition military deaths in Iraq reversed when comparing 11 US newspapers with three from the middle east.”
His wording here is somewhat misleading. It’s not the “ratio of civilian to Coalition military deaths” which “reversed”. If a newspaper reports an Iraqi death once, and a US death 10 times, the ratio of deaths reported is still 1-1, although the ratio of reports is 1-10. It is the latter which reversed – but that’s not a great illustration of the type of “cultural construct” that Roberts apparently has in mind, ie one which would justify his next statement: “The dangers of drawing conclusions from passive surveillance processes are profound: they allow one to conclude mortality goes down in times of war making war more acceptable and they allow armies, like those invading Iraq, to manipulate the press to portray resistance fighters as the primary killers when population-wide data conclude the opposite [8,9]”
His latter claim is not only unsupported by anything in his article – it’s clearly refuted by, for example, a comparison between IBC and Roberts’s own 2004 Lancet Iraq survey. The Lancet 2004 estimate shows that 43% of violent deaths (for the whole country outside Falluja) were directly caused by US-led forces, compared to IBC’s 47% over the same period. (IBC analysis, p23-26)
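The reports-vs-deaths distinction is easy to make concrete. Here is a minimal sketch with invented numbers (not real Iraq data) showing why a skewed ratio of *reports* does not imply a skewed ratio of *deaths counted*:

```python
# Toy illustration with invented numbers: one Iraqi civilian death
# covered by 1 story, one US military death covered by 10 stories.
reports = {"iraqi_civilian": 1, "us_military": 10}       # number of news stories
unique_deaths = {"iraqi_civilian": 1, "us_military": 1}  # deaths a count would tally

# Ratio of reports (Iraqi : US) is skewed by coverage intensity.
ratio_of_reports = reports["iraqi_civilian"] / reports["us_military"]

# Ratio of deaths is what a media-based body count actually records,
# since duplicate stories about the same death are de-duplicated.
ratio_of_deaths = unique_deaths["iraqi_civilian"] / unique_deaths["us_military"]

print(ratio_of_reports)  # 0.1 -- i.e. 1:10
print(ratio_of_deaths)   # 1.0 -- i.e. 1:1
```

The point is simply that a body count de-duplicates stories per incident, so comparing raw story counts across newspapers measures editorial attention, not mortality.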
3. Roberts makes the absurd, sweeping statement that “We should not tolerate publications of surveillance data where the sensitivity of that data cannot be shown.” As my correspondent points out, this is like saying that we shouldn’t “tolerate” police-recorded crime figures, or any kind of simple count, without some statistical interpretation (a “sensitivity analysis”). This is complete nonsense, of course. Roberts should be asking himself whether we should “tolerate” statistical estimates from surveys which don’t provide the nitty-gritty facts on how the claimed random sampling was achieved.
Surely Action Heroes would not tolerate statistical constructs insufficiently supported by hard facts?