
Project Censored as censors? October 20, 2008

Posted by dissident93 in Iraq mortality, Project Censored.

How many Iraqi deaths due to the US occupation? Project Censored (in its top censored story for 2009) focuses on the calamity in Iraq, but excludes several crucial scientific studies from its account. As a result it presents a headline death toll which isn’t supported by scientific consensus.

The misleading headline

Iraq has become a bloodbath, but Project Censored’s headline claim, “Over One Million Iraqi Deaths”, isn’t endorsed by the majority of experts in the field – many leading authorities dispute this level of deaths, among them Jon Pedersen, Beth Osborne Daponte, Debarati Guha-Sapir and Mark van der Laan.

There are more peer-reviewed scientific studies casting doubt on Project Censored’s headline figure than there are corroborating it.* But Project Censored doesn’t tell readers about this body of scientific research. Why?

“Good” censorship?

Two studies (ORB and Lancet 2006) are cited by Project Censored, but other, larger, scientific surveys (eg WHO/IFHS*) have been excluded, as have important critical studies and overviews of existing research (eg from the Centre for Research on the Epidemiology of Disasters, which estimated 125,000 deaths over the same period as Lancet 2006*).

Questionable scientific standing

ORB (Opinion Research Business) is the market research company which publicised the “over a million” estimate cited by Project Censored. ORB’s Iraq poll wasn’t peer-reviewed science. The person conducting ORB’s poll, Munqith Daghir, began his polling career in 2003, with little in the way of formal training or field experience (according to ORB’s publicity literature).

The ORB poll doesn’t have the scientific standing of the major studies (eg IFHS, ILCS*) which Project Censored excludes.

Leading researchers disagree with Project Censored

Many researchers either flatly reject the mortality level claimed by Project Censored, or are highly critical of the Lancet 2006 estimates which it cites. On the other hand, studies which Project Censored overlooks (or intentionally excludes) have been well-received by professional epidemiologists and demographers as important contributions to the field…

• Jon Pedersen (Fafo) is one of the leading experts on Middle East demography. He conducted the Iraq Living Conditions Survey (ILCS, a cluster-sample survey of over 21,000 Iraqi households – much larger than Lancet 2006, which surveyed approximately 1,800). Pedersen has commented that the Lancet 2006 mortality estimates were “high, and probably way too high. I would accept something in the vicinity of 100,000 but 600,000 is too much.” (Source: Washington Post, 19 Oct 2006)

• Research by Debarati Guha-Sapir and Olivier Degomme of the Centre for Research on the Epidemiology of Disasters (CRED) estimates the total war-related death toll (for the period covered by Lancet 2006) at around 125,000. They reach this figure by correcting errors in the Lancet 2006 survey and triangulating with IBC and ILCS data. Source: CRED paper.*

• Beth Osborne Daponte (the renowned demographer who produced authoritative death figures for the first Gulf War) argues in a recent paper that the most reliable information available to date comes from a combination of IFHS, ILCS and Iraq Body Count. This puts a working estimate well below the “million” figure claimed by Project Censored. Daponte is critical of the Lancet 2006 study – like several other researchers, she finds its pre-war crude death rate too low, which would inflate the excess deaths estimate (a toy calculation after this list illustrates the effect). She writes that the Lancet authors “have not adequately addressed these issues”. http://tinyurl.com/48mq63

• Paul Spiegel, an epidemiologist at the UN, commented on IFHS (which estimated 151,000 violent deaths over the same period as Lancet 2006): “Overall, this [IFHS] is a very good study […] What they have done that other studies have not is try to compensate for the inaccuracies and difficulties of these surveys.” He adds that “this does seem more believable to me [than Lancet 2006]”. http://tinyurl.com/53s82b

• Mark van der Laan, an authority in the field of biostatistics (and recipient of the COPSS Presidents’ Award from the Committee of Presidents of Statistical Societies), has written, with Leon de Winter, on the Lancet 2006 study:

“We conclude that it is virtually impossible to judge the value of the original data collected in the 47 clusters [of the Lancet study]. We also conclude that the estimates based upon these data are extremely unreliable and cannot stand a decent scientific evaluation.” http://tinyurl.com/4txbpw

• Mohamed M. Ali, Colin Mathers and J. Ties Boerma, of the World Health Organization in Geneva (authors of IFHS), write that it “is unlikely that a small survey with only 47 clusters [Lancet 2006] has provided a more accurate estimate of violence-related mortality than a much larger survey sampling of 971 clusters [IFHS].” (A toy simulation after this list illustrates the effect of cluster count on sampling error.) http://content.nejm.org/cgi/content/full/359/4/431

• Survey methodologist Seppo Laaksonen has expressed numerous doubts about the Lancet 2006 estimates because of problems with the data (he attempted to reconstruct country-level estimates using data received from the Lancet team). See Retrospective two-stage cluster sampling for mortality in Iraq, by Seppo Laaksonen, International Journal of Market Research, Vol. 50, No. 3, 2008.

• Neil F. Johnson, et al, in Bias in Epidemiological Studies of Conflict Mortality, argue that there may be a “substantial overestimate of mortality” in the Lancet 2006 study due to a bias introduced by the street sampling procedure. The Lancet authors responded by asserting that such a “main street bias” was intentionally avoided, but have not (to date) explained how this was achieved without fundamentally changing the published sampling scheme. It remains a serious, unresolved issue.

• Many other researchers have criticised the estimates produced by Lancet 2006 and ORB. These criticisms include a comprehensive paper by Professor Michael Spagat on “ethical and data-integrity problems” in the Lancet study. Fritz Scheuren, a past president of the American Statistical Association, has said the response rate in the Lancet 2006 study was “not credible”. Professor Stephen Fienberg, the well-known statistician, is on record as stating that he doesn’t believe the Lancet 2006 estimate. Two of the world’s most prestigious scientific journals, Nature and Science, ran articles critical of the Lancet 2006 study.
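To make the arithmetic behind Daponte’s criticism concrete, here is a minimal Python sketch. Every number in it (population, period, death rates) is a hypothetical round value chosen for illustration, not a figure taken from any of the studies above; the point is simply how sensitive an excess-deaths estimate is to the assumed pre-war baseline rate.

```python
# Toy illustration only: hypothetical round numbers, not figures from any
# of the studies cited in this post. Excess deaths are roughly
#   (post-war rate - pre-war rate) / 1000 * population * years,
# so a lower assumed pre-war baseline mechanically inflates the excess estimate.

population = 27_000_000   # rough Iraqi population (assumption)
years = 3.3               # roughly the period covered by Lancet 2006 (assumption)
post_war_cdr = 13.0       # assumed post-invasion crude death rate per 1,000 per year

for pre_war_cdr in (5.5, 7.0, 9.0):  # candidate baseline rates per 1,000 per year
    excess = (post_war_cdr - pre_war_cdr) / 1000 * population * years
    print(f"baseline {pre_war_cdr:4.1f} per 1,000  ->  excess deaths ~ {excess:,.0f}")
```

Shaving a point or two off the assumed baseline adds hundreds of thousands of “excess” deaths to the headline figure, which is why the pre-war crude death rate is such a contested input.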
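Along the same lines, the toy Monte Carlo simulation below illustrates the WHO authors’ point about cluster counts. It assumes a deliberately simplified model (cluster death rates scattered around a common true rate), not the actual IFHS or Lancet 2006 design, and all parameter values are assumptions chosen for illustration.

```python
# Toy Monte Carlo sketch (an assumed model, not the actual IFHS or Lancet
# design): each simulated survey estimates the death rate as the mean over
# its sampled clusters. With everything else held equal, the spread of the
# estimate shrinks as the number of clusters grows.
import random
import statistics

random.seed(0)

def estimate_spread(n_clusters, true_rate=0.010, between_cluster_sd=0.008, trials=2000):
    """Standard deviation of the estimated rate across repeated simulated surveys."""
    estimates = []
    for _ in range(trials):
        cluster_rates = [max(0.0, random.gauss(true_rate, between_cluster_sd))
                         for _ in range(n_clusters)]
        estimates.append(statistics.mean(cluster_rates))
    return statistics.stdev(estimates)

for n in (47, 971):  # cluster counts of Lancet 2006 and IFHS respectively
    print(f"{n:4d} clusters -> sampling SD of the estimated rate ~ {estimate_spread(n):.5f}")
```

Under this simplified model the 47-cluster estimate is several times noisier than the 971-cluster one. Real surveys differ in many other ways (sampling frames, households per cluster, non-response), so this shows only the cluster-count effect, not a verdict on either study.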

Project Censored doesn’t mention this substantial body of opinion among leading researchers which, inconveniently, contradicts its message.

*References & further reading

Research disputing or indirectly contradicting the mortality estimates cited by Project Censored:

1. Estimating mortality in civil conflicts: lessons from Iraq, by Debarati Guha-Sapir, Olivier Degomme. Centre for Research on the Epidemiology of Disasters, University of Louvain, School of Public Health, Brussels. Paper (PDF format)

2. Wartime estimates of Iraqi civilian casualties, by Beth Osborne Daponte. International Review of the Red Cross, No. 868. http://tinyurl.com/48mq63

3. Violence-Related Mortality in Iraq from 2002 to 2006, Iraq Family Health Survey (IFHS) Study Group. The New England Journal of Medicine, Volume 358:484-493. http://tinyurl.com/yoysuf

4. Bias in Epidemiological Studies of Conflict Mortality, by Neil F. Johnson, Michael Spagat, Sean Gourley, Jukka-Pekka Onnela, Gesine Reinert. Journal of Peace Research, Vol. 45, No. 5. http://jpr.sagepub.com/cgi/content/abstract/45/5/653

5. Sampling bias due to structural heterogeneity and limited internal diffusion, by Jukka-Pekka Onnela, Neil F. Johnson, Sean Gourley, Gesine Reinert, Michael Spagat. http://arxiv.org/abs/0807.4420

6. Ethical and Data-Integrity Problems in the Second Lancet Survey of Mortality in Iraq, by Michael Spagat, Department of Economics, Royal Holloway College. http://tinyurl.com/4xsjtl

7. Confidence Intervals for the Population Mean Tailored to Small Sample Sizes, with Applications to Survey Sampling, by Michael Rosenblum, Mark J. van der Laan. University of California, Berkeley Division of Biostatistics Working Paper Series. http://www.bepress.com/ucbbiostat/paper237/

8. Reality checks: some responses to the latest Lancet estimates, by Hamit Dardagan, John Sloboda, and Josh Dougherty. Iraq Body Count Press Release 14, Oct 2006. http://tinyurl.com/ysfpbj

9. Retrospective two-stage cluster sampling for mortality in Iraq, by Seppo Laaksonen, International Journal of Market Research, Vol. 50, No. 3, 2008. http://tinyurl.com/4yawmx

10. Mainstreaming an Outlier: The Quest to Corroborate the Second Lancet Survey of Mortality in Iraq, by Michael Spagat, Department of Economics, Royal Holloway College. http://tinyurl.com/46v8jy

11. Iraq Living Conditions Survey (ILCS) 2004, United Nations Development Programme. http://tinyurl.com/5yfyye

12. Mortality after the 2003 invasion of Iraq: Were valid and ethical field methods used in this survey?, by Madelyn Hsiao-Rei Hicks. Households in Conflict Network, The Institute of Development Studies, University of Sussex. 1 December 2006. http://www.hicn.org/research_design/rdn3.pdf

13. “Mortality after the 2003 invasion of Iraq: A cross-sectional cluster sample survey”, by Burnham et al: An Approximate Confidence Interval for Total Number of Violent Deaths in the Post Invasion Period, by Mark J. van der Laan, Division of Biostatistics, University of California, Berkeley, October 26, 2006. http://socrates.berkeley.edu/~jewell/lancet061.pdf

14. Lancet 2006 study criticised – letters from researchers published in the Lancet journal, 2007; 369.
