
Leading researchers disagree with Project Censored October 20, 2008

Posted by dissident93 in Demography, Iraq mortality, Project Censored.

Project Censored’s “Top 2009 censored story” headline states: “Over One Million Iraqi Deaths Caused by US Occupation”. Many researchers either flatly reject this level of deaths or are critical of the two studies that Project Censored cites (ORB and Lancet 2006). On the other hand, studies which Project Censored overlooks (or intentionally excludes), such as IFHS, ILCS and CRED, have been well received by professional epidemiologists and demographers as important contributions to the field…

Jon Pedersen is one of the leading experts on Middle East demography. He conducted the Iraq Living Conditions Survey (ILCS), a cluster-sample survey of over 21,000 Iraqi households, far larger than the roughly 1,800 households surveyed by Lancet 2006. Pedersen has commented that the Lancet 2006 mortality estimates were “high, and probably way too high. I would accept something in the vicinity of 100,000 but 600,000 is too much.” (Source: Washington Post, 19 Oct 2006)

Research by Debarati Guha-Sapir and Olivier Degomme, of the Centre for Research on the Epidemiology of Disasters (CRED), estimates the total war-related death toll (for the period covered by Lancet 2006) at around 125,000. They reach this figure by correcting errors in the Lancet 2006 survey and triangulating with IBC and ILCS data. Source: CRED paper (PDF).

Beth Osborne Daponte (the renowned demographer who produced authoritative death figures for the first Gulf War) argues in a recent paper that the most reliable information available to date is provided by a combination of IFHS, ILCS and Iraq Body Count. This puts a working estimate well below the “million” figure claimed by Project Censored. Daponte is critical of the Lancet 2006 study: like several other researchers, she finds its pre-war crude death rate too low, which would inflate the excess-deaths estimate. She writes that the Lancet authors “have not adequately addressed these issues”. http://tinyurl.com/48mq63
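To see why the baseline matters, here is a back-of-the-envelope sketch of the excess-deaths arithmetic. All figures below are round, purely illustrative numbers (not Daponte’s, and not exact values from any of the surveys discussed here); the point is only that a lower pre-war baseline mechanically produces a larger excess estimate.

```python
# Illustrative only: how a low pre-war crude death rate (CDR) inflates an
# "excess deaths" estimate. All numbers are rough, round figures chosen for
# the arithmetic, not values taken from any of the surveys discussed here.

population = 26_000_000   # rough Iraqi population
years = 3.3               # roughly the period covered by Lancet 2006
postwar_cdr = 13.0        # assumed post-invasion deaths per 1,000 per year

def excess_deaths(prewar_cdr):
    """Excess = (post-war CDR - pre-war CDR) x population x years / 1,000."""
    return (postwar_cdr - prewar_cdr) * population / 1000 * years

# The lower the assumed baseline, the larger the excess attributed to the war.
for prewar_cdr in (5.5, 7.0, 9.0):
    print(f"pre-war CDR {prewar_cdr:>4}/1,000 -> excess ≈ {excess_deaths(prewar_cdr):,.0f}")
```

In this toy calculation, dropping the assumed baseline from 9 to 5.5 per 1,000 adds roughly 300,000 deaths to the “excess” – which is exactly the sensitivity Daponte and others are pointing to.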

Paul Spiegel, an epidemiologist at the UN, commented on IFHS (which estimated 151,000 violent deaths over the same period as Lancet 2006): “Overall, this [IFHS] is a very good study [...] What they have done that other studies have not is try to compensate for the inaccuracies and difficulties of these surveys.” He adds that “this does seem more believable to me [than Lancet 2006]”. http://tinyurl.com/53s82b

Mark van der Laan, an authority in the field of biostatistics (and recipient of the Presidential Award of the Committee of Presidents of Statistical Societies), has written, with Leon de Winter, on the Lancet 2006 study:

“We conclude that it is virtually impossible to judge the value of the original data collected in the 47 clusters [of the Lancet study]. We also conclude that the estimates based upon these data are extremely unreliable and cannot stand a decent scientific evaluation.” http://tinyurl.com/4txbpw

Mohamed M. Ali, Colin Mathers and J. Ties Boerma, of the World Health Organization in Geneva (authors of IFHS), write that it “is unlikely that a small survey with only 47 clusters [Lancet 2006] has provided a more accurate estimate of violence-related mortality than a much larger survey sampling of 971 clusters [IFHS].” http://content.nejm.org/cgi/content/full/359/4/431
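A rough way to see the WHO authors’ point: for a heavily clustered outcome like violent deaths, the precision of a cluster survey is driven largely by the number of clusters rather than the number of households, so the standard error shrinks roughly with the square root of the cluster count. The comparison below is only a back-of-the-envelope illustration under that simplifying assumption (it ignores cluster size, design effects and non-response).

```python
# Back-of-the-envelope: if between-cluster variation dominates (plausible for
# violent deaths), the standard error of a cluster-sample estimate scales
# roughly as 1 / sqrt(number of clusters). Simplifying assumption only.
import math

clusters_lancet = 47   # Lancet 2006
clusters_ifhs = 971    # IFHS

precision_gain = math.sqrt(clusters_ifhs / clusters_lancet)
print(f"Other things equal, IFHS's standard error would be roughly "
      f"{precision_gain:.1f}x smaller than Lancet 2006's.")
```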

Survey methodologist Seppo Laaksonen has expressed many doubts about the Lancet 2006 estimates due to problems with the data; Laaksonen attempted to reconstruct country-level estimates using data received from the Lancet team. See Retrospective two-stage cluster sampling for mortality in Iraq by Seppo Laaksonen, International Journal of Market Research, Vol. 50, No. 3, 2008.

Neil F. Johnson, et al., in Bias in Epidemiological Studies of Conflict Mortality, argue that there may be a “substantial overestimate of mortality” in the Lancet 2006 study due to a bias introduced by the street-sampling procedure. The Lancet authors responded that such a “main street bias” was intentionally avoided, but have not (to date) been able to explain how this was achieved without fundamentally changing the published sampling scheme. It remains a serious, unresolved issue.
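The mechanism Johnson et al. describe can be illustrated with a toy simulation. Everything below (the share of households near main streets, the risk levels, the 3-to-1 selection weighting) is invented for illustration; it is not their model, nor the Lancet team’s sampling scheme, only a sketch of why over-sampling high-exposure streets pushes an estimate upward.

```python
# Toy Monte Carlo illustration of "main street bias" (all parameters invented).
# Idea: if violent deaths cluster near main streets and the sampling scheme
# is more likely to pick households near main streets, the sample death
# rate overstates the population death rate.
import random

random.seed(0)

N = 100_000                       # households in a hypothetical city
near_main_fraction = 0.3          # 30% of households sit near a main street
risk_near, risk_far = 0.04, 0.01  # invented per-household death probabilities

households = []
for _ in range(N):
    near = random.random() < near_main_fraction
    death = random.random() < (risk_near if near else risk_far)
    households.append((near, death))

true_rate = sum(d for _, d in households) / N

# Biased sampling: households near main streets are, say, 3x as likely to be
# selected as those far from them (the kind of bias Johnson et al. posit).
weights = [3.0 if near else 1.0 for near, _ in households]
sample = random.choices(households, weights=weights, k=2_000)
sampled_rate = sum(d for _, d in sample) / len(sample)

print(f"true death rate    : {true_rate:.4f}")
print(f"biased sample rate : {sampled_rate:.4f}  (overestimate)")
```

With these made-up numbers the biased sample overstates the true death rate by roughly 40 per cent; how large the effect is in reality is precisely what the dispute between Johnson et al. and the Lancet authors is about.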

Many other researchers have criticised the estimates produced by Lancet 2006 and ORB. These criticisms include a comprehensive paper by Professor Michael Spagat on “ethical and data-integrity problems” in the Lancet study. Fritz Scheuren, a past president of the American Statistical Association, has said the response rate in the Lancet 2006 study was “not credible”. Professor Stephen Fienberg, the well-known statistician, is on record as stating that he doesn’t believe the Lancet 2006 estimate. Two of the world’s most prestigious scientific journals, Nature and Science, ran articles critical of the Lancet 2006 study.

Donald Berry is Chairman of the Department of Biostatistics and Applied Mathematics at the University of Texas MD Anderson Cancer Center. Berry is reported as writing that the Lancet 2006 estimates are “unreliable”:

“…The last thing I want to do is agree with Bush, especially on something dealing with Iraq. But I think ‘unreliable’ is apt. (I just heard Bush say ‘not credible.’ ‘Unreliable’ is better. There is a certain amount of credibility in the study, but they exaggerate the reliability of their estimate.)

“Selecting clusters and households that are representative and random is enormously difficult. Moreover, any bias on the part of the interviewers in the selection process would occur in every cluster and would therefore be magnified. The authors point out the possibility of bias, but they do not account for it in their report.

“It is true that the range reported (392,979–942,636) is huge. Its width represents only one source of variability, the statistical error present under the assumption that their sample is representative and random. I believe their analysis to be correct under these assumptions. However, it does not incorporate the possibility of biases such as the one I mentioned above. Incorporating the possibility of such biases would lead to a substantially wider range, the potential for bias being huge. Although there is no formal way to address bias short of having an ‘independent body assess the excess mortality,’ which the authors recommend, the lower end of this range could easily drop to the 100,000 level.”
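A crude way to put numbers on Berry’s last point: the published interval reflects sampling error alone, so if one also allows for the possibility that bias inflated the counts by some factor, the plausible range widens and its lower end falls. The correction factors below are invented for illustration; only the interval endpoints come from the study.

```python
# Illustration of Berry's point: the published 95% interval captures sampling
# error only. Allowing for a possible upward bias of unknown size widens the
# plausible range. The correction factors below are invented for illustration.

low, high = 392_979, 942_636   # published interval (Lancet 2006)

for correction in (1.0, 0.5, 0.25):   # 1.0 = no bias; 0.25 = counts 4x too high
    print(f"if deaths were overstated {1/correction:.0f}x: "
          f"adjusted range ≈ {low*correction:,.0f} to {high*correction:,.0f}")

# With a large (but, per Berry, not formally excludable) bias, the lower end
# drops to roughly the 100,000 level he mentions.
```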

References & further reading

Research disputing or indirectly contradicting the mortality estimates cited by Project Censored:

1. Estimating mortality in civil conflicts: lessons from Iraq, by Debarati Guha-Sapir, Olivier Degomme. Centre for Research on the Epidemiology of Disasters, University of Louvain, School of Public Health, Brussels. http://www.cedat.be/sites/default/files/WP%20Iraq_0.pdf

2. Wartime estimates of Iraqi civilian casualties, by Beth Osborne Daponte. International Review of the Red Cross, No. 868. http://tinyurl.com/48mq63

3. Violence-Related Mortality in Iraq from 2002 to 2006, Iraq Family Health Survey (IFHS) Study Group. The New England Journal of Medicine, Volume 358:484-493. http://tinyurl.com/yoysuf

4. Bias in Epidemiological Studies of Conflict Mortality, by Neil F. Johnson, Michael Spagat, Sean Gourley, Jukka-Pekka Onnela, Gesine Reinert. Journal of Peace Research, Vol. 45, No. 5. http://jpr.sagepub.com/cgi/content/abstract/45/5/653

5. Sampling bias due to structural heterogeneity and limited internal diffusion, by Jukka-Pekka Onnela, Neil F. Johnson, Sean Gourley, Gesine Reinert, Michael Spagat. http://arxiv.org/abs/0807.4420

6. Ethical and Data-Integrity Problems in the Second Lancet Survey of Mortality in Iraq, by Michael Spagat, Department of Economics, Royal Holloway College. http://tinyurl.com/4xsjtl

7. Confidence Intervals for the Population Mean Tailored to Small Sample Sizes, with Applications to Survey Sampling, by Michael Rosenblum, Mark J. van der Laan. University of California, Berkeley Division of Biostatistics Working Paper Series. http://www.bepress.com/ucbbiostat/paper237/

8. Reality checks: some responses to the latest Lancet estimates, by Hamit Dardagan, John Sloboda, and Josh Dougherty. Iraq Body Count Press Release 14, Oct 2006. http://tinyurl.com/ysfpbj

9. Retrospective two-stage cluster sampling for mortality in Iraq, by Seppo Laaksonen, International Journal of Market Research, Vol. 50, No. 3, 2008. http://tinyurl.com/4yawmx

10. Mainstreaming an Outlier: The Quest to Corroborate the Second Lancet Survey of Mortality in Iraq, by Michael Spagat, Department of Economics, Royal Holloway College. http://tinyurl.com/46v8jy

11. Iraq Living Conditions Survey (ILCS) 2004, United Nations Development Programme. http://tinyurl.com/5yfyye

12. Mortality after the 2003 invasion of Iraq: Were valid and ethical field methods used in this survey?, by Madelyn Hsiao-Rei Hicks. Households in Conflict Network, The Institute of Development Studies, University of Sussex. 1 December 2006. http://www.hicn.org/research_design/rdn3.pdf

13. “Mortality after the 2003 invasion of Iraq: A cross-sectional cluster sample survey”, by Burnham et al: An Approximate Confidence Interval for Total Number of Violent Deaths in the Post Invasion Period, by Mark J. van der Laan, Division of Biostatistics, University of California, Berkeley, October 26, 2006. http://socrates.berkeley.edu/~jewell/lancet061.pdf

14. Lancet 2006 study criticised – letters from researchers published in the Lancet journal, 2007; 369.
