
“Most toxic affair in football history”? – Or most toxic media coverage? March 31, 2012

Posted by dissident93 in Censored, Media Criticism, Racism.
comments closed

100+ Guardian articles on Suarez story

This is off-topic for me. But when a newspaper publishes over 100 articles on a story without mentioning certain key facts – and censors a comment which does mention these facts – I’m inclined to write about it. Particularly as the deleted comment was mine.

With charges of racial abuse, it’s crucial that reporters get the facts right and not resort to conjecture. Sadly, this wasn’t the case with the Guardian on the Suarez-Evra story.

I don’t want to rehash the details (the background, regarding media failures, has already been provided in two incisive articles from @NewsFrames) – but rather to address the responses from journalists. Many people complained about the Guardian’s coverage of the Suarez case. The Guardian’s “Readers’ editor”, Chris Elliott, eventually responded to complaints in an ‘Open Door’ article. Elliott’s first two paragraphs frame the issue – with all the subtlety of a large shovel – in terms of irrational fan loyalty. He then replies directly to just one complaint:

I think it is entirely reasonable to use the word “findings” when describing the commission’s outcome. The independent regulatory commission was properly constituted, acting within clear parameters that were clearly explained in an exhaustive report. The dictionary definition of findings is conclusions reached following an inquiry. (Chris Elliott, Guardian, 15/1/12)

An unsatisfactory reply. The complaint (which is quoted in Elliott’s article) had been about the Guardian’s misleading statement that Suarez “was found to have used the word ‘negro’ or ‘negros’ seven times”. (Variations on this claim were printed in several Guardian articles). Regardless of dictionary definitions of “findings”, a responsible newspaper would have informed readers that there was no evidence for this “seven times” claim (the FA commission’s “findings” included uncorroborated allegations). Not once, in any of its coverage, did the Guardian inform readers of this crucial point.

Elliott’s article concludes with this paragraph:

Re-reading the complaints made me think that what the readers – who were very open about their allegiance – really wanted to do was argue with the findings of the commission and they expected the Guardian to do the same. The report is based on the balance of probabilities and the commission’s members have gone out of their way to explain in minute detail how they reached their decision. (Chris Elliott, Guardian, 15/1/12)

There are two problems with this. First, the complaint which Elliott quotes is not asking the Guardian to “argue” anything, but to inform readers of certain crucial facts, so they are not misled with a partial account of the report’s findings. Second, the “minute detail” merely reveals that the commission’s decision was based on a lack of direct evidence. The “balance of probabilities” standard requires stronger evidence the more serious the charge – a point acknowledged in paragraphs 76, 79 & 80 of the FA commission’s report (but ignored in over 100 Guardian articles):

80. The FA accepts that the Charge against Mr Suarez is serious, as do we. It is for this reason that we have reminded ourselves that a greater burden of evidence is required to prove the Charge against Mr Suarez.
(FA commission’s report)

Toxic hyperbole

Ian Prior (Guardian Sports Editor) wrote of the Suarez affair that it was “possibly the most toxic affair in the history of English football. It deserves every inch it gets and more”. He added that “Every escalation that has fuelled this story has come from Liverpool or the FA investigation”.

Let’s be charitable for a moment, and assume this isn’t just an editor being defensive over the Guardian’s tabloidesque coverage. Perhaps Prior really believes, for example, that Liverpool players wearing T-shirts (with a picture of Suarez) represents a toxic “escalation” in itself, without the Guardian interpreting it for us, running a series of articles about it, raising it to the status of “news” by collating the outraged opinions of a small handful of blogger-tweeters (whilst ignoring the views of those who thought it was about as newsworthy as the Pope wearing a funny hat).

The implication of Ian Prior’s remarks is, of course, that the events themselves were toxic – not the media circus. But, subtract the media spectacle (eg over 100 Guardian articles – I stopped counting after I got to 120) and what bare facts are we left with? Here are perhaps the most crucial ones, which – tellingly – were not mentioned anywhere in the Guardian’s remarkably voluminous coverage:

  • There was no direct evidence of, and no witnesses to, Suarez making the alleged racial remarks.
  • Evra changed his allegation of what was said (“n***er” to “negro”), and the number of times it was said.
  • The FA’s language experts said that Suarez’s account of his use of the word “negro” (Spanish) “would not be offensive. Indeed, it is possible that the term was intended as an attempt at conciliation and/or to establish rapport”. (FA panel’s report, paragraph 190)

The Guardian’s hyperbole (“shameful”, “beyond the pale”, “pigheaded”, etc) was aimed largely at the reactions by Liverpool to the FA ruling (eg the statements and T-shirts supporting Suarez). But these reactions stem from an honourable stance: namely, the right to question a questionable verdict (ie one based on no direct evidence) – and/or to support a person you believe to be innocent. In most cases, the Guardian – supposedly a “liberal” newspaper – would actively support such a basic right.

Presumably Guardian editors and writers – and anti-racism campaigners given space by the Guardian – can tell the difference, if pressed, between supporting someone accused of an offence, and condoning the offence itself. At least I hope they can.

Sadly, the illiberal reasoning applied by the Guardian to the Suarez case seems widespread. Thus, Ian Prior quotes Garth Crooks (speaking at the Guardian Open Weekend):  “there was an acceptance in football media that racism wasn’t as important as other matters … we are now in danger of returning to that, with complaints of too much coverage for Suarez or Terry affairs”.

The implication of such “logic” is pretty chilling: whatever valid reasons you may have for criticising such media coverage (and there are many), you’d better shut up, because some people might infer that you aren’t taking racism seriously enough. It’s an insidious and counterproductive form of reasoning. Bad logic will not help the fight against racism.

Update

Ian Prior (Guardian Sports Editor) appears to have read the above – he writes that my piece is “loaded” with “bad logic”. Ian, if you’d like to provide some examples of where you think my logic is at fault – rather than hiding behind 140-character Twitter assertions – you can contact me here.

Update #2 (3/4/12): It seems Ian’s not up to the challenge of backing up his own assertions. He tweets that “life’s too short” – although it’s apparently not too short for him (as Guardian Sports Editor) to publish over 100 articles on the Suarez case.


Project Censored correspondence July 1, 2010

Posted by dissident93 in Project Censored.
comments closed

While archiving emails I came across the eye-opening correspondence I had with Project Censored, which led to this post. I’d complained that their “Over One Million Iraqi Deaths” story effectively “censored” all the research (including major studies such as WHO/IFHS) which refuted their headline.

Their (then) director, Peter Phillips, replied to me at length, together with his researchers, Michael Schwartz & Joshua Holland. But their replies were riddled with errors (see list below). And nearly all of these errors were second-hand – they’d been circulating on the web for some time, and most had already been refuted. This indicated to me a very poor level of research – worse, ironically, than many pieces I’d seen in the mainstream media (which at least acknowledged the existence of studies such as WHO/IFHS).

I took the time to reply in detail, correcting the errors, etc (see list below). Their responses to this were interesting. Joshua Holland wrote that he was too busy to respond as he was covering the election for a few weeks (I never heard from him again). Michael Schwartz replied (abruptly) that he wasn’t interested in “recapitulating these disputes” (even though it was he who’d raised these detailed disputes, as justification for airbrushing IFHS, etc, out of the picture – I hadn’t mentioned them at all). I never heard anything further from him.

Peter Phillips eventually replied again: “I have given your comments serious thought and have decide [sic] not to change the title on our headline. I believe that over one million Iraqis have been died [sic] because of the US invasion and occupation and there is significant evidence to support that conclusion.” (15/10/2008).

In other words, a complete avoidance of the issue I’d raised: that regardless of his own belief, there was no scientific consensus over the “one million” figure (many authorities in the field did not accept this figure, as I documented in my original piece), and that he had effectively “censored” a large body of research because it didn’t support his “belief”.

Here’s the list of corrections to the errors of Project Censored’s researchers (which I sent to them on 30/9/08):

ERRORS & DETAILS

1. Joshua Holland writes that “IFHS also looked only at civilians”. This is incorrect. IFHS (like Lancet 2006) includes “combatants” as well as “civilians” in its estimate of violent deaths. (Ref: IFHS Q&A, p5: http://tinyurl.com/4o3w82)

2. Joshua Holland writes that “IFHS also omitted 11% of its sample”. This is correct, but Lancet 2006 also omitted some of its planned clusters (6%). IFHS made an effort to compensate for the omitted clusters; Lancet 2006 did not. Furthermore, IFHS made an effort to reflect regional population changes from migration during the survey period. Lancet 2006 did not.
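
(A quick illustration of the simplest form such compensation can take – inflating the design weights of the clusters actually surveyed. The sketch below uses the 971 surveyed IFHS clusters cited elsewhere on this page; the implied planned total is my own inference, and the IFHS authors’ actual compensation method was more involved than this:)

```python
# Minimal sketch of compensating for omitted survey clusters by re-weighting.
# 971 surveyed clusters is the IFHS figure cited elsewhere on this page; the
# planned total below is inferred from the ~11% omission rate, not a figure
# published by the IFHS authors.
surveyed = 971
omission_rate = 0.11
planned = surveyed / (1 - omission_rate)      # ≈ 1,091 planned clusters

# Inflate each surveyed cluster's weight so the reduced sample still
# represents the full sampling frame.
inflation = planned / surveyed                # = 1 / (1 - 0.11) ≈ 1.124
print(f"planned ≈ {planned:.0f}, weight inflation ≈ {inflation:.3f}")
```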

3. Michael Schwartz writes that “the IFHS methodology has resulted in underestimates of both the number of total excess deaths and total violent deaths”. This is clearly incorrect, as IFHS provides no estimate for “total excess deaths” (IFHS provides an estimate for total violent deaths only). I’d be grateful if Michael could indicate which estimate of “total excess deaths” he is referring to here. (Perhaps he means the widely circulated estimate of 400,000 which has been falsely attributed to IFHS? More on this below). [*See July 2010 footnote on the “400,000” figure]

Michael claims that IFHS “underestimates” violent deaths – an unsupported assertion. Interestingly, the IFHS paper actually makes the argument that conflict surveys tend to underestimate violent deaths (mainly due to survivor bias), and as a result IFHS have adjusted their estimate upwards by about 54% to account for this proposed bias. Lancet 2006 doesn’t make this upward adjustment.
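
To put that adjustment in concrete terms, here is the back-of-envelope arithmetic – my own reading of the figures above, not a calculation taken from the IFHS paper:

```python
# If the published IFHS figure of 151,000 violent deaths already includes a
# ~54% upward adjustment for survey under-reporting, the unadjusted survey
# estimate would have been roughly:
adjusted_estimate = 151_000
adjustment = 0.54
unadjusted_estimate = adjusted_estimate / (1 + adjustment)
print(f"implied unadjusted estimate ≈ {unadjusted_estimate:,.0f}")  # ≈ 98,000
```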

4. Joshua Holland writes that “[IFHS] was conducted by officials of the Iraqi Ministry of Health […] The MoH was a Sadrist ministry, and that likely caused trepidation among many respondents.”

This is another widely-circulated attempt to “criticise” IFHS – and probably the weakest and least worthy I’ve seen, based as it is on misinformed assumptions and speculations. IFHS was peer-reviewed science, and well-received by leading epidemiologists. It was also, like the Lancet surveys, a very brave effort by the researchers in the field who risked their lives. One IFHS team member was shot and killed on his way to work. http://content.nejm.org/cgi/content/full/NEJMp0709003?query=TOC

5. Michael Schwartz writes that “Shone understates the results of the IFHS study”. In fact I don’t “understate” – I accurately state that IFHS “estimated 151,000 violent Iraqi deaths”. This “understates” no more than if I’d written, accurately, that Lancet 2006 estimated 601,000 violent deaths.

(Of course, I’m aware that both IFHS and Lancet 2006 covered only as far as Summer 2006, but I think it’s better to accurately represent survey estimates than to extrapolate to the present day based on one’s own fallible set of assumptions. I also think it’s better to accurately represent survey estimates than to attribute “total excess mortality” figures based on one’s own assumptions, or based on widely circulated, but false, attributions).

6. Joshua Holland writes that “total excess mortality attributed to the war was 400,000 in the IFHS study”. This is incorrect. The source of the “400,000” figure isn’t the IFHS study or the IFHS authors. IFHS doesn’t provide a “total excess mortality” estimate. It provides only crude death rates, and simple extrapolations from these to total excess deaths should not be imputed to it (for reasons made clear by the IFHS authors – eg recall issues could lead to a spurious “excess” figure without “further analysis”). [*See July 2010 footnote on the “400,000” figure]

(Ref: NEJM Volume 358:484-493 Number 5. The following is as far as the IFHS authors went in providing a “total excess” figure, ie not very far: “Overall mortality from nonviolent causes was about 60% higher in the post-invasion period than in the pre-invasion period. Although recall bias may contribute to the increase, since deaths before 2003 were less likely to be reported than more recent deaths, this finding warrants further analysis”. http://content.nejm.org/cgi/content/full/NEJMsa0707782)

7. Joshua Holland writes that the “400,000” excess mortality figure [falsely] attributed to IFHS is “a number that’s certainly in the neighborhood of the Lancet findings”.

It’s incomprehensible to me that Joshua thinks “400,000” is “certainly in the neighborhood of” the Lancet 2006 estimate of 655,000. There’s a difference here of over a quarter of a million deaths! Furthermore, the whole exercise of comparing “400,000” to 655,000 conflates violent and non-violent deaths, since only approximately 54,000 of the Lancet 2006 estimated deaths were non-violent (compared to 249,000 of the “400,000” figure falsely attributed to IFHS).
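
The arithmetic behind this, using only the figures already quoted, makes the conflation plain:

```python
# Decomposing the two totals into violent and non-violent deaths
# (all figures as quoted in this correspondence).
lancet_total, lancet_violent = 655_000, 601_000
attrib_total, attrib_violent = 400_000, 151_000   # "400,000" falsely attributed to IFHS

print(lancet_total - lancet_violent)    # 54,000 non-violent (Lancet 2006)
print(attrib_total - attrib_violent)    # 249,000 non-violent (attributed figure)
print(lancet_total - attrib_total)      # 255,000 -- "over a quarter of a million"
print(lancet_violent - attrib_violent)  # 450,000 on the like-for-like violent comparison
```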

Even if one accepts the 400,000 “total excess” figure and overlooks the conflation of violent and non-violent deaths in the IFHS-Lancet comparison, it doesn’t remotely follow that IFHS corroborates the “one million deaths” figure.

8. Michael Schwartz writes that “While [ORB’s] methodology is shakier than that of the Lancet studies, its results are consistent with their estimates.”

This alleged “consistency” between ORB and Lancet 2006 disappears upon close scrutiny. There are serious discrepancies in geographic distribution of violent deaths. ORB implies 700,000 violent deaths in Baghdad alone, compared to an estimate of about 150,000 implied by Lancet 2006 (requiring half a million violent deaths in Baghdad alone between July 2006 and August 2007 for the two studies to be “consistent”).

Moreover, there’s not even consistency between Lancet 2004 and Lancet 2006 on violent deaths. Lancet 2006 estimates twice as many violent deaths for the same period as Lancet 2004. (The Lancet 2004 estimate for violent excess deaths was 57,600; the Lancet 2006 data provides an estimate, for the period covered by Lancet 2004, of 112,500–126,000 violent excess deaths, approximately twice that of Lancet 2004). Source for Lancet 2004 violent deaths estimate: Richard Garfield, Lancet 2004 co-author. http://www.epic-usa.org/An_Interview_with_EPIC_A.html
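
Both of these consistency checks reduce to simple arithmetic on the figures just quoted:

```python
# (a) ORB vs Lancet 2006, Baghdad: the gap the two studies would need to
# bridge between July 2006 and August 2007 to be "consistent".
orb_baghdad, lancet_baghdad = 700_000, 150_000
print(orb_baghdad - lancet_baghdad)   # 550,000 -- roughly half a million

# (b) Lancet 2004 vs Lancet 2006 over the same period.
lancet_2004 = 57_600
lancet_2006_low, lancet_2006_high = 112_500, 126_000
print(lancet_2006_low / lancet_2004,
      lancet_2006_high / lancet_2004)  # ≈ 1.95 and ≈ 2.19 -- "approximately twice"
```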

9. Michael Schwartz writes that he doesn’t accept, as a “lower bound”, the CRED figure of 125,000 deaths. But he provides no reason for dismissing the world-renowned and highly experienced epidemiologists at CRED. (Of course, I’m aware that the CRED figure, like Lancet 2006 and IFHS, covers only up to Summer 2006, and is by definition an “underestimate” relative to the present day – like all out-of-date studies).

Of all the published scientific studies, which do Michael Schwartz and Joshua Holland accept as providing a credible “lower bound”? And if Michael and Joshua are regarded as authorities by Project Censored, could this “lower bound” figure not be reflected in the Project Censored headline (eg “Estimates for Iraqi dead range from [approved lower bound] to over a million”)?

10. Joshua Holland writes: “I disagree, however, with the idea it [ORB’s estimate] should be rejected because ORB’s ‘core competency’ is opinion polling.”

I agree. ORB’s findings shouldn’t be “rejected”, and nowhere do I suggest they should be. I merely point out possible reasons why ORB’s Iraq survey hasn’t been widely recognised or accepted in the scientific literature (with the exception of non-refereed writings by the Lancet authors, various blogs, etc). [July 2010 update – given the recent published criticisms of the ORB poll, I think its estimate should not be regarded as reliable]

I’d also add that while the peer-reviewed IFHS can be validly criticised, it seems to have been well-received by the majority of experts in the field – in contrast to the reception it received from, for example, John Tirman (who commissioned Lancet 2006). Tirman, who is no epidemiologist, was quick to dismiss the IFHS estimate of violent deaths as “not credible” (echoes of George Bush dismissing the Lancet study). http://tinyurl.com/3o8lsc

11. Michael Schwartz writes of “the second Lancet study, which remains the standard that I would accept for methodological rigor and therefore for accurate estimates”.

This is a curious statement, as the Lancet authors themselves are emphatic that they haven’t provided “accurate” estimates, but merely ballpark figures in need of corroboration. Michael also mentions the confidence interval – a wide range between lower and upper bounds on the estimates – and adds that this is “the correct way to report it in scholarly contexts”. Actually it’s the only correct way to report it in all contexts (including Project Censored headlines). As the revered demographer, Beth Osborne Daponte, points out, it’s a misinterpretation of the data in such studies to pick the mid-point of the range and present that as “the” definitive estimate. (Ref: the peer-reviewed ‘Wartime estimates of Iraqi civilian casualties’, by Beth Osborne Daponte, p7) http://www.icrc.org/Web/eng/siteeng0.nsf/htmlall/review-868-p943/$File/irrc-868_Daponte.pdf
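
Daponte’s point can be made concrete using the Lancet 2006 interval quoted elsewhere on this page (392,979–942,636 excess deaths – see the Donald Berry quote below):

```python
# Reporting the midpoint of a confidence interval as "the" estimate discards
# the stated uncertainty. Interval: the Lancet 2006 range quoted elsewhere
# on this page (392,979 - 942,636 excess deaths).
low, high = 392_979, 942_636
midpoint = (low + high) / 2
print(f"midpoint ≈ {midpoint:,.0f}")                 # ≈ 667,808 -- one bare number
print(f"full report: {low:,} to {high:,} (95% CI)")  # the only honest summary
```

(Note that the headline figure of 655,000 is not even the arithmetic midpoint of that range; such intervals need not be symmetric.)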

12. Joshua Holland writes that the “IFHS study also revealed a pretty consistent number of deaths from year-to-year. That’s simply not plausible”.

This appears to be one of the more valid criticisms of IFHS. However, another possibility should be considered: the apparent year-to-year consistency may simply reflect the substantial sampling error in the annual figures. The difference between the IFHS trendline and the one presented by IBC is in fact not statistically significant.
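
A crude way to see this – with hypothetical numbers, not the actual IFHS or IBC data – is to check whether the confidence intervals around two annual estimates overlap; if they do, an apparently “consistent” trend cannot be distinguished from sampling noise:

```python
# Hypothetical annual violent-death estimates with wide 95% CIs
# (illustrative figures only; not taken from IFHS or IBC).
year_a = {"estimate": 30_000, "ci": (18_000, 45_000)}
year_b = {"estimate": 32_000, "ci": (20_000, 48_000)}

def cis_overlap(a, b):
    """Crude heuristic (not a formal significance test): overlapping CIs
    mean the difference between the two estimates may just be noise."""
    return a["ci"][0] <= b["ci"][1] and b["ci"][0] <= a["ci"][1]

print(cis_overlap(year_a, year_b))  # True: the "flat" trend proves little
```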

Compare also the fact that Lancet 2006 data doesn’t show the spiking of violent deaths that you’d expect in the shock-and-awe phase (which is seen clearly in IBC’s detailed data). The “explanation” given by Les Roberts (Lancet co-author) for this should certainly raise some eyebrows: “Our data suggests that the shock-and-awe campaign was very careful, that a lot of the targets were genuine military targets”. Source for Les Roberts quote: http://tinyurl.com/4yo5uw

13. Joshua Holland writes that the “Lancet study measured excess mortality over a pre-war baseline from all causes — violent, nonviolent, etc. That means we have to compare apples and apples. Your correspondent cites the 151,000 figure from IFHS, but that was only violent deaths”.

Lancet 2006 in fact provided an estimate for violent deaths: 601,000 (the estimate of 655,000 was for deaths by all causes), so we can compare apples with apples (since IFHS also covered the same period as Lancet 2006). The direct comparison is as follows:

– Lancet: 601,000 violent deaths
– IFHS: 151,000 violent deaths

This is the most direct comparison available, and it shows a massive difference (450,000 violent deaths) between the two studies. Other comparisons are less direct and more questionable, as they introduce more assumptions, further extrapolations, etc.

*[July 2010 update. My comments on the false attribution of the 400,000 figure to IFHS remain accurate, but one of the study’s authors has reportedly mentioned a similar figure, “397,000”, several months after the study came out. To date, no details have been published – but it’s possible that the IFHS authors will at some point publish a figure. Of course, this doesn’t change the fact that when IFHS was published in January 2008, its authors specifically warned against attributing a figure to excess deaths, due to recall issues]

Unrelated update – The Comment Factory has published another of my articles: Dubious polls: How accurate are Iraq’s death counts?

Misrepresenting “science” November 8, 2008

Posted by dissident93 in Iraq mortality, Medialens, Project Censored.
comments closed

“Over One Million Iraqi Deaths Caused by US Occupation” (Project Censored)

“I don’t believe there is any consensus that the number is that high” (David A. Marker, chair of the American Statistical Association’s Scientific and Public Affairs Advisory Committee, and author of the ‘Methodological Review’ [of the Lancet 2006 study])*

Anyone who consults the available research will recognise that there’s no scientific consensus supporting the “one million deaths” claim, but several influential websites (Project Censored, Just Foreign Policy, Medialens, etc) present it as if there is a supporting consensus.

Medialens, for example, write that “1.2 Million Iraqis Have Been Murdered” (subheading in their 18/9/07 alert) – although in a later article, they deny promoting the figure as if it were factual:

But who has asserted the 1 million figure as “a fact”? Certainly we at Media Lens haven’t. We have simply reported the most credible scientific advice on the most credible numbers. And as you know, science is not about offering certainty… [Medialens, email to John Rentoul, 4 April 2008]

Of course, this is disingenuous. Medialens have not “simply reported the most credible scientific advice”. They’ve reported one peer-reviewed study (and one unrefereed poll)** and ignored (or overlooked) at least five peer-reviewed studies (plus several critical reviews by leading researchers).*** And all the research they’ve ignored (or overlooked) coincidentally does not support their statement that “1.2 Million Iraqis Have Been Murdered”.

Curiously, Medialens also write: “It seems clear that the Lancet figure of 655,000 deaths, although now a year out of date, was accurate”. Clear to whom? Few “credible” scientific researchers share Medialens’s “certainty” over the accuracy of this figure. Most in fact are honest enough to admit they are unclear over the real number of deaths.

* Email from David Marker to me (6/11/08). I’d asked him if he was aware of any scientific consensus supporting the “over one million deaths” claim.
** Medialens cite the Lancet 2006 study (peer-reviewed) and the ORB poll (which isn’t peer-reviewed science).
*** For example see Leading researchers disagree with Project Censored.

Project Censored as censors? October 20, 2008

Posted by dissident93 in Iraq mortality, Project Censored.
comments closed

How many Iraqi deaths has the US occupation caused? Project Censored’s top censored story for 2009 focuses on the calamity in Iraq, but excludes several crucial scientific studies from its account. As a result, it presents a headline deaths figure which isn’t supported by scientific consensus.

The misleading headline

Iraq has become a bloodbath, but Project Censored’s headline claim, “Over One Million Iraqi Deaths”, isn’t endorsed by the majority of experts in the field – many leading authorities dispute this level of deaths: Jon Pedersen, Beth Osborne Daponte, Debarati Guha-Sapir, Mark van der Laan, etc.

There are more peer-reviewed scientific studies casting doubt on Project Censored’s headline figure than there are corroborating it.* But Project Censored doesn’t tell readers about this body of scientific research. Why?

“Good” censorship?

Two studies (ORB and Lancet 2006) are cited by Project Censored, but other, larger, scientific surveys (eg WHO/IFHS*) have been excluded, as have important critical studies and overviews of existing research (eg from the Centre for Research on the Epidemiology of Disasters, which estimated 125,000 deaths over the same period as Lancet 2006*).

Questionable scientific standing

ORB (Opinion Research Business) is the market research company which publicised the “over a million” estimate cited by Project Censored. ORB’s Iraq poll wasn’t peer-reviewed science. The person conducting ORB’s poll, Munqith Daghir, began his polling career in 2003, with little in the way of formal training or field experience (according to ORB’s publicity literature).

The ORB poll doesn’t have the scientific standing of the major studies (eg IFHS, ILCS*) which Project Censored excludes.

Leading researchers disagree with Project Censored

Many researchers either flatly reject the mortality level claimed by Project Censored, or are highly critical of the Lancet 2006 estimates which it cites. On the other hand, studies which Project Censored overlooks (or intentionally excludes) have been well-received by professional epidemiologists and demographers as important contributions to the field…

• Jon Pedersen (Fafo) is one of the leading experts on Middle East demography. He conducted the Iraq Living Conditions Survey (ILCS, a cluster-sample survey of over 21,000 Iraqi households – much larger than Lancet 2006, which surveyed approximately 1,800). Pedersen has commented that the Lancet 2006 mortality estimates were “high, and probably way too high. I would accept something in the vicinity of 100,000 but 600,000 is too much.” (Source: Washington Post, 19 Oct 2006)

• Research by Debarati Guha-Sapir and Olivier Degomme, from the Centre for Research on the Epidemiology of Disasters (CRED) estimates the total war-related death toll (for the period covered by Lancet 2006) at around 125,000. They reach this figure by correcting errors in the Lancet 2006 survey, and triangulating with IBC and ILCS data. Source: CRED paper.

• Beth Osborne Daponte (the renowned demographer who produced authoritative death figures for the first Gulf War) argues in a recent paper that the most reliable information available (to date) is provided by a combination of IFHS, ILCS and Iraq Body Count. This puts a working estimate well below the “million” figure claimed by Project Censored. Daponte is critical of the Lancet 2006 study – like several other researchers, she finds its pre-war crude death rate too low (which would inflate the excess deaths estimate). She writes that the Lancet authors “have not adequately addressed these issues”. http://tinyurl.com/48mq63

• Paul Spiegel, an epidemiologist at the UN, commented on IFHS (which estimated 151,000 violent deaths over the same period as Lancet 2006): “Overall, this [IFHS] is a very good study […] What they have done that other studies have not is try to compensate for the inaccuracies and difficulties of these surveys.” He adds that “this does seem more believable to me [than Lancet 2006]”. http://tinyurl.com/53s82b

• Mark van der Laan, an authority in the field of biostatistics (and recipient of the Presidential Award of the Committee of Presidents of Statistical Societies) has written, with Leon de Winter, on the Lancet 2006 study:

“We conclude that it is virtually impossible to judge the value of the original data collected in the 47 clusters [of the Lancet study]. We also conclude that the estimates based upon these data are extremely unreliable and cannot stand a decent scientific evaluation.” http://tinyurl.com/4txbpw

• Mohamed M. Ali, Colin Mathers and J. Ties Boerma, from the World Health Organization at Geneva (authors of IFHS), write that it “is unlikely that a small survey with only 47 clusters [Lancet 2006] has provided a more accurate estimate of violence-related mortality than a much larger survey sampling of 971 clusters [IFHS].” http://content.nejm.org/cgi/content/full/359/4/431

• Survey methodologist Seppo Laaksonen has expressed many doubts over the Lancet 2006 estimates due to problems with the data (an attempt was made by Laaksonen to reconstruct country-level estimates using data received from the Lancet team). See Retrospective two-stage cluster sampling for mortality in Iraq by Seppo Laaksonen, International Journal of Market Research, Vol. 50, No. 3, 2008.

• Neil F. Johnson, et al, in Bias in Epidemiological Studies of Conflict Mortality, argue that there may be a “substantial overestimate of mortality” in the Lancet 2006 study due to a bias introduced in the street sampling procedure. The Lancet authors responded by asserting that such a “main street bias” was intentionally avoided, but (to date) have not been able to explain how this was achieved (without fundamentally changing the published sampling scheme). It remains a serious, unresolved issue.

• Many other researchers have criticised the estimates produced by Lancet 2006 and ORB. These criticisms include a comprehensive paper by Professor Michael Spagat on “ethical and data-integrity problems” in the Lancet study. Fritz Scheuren, a past president of the American Statistical Association, has said the response rate in the Lancet 2006 study was “not credible”. Professor Stephen Fienberg, the well-known statistician, is on record as stating that he doesn’t believe the Lancet 2006 estimate. Two of the world’s prestigious scientific journals, Nature and Science, ran articles critical of the Lancet 2006 study.

Project Censored doesn’t mention this substantial body of opinion among leading researchers which, inconveniently, contradicts its message.

*References & further reading

Research disputing or indirectly contradicting the mortality estimates cited by Project Censored:

1. Estimating mortality in civil conflicts: lessons from Iraq, by Debarati Guha-Sapir, Olivier Degomme. Centre for Research on the Epidemiology of Disasters, University of Louvain, School of Public Health, Brussels. http://www.cedat.be/sites/default/files/WP%20Iraq_0.pdf

2. Wartime estimates of Iraqi civilian casualties, by Beth Osborne Daponte. International Review of the Red Cross, No. 868. http://tinyurl.com/48mq63

3. Violence-Related Mortality in Iraq from 2002 to 2006, Iraq Family Health Survey (IFHS) Study Group. The New England Journal of Medicine, Volume 358:484-493. http://tinyurl.com/yoysuf

4. Bias in Epidemiological Studies of Conflict Mortality, by Neil F. Johnson, Michael Spagat, Sean Gourley, Jukka-Pekka Onnela, Gesine Reinert. Journal of Peace Research, Vol. 45, No. 5. http://jpr.sagepub.com/cgi/content/abstract/45/5/653

5. Sampling bias due to structural heterogeneity and limited internal diffusion, by Jukka-Pekka Onnela, Neil F. Johnson, Sean Gourley, Gesine Reinert, Michael Spagat. http://arxiv.org/abs/0807.4420

6. Ethical and Data-Integrity Problems in the Second Lancet Survey of Mortality in Iraq, by Michael Spagat, Department of Economics, Royal Holloway College. http://tinyurl.com/4xsjtl

7. Confidence Intervals for the Population Mean Tailored to Small Sample Sizes, with Applications to Survey Sampling, by Michael Rosenblum, Mark J. van der Laan. University of California, Berkeley Division of Biostatistics Working Paper Series. http://www.bepress.com/ucbbiostat/paper237/

8. Reality checks: some responses to the latest Lancet estimates, by Hamit Dardagan, John Sloboda, and Josh Dougherty. Iraq Body Count Press Release 14, Oct 2006. http://tinyurl.com/ysfpbj

9. Retrospective two-stage cluster sampling for mortality in Iraq, by Seppo Laaksonen, International Journal of Market Research, Vol. 50, No. 3, 2008. http://tinyurl.com/4yawmx

10. Mainstreaming an Outlier: The Quest to Corroborate the Second Lancet Survey of Mortality in Iraq, by Michael Spagat, Department of Economics, Royal Holloway College. http://tinyurl.com/46v8jy

11. Iraq Living Conditions Survey (ILCS) 2004, United Nations Development Programme. http://tinyurl.com/5yfyye

12. Mortality after the 2003 invasion of Iraq: Were valid and ethical field methods used in this survey?, by Madelyn Hsiao-Rei Hicks. Households in Conflict Network, The Institute of Development Studies, University of Sussex. 1 December 2006. http://www.hicn.org/research_design/rdn3.pdf

13. “Mortality after the 2003 invasion of Iraq: A cross-sectional cluster sample survey”, by Burnham et al: An Approximate Confidence Interval for Total Number of Violent Deaths in the Post Invasion Period, by Mark J. van der Laan, Division of Biostatistics, University of California, Berkeley, October 26, 2006. http://socrates.berkeley.edu/~jewell/lancet061.pdf

14. Lancet 2006 study criticised – letters from researchers published in the Lancet journal, 2007; 369.

Leading researchers disagree with Project Censored October 20, 2008

Posted by dissident93 in Demography, Iraq mortality, Project Censored.
comments closed

Project Censored’s “Top 2009 censored story” headline states: “Over One Million Iraqi Deaths Caused by US Occupation”. Many researchers either flatly reject this level of deaths, or are critical of the two studies that Project Censored cites (ORB and Lancet 2006). On the other hand, studies which Project Censored overlooks (or intentionally excludes) – eg IFHS, ILCS, CRED, etc – have been well-received by professional epidemiologists and demographers as important contributions to the field…

• Jon Pedersen is one of the leading experts on Middle East demography. He conducted the Iraq Living Conditions Survey (ILCS, a cluster-sample survey of over 21,000 Iraqi households – much larger than Lancet 2006, which surveyed approximately 1,800). Pedersen has commented that the Lancet 2006 mortality estimates were “high, and probably way too high. I would accept something in the vicinity of 100,000 but 600,000 is too much.” (Source: Washington Post, 19 Oct 2006)

• Research by Debarati Guha-Sapir and Olivier Degomme, from the Centre for Research on the Epidemiology of Disasters (CRED) estimates the total war-related death toll (for the period covered by Lancet 2006) at around 125,000. They reach this figure by correcting errors in the Lancet 2006 survey, and triangulating with IBC and ILCS data. Source: CRED paper (PDF).

• Beth Osborne Daponte (the renowned demographer who produced authoritative death figures for the first Gulf War) argues in a recent paper that the most reliable information available (to date) is provided by a combination of IFHS, ILCS and Iraq Body Count. This puts a working estimate well below the “million” figure claimed by Project Censored. Daponte is critical of the Lancet 2006 study – like several other researchers, she finds its pre-war crude death rate too low (which would inflate the excess deaths estimate). She writes that the Lancet authors “have not adequately addressed these issues”. http://tinyurl.com/48mq63

• Paul Spiegel, an epidemiologist at the UN, commented on IFHS (which estimated 151,000 violent deaths over the same period as Lancet 2006): “Overall, this [IFHS] is a very good study […] What they have done that other studies have not is try to compensate for the inaccuracies and difficulties of these surveys.” He adds that “this does seem more believable to me [than Lancet 2006]”. http://tinyurl.com/53s82b

• Mark van der Laan, an authority in the field of biostatistics (and recipient of the Presidential Award of the Committee of Presidents of Statistical Societies) has written, with Leon de Winter, on the Lancet 2006 study:

“We conclude that it is virtually impossible to judge the value of the original data collected in the 47 clusters [of the Lancet study]. We also conclude that the estimates based upon these data are extremely unreliable and cannot stand a decent scientific evaluation.” http://tinyurl.com/4txbpw

• Mohamed M. Ali, Colin Mathers and J. Ties Boerma, from the World Health Organization at Geneva (authors of IFHS), write that it “is unlikely that a small survey with only 47 clusters [Lancet 2006] has provided a more accurate estimate of violence-related mortality than a much larger survey sampling of 971 clusters [IFHS].” http://content.nejm.org/cgi/content/full/359/4/431

• Survey methodologist Seppo Laaksonen has expressed many doubts over the Lancet 2006 estimates due to problems with the data (an attempt was made by Laaksonen to reconstruct country-level estimates using data received from the Lancet team). See Retrospective two-stage cluster sampling for mortality in Iraq by Seppo Laaksonen, International Journal of Market Research, Vol. 50, No. 3, 2008.

• Neil F. Johnson, et al, in Bias in Epidemiological Studies of Conflict Mortality, argue that there may be a “substantial overestimate of mortality” in the Lancet 2006 study due to a bias introduced in the street sampling procedure. The Lancet authors responded by asserting that such a “main street bias” was intentionally avoided, but (to date) have not been able to explain how this was achieved (without fundamentally changing the published sampling scheme). It remains a serious, unresolved issue.

• Many other researchers have criticised the estimates produced by Lancet 2006 and ORB. These criticisms include a comprehensive paper by Professor Michael Spagat on “ethical and data-integrity problems” in the Lancet study. Fritz Scheuren, a past president of the American Statistical Association, has said the response rate in the Lancet 2006 study was “not credible”. Professor Stephen Fienberg, the well-known statistician, is on record as stating that he doesn’t believe the Lancet 2006 estimate. Two of the world’s prestigious scientific journals, Nature and Science, ran articles critical of the Lancet 2006 study.

• Donald Berry is Chairman of the Department of Biostatistics and Applied Mathematics at the University of Texas MD Anderson Cancer Center. Berry is reported as writing that the Lancet 2006 estimates are “unreliable”:

“…The last thing I want to do is agree with Bush, especially on something dealing with Iraq. But I think ‘unreliable’ is apt. (I just heard Bush say ‘not credible.’ ‘Unreliable’ is better. There is a certain amount of credibility in the study, but they exaggerate the reliability of their estimate.)

“Selecting clusters and households that are representative and random is enormously difficult. Moreover, any bias on the part of the interviewers in the selection process would occur in every cluster and would therefore be magnified. The authors point out the possibility of bias, but they do not account for it in their report.

“It is true that the range reported (392,979–942,636) is huge. Its width represents only one source of variability, the statistical error present under the assumption that their sample is representative and random. I believe their analysis to be correct under these assumptions. However, it does not incorporate the possibility of biases such as the one I mentioned above. Incorporating the possibility of such biases would lead to a substantially wider range, the potential for bias being huge. Although there is no formal way to address bias short of having an ‘independent body assess the excess mortality,’ which the authors recommend, the lower end of this range could easily drop to the 100,000 level.”
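
The widening Berry describes can be sketched as follows – the bias range below is my illustrative choice (picked to match his “100,000 level” remark), not a figure from his note:

```python
# The purely statistical interval Berry quotes.
stat_low, stat_high = 392_979, 942_636

# Hypothetical range for a multiplicative upward bias in the counts:
# from 1x (no bias) to 4x (an illustrative upper bound only).
bias_min, bias_max = 1.0, 4.0

# Combining the statistical interval with the bias interval widens the range.
combined_low = stat_low / bias_max    # ≈ 98,000 -- "could easily drop to the 100,000 level"
combined_high = stat_high / bias_min  # upper end unchanged
print(f"{combined_low:,.0f} to {combined_high:,.0f}")
```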

References & further reading

Research disputing or indirectly contradicting the mortality estimates cited by Project Censored:

1. Estimating mortality in civil conflicts: lessons from Iraq, by Debarati Guha-Sapir, Olivier Degomme. Centre for Research on the Epidemiology of Disasters, University of Louvain, School of Public Health, Brussels. http://www.cedat.be/sites/default/files/WP%20Iraq_0.pdf

2. Wartime estimates of Iraqi civilian casualties, by Beth Osborne Daponte. International Review of the Red Cross, No. 868. http://tinyurl.com/48mq63

3. Violence-Related Mortality in Iraq from 2002 to 2006, Iraq Family Health Survey (IFHS) Study Group. The New England Journal of Medicine, Volume 358:484-493. http://tinyurl.com/yoysuf

4. Bias in Epidemiological Studies of Conflict Mortality, by Neil F. Johnson, Michael Spagat, Sean Gourley, Jukka-Pekka Onnela, Gesine Reinert. Journal of Peace Research, Vol. 45, No. 5. http://jpr.sagepub.com/cgi/content/abstract/45/5/653

5. Sampling bias due to structural heterogeneity and limited internal diffusion, by Jukka-Pekka Onnela, Neil F. Johnson, Sean Gourley, Gesine Reinert, Michael Spagat. http://arxiv.org/abs/0807.4420

6. Ethical and Data-Integrity Problems in the Second Lancet Survey of Mortality in Iraq, by Michael Spagat, Department of Economics, Royal Holloway College. http://tinyurl.com/4xsjtl

7. Confidence Intervals for the Population Mean Tailored to Small Sample Sizes, with Applications to Survey Sampling, by Michael Rosenblum, Mark J. van der Laan. University of California, Berkeley Division of Biostatistics Working Paper Series. http://www.bepress.com/ucbbiostat/paper237/

8. Reality checks: some responses to the latest Lancet estimates, by Hamit Dardagan, John Sloboda, and Josh Dougherty. Iraq Body Count Press Release 14, Oct 2006. http://tinyurl.com/ysfpbj

9. Retrospective two-stage cluster sampling for mortality in Iraq, by Seppo Laaksonen, International Journal of Market Research, Vol. 50, No. 3, 2008. http://tinyurl.com/4yawmx

10. Mainstreaming an Outlier: The Quest to Corroborate the Second Lancet Survey of Mortality in Iraq, by Michael Spagat, Department of Economics, Royal Holloway College. http://tinyurl.com/46v8jy

11. Iraq Living Conditions Survey (ILCS) 2004, United Nations Development Programme. http://tinyurl.com/5yfyye

12. Mortality after the 2003 invasion of Iraq: Were valid and ethical field methods used in this survey?, by Madelyn Hsiao-Rei Hicks. Households in Conflict Network, The Institute of Development Studies, University of Sussex. 1 December 2006. http://www.hicn.org/research_design/rdn3.pdf

13. “Mortality after the 2003 invasion of Iraq: A cross-sectional cluster sample survey”, by Burnham et al: An Approximate Confidence Interval for Total Number of Violent Deaths in the Post Invasion Period, by Mark J. van der Laan, Division of Biostatistics, University of California, Berkeley, October 26, 2006. http://socrates.berkeley.edu/~jewell/lancet061.pdf

14. Lancet 2006 study criticised – letters from researchers published in the Lancet journal, 2007; 369.