Thursday, October 12, 2006

Losing the American Statistical Association Vote?

by Tom Bozzo

The WPE, on the latest Lancet study of excess mortality in Iraq due to the war (h/t Ken):
President Bush slammed the report Wednesday during a news conference in the White House Rose Garden. "I don't consider it a credible report. Neither does Gen. (George) Casey," he said, referring to the top ranking U.S. military official in Iraq, "and neither do Iraqi officials."

"The methodology is pretty well discredited," he added.
The Shakespeare's Sister comments section responds at the appropriate level:
Oh yes. Bubble Boy knows statistical methodology like the back of his hand. Fo' shizzle.
More seriously, Daniel Davies's reviews at CT (1, 2) of the totally innumerate and merely bad objections to the 2004 study by the same authors remain an appropriate entry point to discussion of the latest results. They've been widely re-linked, but in matters of gravity, more never hurts, eh?

Innumeracy indeed reigns in the gamma quadrant. The famed statistician Glenn Harlan Reynolds starts his post on the total-innumeracy side by approvingly linking a post by Tim Blair (which itself makes some pointless comparisons with various WWII civilian death figures) with the note that the researchers supposedly based their results on a "paucity of actual data." By this, Blair means that he objects to inflating the sample results to the population figures that are (duh) the object of the study. (At this point, an admission of never having taken a sufficiently advanced statistics class, having slept through most of the lectures, and/or having flunked the class would be appropriate.) Measuring population characteristics with what seem to be amazingly small numbers of observations is what statistical surveys do. By this "logic," it would be conceptually worse to try to represent 60 million or so votes with the 500 or so respondents to a phone survey.
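The phone-survey point can be made concrete. Under the textbook formula for a proportion from a simple random sample, the margin of error depends on the sample size, not on the size of the population being measured — a minimal sketch (the function and numbers are my illustration, not anything from the linked posts):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion estimated
    from a simple random sample of n respondents. Note that n is
    the only size that appears -- the population size does not."""
    return z * math.sqrt(p * (1 - p) / n)

# A 500-respondent poll pins down a national vote share to within
# roughly 4.4 percentage points, whether the electorate is 60,000
# or 60 million:
print(round(margin_of_error(500) * 100, 1))
```

Quadrupling the sample only halves the margin of error, which is why "amazingly small" samples are the norm, not a scandal.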

In fact, the epidemiology experts quoted in the press as "skeptics" don't question the broad sampling methodology employed in the Lancet study. Rather, the concerns relate to secondary details: the adequacy of the sample size (discussed below) and the treatment of non-sampling errors in the study.

The sample size critique seems to flow mainly from this statement, obtained by the NY Times:
Robert Blendon, director of the Harvard Program on Public Opinion and Health and Social Policy, said interviewing urban dwellers chosen at random was “the best of what you can expect in a war zone.”

But he said the number of deaths in the families interviewed — 547 in the post-invasion period versus 82 in a similar period before the invasion — was too few to extrapolate up to more than 600,000 deaths across the country.
As a statement relating to the sampling methodology, Blendon's claim would have to be regarded as conclusory, since it's not rocket science to determine a target sample size to obtain a desired level of precision in the result (and the Lancet article is clear as to what they intended to be able to measure given their sample [*]). Like another widely-quoted actuality from biostatistician Donald Berry, it is more properly a critique of the potential role of non-sampling errors, or what Berry termed the study's "tone of accuracy." (**) I say "non-sampling" insofar as the actual paper reports sampling margins of error (95% confidence intervals) for what looks to be every sample-based result that's mentioned in the text.
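To illustrate the "not rocket science" point, here is the standard back-of-the-envelope target-sample-size calculation, inflated by a design effect to account for cluster rather than simple random sampling. All the specific numbers here (the 2% rate, the half-percent precision, the design effect of 2) are hypothetical choices of mine, not figures from the Lancet study:

```python
import math

def sample_size_for_moe(p, moe, z=1.96, deff=2.0):
    """Respondents needed so a proportion near p is estimated to
    within +/- moe at 95% confidence. deff is a design effect that
    inflates the simple-random-sample size to reflect clustering."""
    n_srs = (z / moe) ** 2 * p * (1 - p)
    return math.ceil(n_srs * deff)

# e.g., pinning a ~2% event rate down to +/- 0.5 percentage points
# with a design effect of 2 takes on the order of 6,000 people:
print(sample_size_for_moe(0.02, 0.005))
```

The point is that the required n falls out of the desired precision by arithmetic; a critic who thinks the sample was too small owes us the precision target under which it fails.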

Indeed, the Lancet article has a fairly extensive discussion of possible sources of non-sampling error, though, for reasons that should be obvious to those in the know, they are not quantified. While the factors cited by the researchers could lead to substantial errors, they are by no means confined to a single direction; some would tend to bias the results downward rather than upward.
I've seen other mischaracterizations ranging from innocent misstatements to complete idiocy.

In the former category, Cervantes (a non-critic whose post is very good; see the link above) states that response rates are not provided, when the report does indicate two forms of non-response (due to no answer and refusal to participate; both are reported at just under 1%, which is very low). Cervantes also raises the question of substitution of blocks of housing units within clusters, though as discussed above, this does not obviously affect results in the direction favored by study critics.

Less innocently, or so it would seem, a commenter on Juan Cole's post explaining how body-count methods could understate actual deaths implies that respondents produced more death certificates than were requested, by inappropriately comparing a pair of statistics: the fraction of deaths for which interviewers requested death certificates (87%) and the fraction of those requests for which a certificate was actually provided (92%). Needless to say, the upshot of the high confirmation rate is that any potential upward bias from unsubstantiated death reports in the survey would not change the picture of substantial excess mortality. (***)

On the utter stupidity front, the Rightwing Nut House lives up to its name in trying to make hay about a breakdown of casualties, rounded to whole percents, adding up to 101% instead of 100%. Commenters who can't conceive of rounding error should stay away from numbers. They also should get a Questionable Frame award for characterizing mere measurement of the war's human cost as "ghoulish" and "unseemly."
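For anyone who can't conceive of it, here is how whole-percent rounding legitimately produces a total of 101. The counts below are made up for illustration; they are not the study's figures:

```python
import math

def pct_half_up(count, total):
    """Percent of total, rounded half-up to the nearest whole number."""
    return math.floor(count * 100 / total + 0.5)

counts = [93, 73, 34]   # hypothetical breakdown; exact shares 46.5%, 36.5%, 17%
total = sum(counts)
pcts = [pct_half_up(c, total) for c in counts]
print(pcts, sum(pcts))  # [47, 37, 17] -- the rounded shares sum to 101
```

Two of the three shares sit exactly on a half-percent and round up, so the rounded column gains a point over the exact total of 100. Nothing is wrong with the underlying data.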

Finally, as is noted by Kieran Healy at CT, some of the criticism amounts to simple incredulity — e.g., Michael O'Hanlon of Brookings, quoted by the Washington Post:

"I do not believe the new numbers. I think they're way off," he said.

Other research methods on the ground, like body counts, forensic analysis and taking eyewitness reports, have produced numbers only about one-tenth as high, he said. "I have a hard time seeing how all the direct evidence could be that far off ... therefore I think the survey data is probably what's wrong."

O'Hanlon had published a paper before the fact using a variety of methods to predict a potential Iraqi death toll from an invasion that could easily range into the tens of thousands on both the military and civilian sides. While the invasion itself might have been less severe than the worst-case pre-war scenarios, O'Hanlon didn't seem to have particularly accounted for the extended insurgency. He noted that the relatively brief 1989 invasion of Panama resulted in something like 10 to 30 times as many Panamanian civilian deaths as U.S. military deaths — and the U.S. forces in Iraq would seem to be much more heavily armed and armored than the airborne forces that conducted most of the Panama operation, not to mention having access to far superior battlefield medicine. So, with 3,000 coalition soldiers dead, and another 20,000 wounded (thousands severely), it's not at all inconceivable a priori that the civilian toll over the time period could extend far beyond the body counts in which the administration's defenders have taken refuge.

So while exact magnitudes may be debated, there's in the end no doubt that the war has been hard on lots of Iraqis. In this regard, an Onion "priceless national treasure" moment — one of those headlines without a story — best sums things up:
New Woodward book blows the lid off what everybody already knew.

-------------------------------

(*) That was to be able to measure a doubling of the death rate with high degrees of confidence and power. The study's critics, of course, don't really note that the study design would not tend to identify relatively small increases in the death rate as statistically significant.
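A rough version of that design calculation, using the standard normal approximation for comparing two event rates, looks like the sketch below. The 5-per-1,000 baseline rate and the conventional 80% power / 5% significance choices are my illustrative assumptions, not numbers taken from the study:

```python
import math

def person_years_per_period(rate1, rate2):
    """Approximate person-years of observation needed in each period
    to detect a change from rate1 to rate2 (events per person-year),
    with 80% power at two-sided alpha = 0.05 (z-values hardcoded)."""
    z_alpha, z_beta = 1.959964, 0.841621
    var_sum = rate1 * (1 - rate1) + rate2 * (1 - rate2)
    return math.ceil((z_alpha + z_beta) ** 2 * var_sum / (rate1 - rate2) ** 2)

# Detecting a doubling of a 5-per-1,000 death rate to 10 per 1,000
# takes several thousand person-years of observation per period:
print(person_years_per_period(0.005, 0.010))
```

Run the same function for, say, a 20% increase instead of a doubling and the required observation balloons — which is the footnote's point: a design powered to detect a doubling will simply fail to flag smaller increases as significant, rather than inventing them.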

(**) The AP characterized Berry's reaction as follows:
Donald Berry, chairman of the statistics department at the University of Texas' M.D. Anderson Cancer Center in Houston, said he believes the study was done ''in a reasonable way.'' But he said the range of uncertainty given for the estimates was much too narrow, because of potential statistical biases in the survey.
(***) I read somewhere that some writers have suggested Iraqis might have hoodwinked the interviewers with forged death certificates. It doesn't seem very credible that, in a war zone, hundreds of Iraqis would keep forged certificates on hand just in case the U.N. or some academics happened to come by. But even if true, this theory would suggest that the Bushies have been doing just a grand job winning the hearts and minds of the Iraqi populace.
Comments:
Thanks for all the explanation. I'm still scarred by my undergrad stats class some (ahem) 15 years ago.
 
Mrs. Coulter is very young.

If I were bending over backwards to be nice to the current Administration, I would note that those interviewed had documentation of their statements slightly over 80% of the time.

So if (1) you're innumerate, (2) you ignore that cluster sampling tends to understate (and here clearly would do so), and (3) you assume that all those who didn't have ready documentation and didn't participate would report "0," you end up with a merely ca. 500,000 incremental, local-populace deaths.

That's ignoring the 300,000+ refugees, the dead soldiers from other countries, and the effect of destroying the country's infrastructure on development. But those are just casualties of war, no?
 
Mrs. C.: You're welcome. The methods used here are probably beyond most undergrad stats intros, not that it would make a difference for much discourse on the other side.

Ken: Thanks for doing that bit of math.
 