I talked about this last week. Vox is running a year-long series on "fatal medical errors." Last week I exposed some of the incoherent, misleading aspects of Sarah Kliff's opening salvo. Today I need to spend some time addressing the foundational reference points of her entire project, namely the idea that hundreds of thousands of Americans are dying due to physician incompetence. She cites two papers to support this. I am a very boring, uninteresting person without many hobbies or interests, so I printed off those papers and gave them a close read this week.
In 1999 the Institute of Medicine published a study called "To Err is Human" that claimed 100,000 people were dying every year from health care system malfeasance. The authors used data retrospectively gleaned from 1984 patient charts at several New York hospitals. I found the paper to be a little stingy with the details. (What defines an "adverse event"? To what extent are adverse events the result of physician negligence? Who decides?) Plus, it was based on data and medical practice paradigms from 30 years ago. All gallbladders were getting whacked out via giant RUQ saber-slash incisions 30 years ago. I don't even know if penicillin had been discovered in 1984. I think kids were still being hooked up to iron lungs for polio back then.
The second paper Ms. Kliff cites to fan the flames of outrage is one from the Journal of Patient Safety in 2013. Now that's more like it. Just two years ago. I sank my teeth into that one. This is the paper that unabashedly alleges that potentially (we'll get back to this potentiality later) 400,000 people are getting cut down every year by doctors afflicted with hands of death and destruction (HODAD syndrome). This one had my attention.
From the paper:

"In light of the evidence above, and especially that of the Weisman study,14 and although it is probably an underestimate, a minimum estimate of a 2-fold increase in the medical record–based estimate is reasonable to compensate for the known absence of evidence in medical records of errors of commission and the inability of the GTT to detect errors of omission even when the evidence that guidelines were not followed may be present in the medical record."
Egregious as that is, it gets worse. The number 200,000 seemed awfully suspicious to me. Why would deaths from medical errors double as medical technology became more refined and medical decisions more evidence-based and rigorous? So I checked the numbers. And immediately something seemed fishy. As a preface to his results section, Dr. James outlines the math the IOM used to calculate the infamous number of 100,000. It goes like this: number of nationwide hospitalizations x percentage of adverse events x percentage of preventable adverse events x percentage of deaths attributable to preventable adverse events. Note that the equation has four terms. Also note that each term after "percentage of adverse events" is a percentage of the term that comes before it. So the "percentage of deaths attributable to preventable adverse events" is not a percentage of the starting number of nationwide hospitalizations; rather, it is a percentage of a percentage. This matters, mathematically: 0.8% of a starting number X is a lot larger than 0.8% of some smaller number Y that is itself only a percentage of X. Is that clear?
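To make the percentage-of-a-percentage point concrete, here is a toy sketch with made-up round numbers (none of these are the IOM's actual figures):

```python
# Made-up round numbers, only to illustrate the percentage-of-a-percentage gap.
total = 1_000_000        # hypothetical starting total (think: hospitalizations)
adverse_share = 0.04     # hypothetical: 4% of the total have an adverse event

# 0.8% of the starting total itself:
print(round(0.008 * total))                    # → 8000

# 0.8% of the derivative number (4% of the total):
print(round(0.008 * (adverse_share * total)))  # → 320
```

Same 0.8%, a 25-fold difference in the result, which is why dropping an intermediate percentage from the chain inflates the final count.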
Anyway, Dr. James then goes on to detail the methodology of his own mathematical conclusions.
"This means that the study of patients hospitalized in North Carolina was heavily weighted compared with the other studies. Thus, there were a total of 4252 records reviewed (compiled from Table 2). Among the records reviewed, there were 38 total deaths associated with adverse events. The ratio projects to a death rate from adverse events of 0.89%. This is well below the percentages from Medicare and tertiary-care studies (1.1%–1.4%) and well above the data from the North Carolina study (0.60%). There were an estimated 34.4 million hospital discharges in 2007,26 and the average percentage of preventable adverse events among all adverse events in the 3 studies where this was reported or postulated was 69% (averaged from Table 2). Thus, the best estimate from combining these 4 studies is 34,400,000 × 0.69 × 0.0089 = 210,000 preventable adverse events per year that contribute to the death of hospitalized patients."
I was not a double major in chemical engineering and calculus in college. But this isn't right. You can't just eliminate a variable when calculating extrapolated outcomes. He just wantonly drops the term "percentage of adverse events" from the formula. So the 69% figure for "preventable adverse events" (which is supposed to be 69% of total adverse events) gets applied to total nationwide hospitalizations instead. That's a bad mistake!
So if we re-run the numbers using Dr. James' estimate of 14% for "adverse events," we get 34,000,000 x 0.14 x 0.69 x 0.0089 ≈ 29,000. That's not 210,000. And if we use the number the IOM used in their landmark "To Err is Human" study (3.7% for total adverse events), then the result is about 7,800 deaths due to avoidable errors. Now that's a number I can work with. 7,800 preventable deaths due to medical error would still be unacceptable, but I think you can further whittle away at it by examining the methodology of error determination. Not all surgical complications are "preventable errors." Not all urinary tract infections or hospital-acquired pneumonias or DVTs are preventable. "Never event" is often just a talking point for non-clinical healthcare administrators.
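For concreteness, here is that re-run as a quick sketch, plugging in the figures quoted above (34 million discharges, 69% preventable, 0.89% fatal); the rates are the ones cited in this post, not independently verified, and the 3/17 update below revisits this arithmetic:

```python
# Four-term chain: each rate applies to the term before it, not to the total.
discharges = 34_000_000   # approximate annual hospital discharges quoted above

def chained_deaths(adverse_rate, preventable_rate=0.69, fatal_rate=0.0089):
    """Multiply every term in the chain, including the adverse-event rate."""
    return discharges * adverse_rate * preventable_rate * fatal_rate

print(round(chained_deaths(0.14)))   # Dr. James' 14% adverse-event estimate → 29231
print(round(chained_deaths(0.037)))  # IOM's 3.7% adverse-event estimate → 7725
```

Roughly the 29,000 and 7,800 figures above, a far cry from the paper's 210,000.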
We all die. This is clear enough. As far as this statistical maelstrom of doctor-induced death goes, I suspect there is a far more reasonable explanation. People are living longer. People are not dying peacefully in their homes anymore. Too many die like dogs: intubated, slack-jawed, dehumanized in ICUs.

I got called today about an 89-year-old guy with a bleeding duodenal ulcer. He came in with heart failure and respiratory distress. A long-standing ulcer opened up. He started coughing bright red blood by the liter one morning. GI scoped him and had to abort mission; blood everywhere, call surgery. Just before the endoscopy he coded. After a couple rounds of epi, his vitals returned. After the endoscopic procedure he coded four more times. Then they called me. The guy was intubated, unresponsive, getting blood. The nurses were pouring it into him like a bucket leaking from six different places. Labile vitals. A river of fresh blood whooshing through his NG. I talked to the son. I said, listen man, his only chance at something resembling survival is an operation, but it's a faint resemblance at best. Maybe we get him off the OR table. It's unlikely he gets off the ventilator for weeks. He's coded five times. Who knows what his neurological injury is. I get what you're saying, the son says. They withdraw aggressive care. He dies.

But suppose the son says, doc, you said an operation is his only hope, go for it. I take him, and he survives the surgery, GDA tied off, but dies two weeks later from a "hospital-acquired" post-op pneumonia and an inability to wean from the vent. Then he's just another statistic, another victim of medical error.
Post Update 3/17/15:
Well, it seems I have stepped outside my realm of expertise into something beyond my pay grade (i.e., statistical analysis) and, as a result, have committed an egregious error. This post became a topic of conversation at one of the ScienceBlogs forums, and the smart people there made some valid critiques of the conclusions I drew. Furthermore, they were able to contact the author of the paper cited in my post, Dr. James, and ask him to comment. It turns out my error was rather amateurish. The "mystery of the missing fourth variable" was not such a mystery after all. Basically, two of the variables were combined (multiplication: it's fascinating!), making it seem like a factor had been dropped.
My apologies go out to Dr. James and anyone who read this post in its original form. A lot of what happens in Blog World is instantaneous and "off the cuff," but I need to do a better job vetting my thoughts before writing.
Less compelling was Dr. James' explanation in the paper of why the final tally was doubled to arrive at the total of 400,000 deaths. Also, this was a retrospective review of only 4252 charts, heavily weighted toward North Carolina. I am not sure that "n" is large enough to achieve any statistical significance. (No statistical analysis was offered in the paper.) And the database involved Medicare patients, i.e., generally older, less robust patients with many co-morbidities. Finally, I don't like the inconsistency in terminology these sorts of papers exhibit between how the data is derived and how the conclusions are described. To wit: incidents of "error" include events that merely "contribute to" a patient's death, or "hasten" death more quickly than would otherwise have occurred anyway. But then in the grand conclusions, in the titles of these papers, all nuance between the ultimate etiology of a patient's demise and mere "contributing factors" is suddenly lost, and we instead read headlines like: Medical Error Kills More People Than Cancer! I don't like that so much.