In a previous post I outlined the flaws and inconsistencies in the methodology used to compile the Scorecard. And I am not alone in my lack of enthusiasm; the consensus among surgeons and physicians across America has been strongly negative. Even the American College of Surgeons has weighed in, submitting the following op-ed to the Washington Post (unpublished as of today):
Surgeon ratings need to be a shared responsibility
The American College of Surgeons strongly believes that patients and their families deserve to have meaningful information available to assist them in selecting the right surgeon. This week, two public interest groups launched websites promising to assist with surgeon evaluation. Unfortunately, the usefulness of the information they shared is questionable for a number of reasons.
The two groups used differing methodologies, including how many years of Medicare data they reviewed, procedures studied, and rating scales used. A patient who visited both websites could potentially find the same surgeon rated very differently or only find a surgeon on one of the two websites.
Use of clinically validated data would have more fully taken into account the severity of the patient’s condition when assessing surgeon performance. For example, an 80-year-old diabetic patient with heart disease undergoing a gall bladder removal faces many more challenges than a healthy 40-year-old undergoing the same operation. Without factoring in surgeons’ success rate with the more challenging patients, the potential for wrongly directing patients away from these surgeons certainly increases. And as troubling, some insurers might restrict access to these surgeons in the future.
The importance of relying on clinical data to accurately measure surgeon performance is well documented in scientific literature, and clinical registries are considered the standard for collecting this information. As recently as this year, this point was underscored in a peer-reviewed article by Lawson et al in the Annals of Surgery.
Collection and dissemination of accurate clinical data, however, is a shared responsibility because it is a labor- and cost-intensive process. Private payors, government, professional societies, and public interest groups—all of whom are invested in transparency—must share this responsibility.
Two other issues bear consideration. First, surgery is a team experience. The surgeon works closely with the anesthesiologist and surgical nurses during an operation. While using clinical data can get us closer to measuring surgical performance, the reality is that in the operating room, many factors and many individuals contribute to the surgical outcome. Rating a surgeon’s skill in performing a particular operation, without factoring in these other considerations, leads to an incomplete analysis.
Second, we must ask ourselves how much data is helpful to a patient’s decision. The American College of Surgeons fully supports sharing the right data with the right person at the right time. We are open to collaborating with other stakeholders, including those in the public and private sector to identify the data that will serve the public interest.
At its core, the American College of Surgeons is committed to improving the care of the surgical patient and believes that sharing meaningful data is key to that endeavor. Let’s do it right and together.
The ProPublica measure is not valid. Though the methodology does account for some of the potential biases that might unjustly influence findings, it fails to account for another significant bias. For the ProPublica method to be a valid measure of surgical quality, all patients facing a potential readmission should have the same probability of being readmitted. Only then could readmission rates serve as a surrogate for complication rates and thus surgeon quality.
But patient factors such as the strength of a social support system, physician factors such as willingness to accept risk, and factors affecting access to care such as the presence of observation units or management in the emergency department all impact whether a patient will be readmitted. Indeed, CMS has stopped reimbursing hospitals for admissions lasting less than two days because it recognizes that the decision to admit a patient is arbitrary and that many of the same patients could be managed under observation.
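The problem with readmissions as a surrogate can be made concrete with a toy simulation. In the sketch below, two hypothetical surgeons have the *same* true complication rate, but their patients face different probabilities that a complication turns into a formal readmission (say, because one hospital routes borderline cases to an observation unit). The complication rate, case count, and readmission probabilities are all illustrative assumptions, not real data:

```python
import random

random.seed(0)

COMPLICATION_RATE = 0.05   # same true complication rate for both surgeons
N_PATIENTS = 100_000       # large sample so the gap is not sampling noise

# Hypothetical probabilities that a complication results in a formal
# readmission rather than observation-unit or ED management (assumed values).
P_READMIT_IF_COMPLICATION = {"Surgeon A": 0.9, "Surgeon B": 0.5}

observed_rates = {}
for surgeon, p_readmit in P_READMIT_IF_COMPLICATION.items():
    readmissions = sum(
        1
        for _ in range(N_PATIENTS)
        if random.random() < COMPLICATION_RATE      # a complication occurs...
        and random.random() < p_readmit             # ...and leads to readmission
    )
    observed_rates[surgeon] = readmissions / N_PATIENTS
    print(f"{surgeon}: observed readmission rate = {observed_rates[surgeon]:.2%}")
```

Surgeon A's observed readmission rate comes out nearly double Surgeon B's, even though both operate with identical skill. A scorecard built on readmissions alone would rank them very differently.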
The methodology needs improvement. Even with these adjustments to the model, like any new quality measure, it would need to be tested and validated before being presented as a tool to assist consumers in their medical decision-making. In summary: the model uses an indirect measure of complications that fails to properly account for the variation in the reasons for a readmission.
Here's what I would do:
- Mandatory reporting for all cholecystectomies performed in the United States, divided into elective versus emergent procedures.
- Data on common bile duct injuries, unexpected bile leaks, and intra-operative deaths compiled for every surgeon and made available publicly. Those surgeons who exceed expected complication rates would be red flagged.
- Higher-than-expected rates of conversion to open cholecystectomy, although not considered a "complication" by surgeons, would be published as an indirect indicator of surgeon skill.
- 30-day readmissions could be included, but only if the data are analyzed by practicing surgeons in order to exclude those admissions occurring due to unrelated medical issues.
- Surgical site infection (SSI) rates, although less useful in the realm of minimally invasive procedures, are always a reasonable, objective metric to include.
- Surgeons who perform a higher percentage of emergent or urgent laparoscopic cholecystectomies could be highlighted as potentially more technically adept, assuming complication rates are equivalent to those surgeons who tend to perform more elective procedures.
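The "red flag" idea in the list above amounts to an observed-to-expected (O/E) ratio check, a standard device in surgical quality reporting: compare each surgeon's observed complication count against the count a risk-adjusted benchmark would predict for their case mix. A minimal sketch follows; the surgeon names, counts, expected values, and the 1.5 threshold are all made-up examples, and the risk adjustment itself is assumed to happen upstream:

```python
# Sketch of an observed-to-expected (O/E) complication-ratio screen.
# All data below are illustrative, not real surgeon outcomes.

def flag_outliers(surgeons, threshold=1.5):
    """Return (name, O/E ratio) for surgeons whose observed complication
    count exceeds `threshold` times the risk-adjusted expected count."""
    flagged = []
    for name, observed, expected in surgeons:
        oe_ratio = observed / expected if expected else float("inf")
        if oe_ratio > threshold:
            flagged.append((name, round(oe_ratio, 2)))
    return flagged

cases = [
    # (surgeon, observed complications, risk-adjusted expected complications)
    ("Surgeon A", 4, 5.2),    # below expectation: no flag
    ("Surgeon B", 12, 5.8),   # roughly double expectation: flagged
    ("Surgeon C", 7, 6.5),    # slightly above, within threshold: no flag
]

print(flag_outliers(cases))
```

The threshold choice matters: set it too low and ordinary statistical noise in low-volume surgeons generates false flags, which is one reason the validation step above cannot be skipped.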
Not only should the data be compiled and published, but there ought to be an internal review process (perhaps run jointly by the American College of Surgeons and the American Board of Surgery) in which surgeons who exceed expected complication rates would be required to undergo remedial training or have their next 20 cases proctored via video analysis. Failure to comply could result in revocation of "board certified" status.
These are just random ideas from a nobody general surgeon in Cleveland. I am sure colleagues at the higher levels of my profession would have plenty of useful insight as well. We all have emails and published office numbers. The next time Marshall Allen et al. want to put together another physician evaluation tool, they can always reach out and drop us a line. We'd be happy to assist.