GlassHospital

Demystifying Medicine One Month at a Time

Questioning a Health Care Sacred Cow

If you’ve worked in U.S. health care for any length of time, you’ve no doubt lived through a period of impending ‘inspection’ by the Joint Commission at your hospital or health care organization. Stress levels among all staff inevitably rise in the run-up.

Everyone needs to look sharp, have their protocols down, and, most importantly, know where to find organizational policy information when it can’t be recalled on the spot.

One of the 800 lb. gorillas of the U.S. health care world, the JC (as it’s known) audits, inspects, and accredits nearly twenty-one thousand U.S. health care enterprises.

I was always under the impression that the JC had a complete monopoly in its market–that is, if your health care organization wanted to be accredited (the vital ‘seal of approval’ for your organization’s public relations and safety standards, but also key for reimbursement through CMS), then you had to play ball with them.

In 2012, one of the hospitals at which I worked decided to go in a different direction, choosing instead to work with the accrediting agency DNV, which has its origins in the world of Norwegian shipping. For real. As in, ocean liners need a ton of regulation and safety standards so that they don’t run into each other and sink. We’re always comparing health care to airlines, right? Maybe it’s not such a big stretch after all.

Like most of my physician colleagues who’d lived through years of JC audits, I was a bit flabbergasted: “You mean the JC actually has competition?” Apparently the JC controls a mere 80% of the market. Turns out it’s only a 785 lb. gorilla.

Even though this whole issue is a little bit “inside baseball,” I wrote an essay about it for NPR. My reasoning was that there’s always value in questioning monolithic conformity, and I had been genuinely surprised to learn that the JC actually had competition.

Now comes a study in BMJ, led by Harvard researcher Ashish Jha. The study compared more than 4,000 U.S. hospitals on outcomes for 15 common medical conditions and six common surgical procedures between 2014 and 2017, in a Medicare data set of more than four million patients.

What did the study find?

Interestingly, there was no statistical difference in 30-day mortality or readmission rates between patients seen at JC-accredited hospitals and those at hospitals accredited by ‘other independent organizations.’ There was a slight but not statistically significant benefit in mortality and readmission rates for JC accreditation vs. hospitals reviewed and accredited by state survey agencies.

The study raises a reasonable question: if there are no differences in patient outcomes between hospitals accredited by the JC and those accredited by state review agencies (government) or other independent agencies (other private organizations), then should the JC enjoy such massive industry dominance?

After all, many health care leaders describe the JC’s regulatory and inspection processes as burdensome, and argue that the whole preparation game and citation-fixing business is expensive and distracts from the core hospital mission: taking care of people.

Other JC critics point out that the organization is less than optimally transparent, electing to keep its inspection reports private, even though many health care enterprises flagged for violations are able to stay accredited.

Congress has even begun an investigation into possible lax oversight.

Apparently Jha’s work has struck a chord, as the BMJ piece drew some notable media coverage. For one, the Wall Street Journal ran a story about it (kept in front of its paywall) noting that hospitals pay the Commission an average of $18,000 for an inspection, plus annual fees of up to $37,000.

Cardiologist and prolific blogger John Mandrola also wrote an opinion piece titled “Joint Commission Accreditation: Mission Not Accomplished.” In it, Mandrola compares JC accreditation to medications or surgeries that fail to live up to evidence-based standards and subsequently fall out of practice. He concludes, “If the JC’s brand of accreditation can’t show benefit, then it too needs to be de-adopted.”

Having learned that there’s an emerging marketplace of agencies equipped to inspect hospitals and health care enterprises, I see an opportunity here: perhaps the agency offering the greatest value in terms of cost, reporting, and public accountability will triumph over a behemoth that seems too complacent and entrenched in its ways.

2 Comments

  1. This is a nice article on U.S. health care. Good one, thanks for sharing.

  2. I hope that people actually read the study by Lam and colleagues recently published in BMJ and do not take its findings and conclusions at face value. It has serious methodological flaws that make the comparison of accrediting organizations (AO) and state surveyors (SS) invalid.

    There were radical differences in hospital size and teaching status between AO and SS hospitals: 446 large hospitals were surveyed by AOs and only 4 by SSs, and there were 234 major teaching hospitals surveyed by AOs but zero by SSs (Table 1). Including categorical variables with very few or no observations in some categories violates basic principles of multivariate analysis. The large difference in hospital types surveyed by AOs and SSs is also of critical importance because large hospitals, especially major teaching hospitals, care for the most severely ill patients; the multivariate models for mortality and readmission were based on claims data that had no variables whatsoever to adjust for differences in severity of illness (despite the authors’ claim in the methods section). Many studies have shown that comparing hospital mortality rates adjusting for age and comorbidities alone without markers of severity of illness yields erroneous results.

    The analysis of surgical mortality is also deeply flawed. The total number of surgeries at the SS hospitals was extremely small for four of the six procedures, ranging from 154 cases for open abdominal aortic aneurysm repair to 685 cases for coronary artery bypass grafting (Appendix Table 2). With 1,063 hospitals in the SS group, this means that the vast majority of the hospitals did not even perform these four surgical procedures. In addition, despite these small numbers of cases, the authors combined the outcomes of the six types of surgery into a single multivariate model. This is problematic because the data in Appendix Table 2 show that 16,563 of the 20,627 surgical cases (80.3%) at SS hospitals were for hip replacement. So, the authors’ conclusion that surgical mortality rates were identical (2.4%) at AO and SS hospitals was based almost entirely on the lack of differences in mortality for hip surgery (mortality rates 0.5% for AO hospitals and 0.6% for SS hospitals). For three of the five other surgical procedures, the results favored AO hospitals (Appendix Table 4).
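    To make the weighting effect concrete, here is a minimal sketch of how a pooled rate responds to a gap in any one procedure, assuming for simplicity a similar case mix in both groups (hip replacement’s 80.3% share and its 0.5% vs. 0.6% mortality rates come from the appendix tables; the 5% weight in the comparison is a hypothetical illustration):

    \[
    \Delta\bar{m} \;=\; \sum_j w_j\,\Delta m_j
    \]

    With hip replacement at weight \(w = 0.803\) and a gap of 0.1 percentage points (0.6% vs. 0.5%), hip surgery contributes \(0.803 \times 0.1 \approx 0.08\) points to the pooled difference, whereas even a full 1-point gap in a procedure making up 5% of cases would contribute only \(0.05 \times 1 = 0.05\) points. Pooling therefore lets the dominant, low-mortality procedure swamp differences in the rarer, riskier ones.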

    The article has factual errors as well. The introduction states that SS hospitals are surveyed annually. However, a 2009 report from the U.S. Government Accountability Office raised concerns about the frequency of hospital surveys by state agencies and reported that 5% of non-accredited hospitals had not been surveyed for six years or more.

    Despite these methodological flaws and the clear bias against hospitals with more complex case mix, the mortality rate at AO hospitals for patients admitted with medical conditions was 0.4 percentage points lower than at SS hospitals and met traditional criteria for statistical significance (p<0.03). Nevertheless, the authors draw strong conclusions that hospital accreditation is not associated with lower mortality. The authors also minimize the importance of AO hospitals’ lower 30-day readmission rate for medical conditions vs. SS hospitals (22.4% vs. 23.2%, respectively; p < 0.001), saying that it was only “slightly” associated. Based on the 3 million medical admissions at Joint Commission-accredited hospitals, which represent 88% of all medical admissions to AO hospitals, the findings indicate that patients treated in Joint Commission-accredited hospitals experienced 12,000 fewer deaths and 24,000 fewer readmissions. These differences matter to patients.
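    For readers checking the arithmetic behind those counts, the figures follow from a back-of-the-envelope application of the rate differences to the 3 million admissions (not an adjusted estimate):

    \[
    0.004 \times 3{,}000{,}000 = 12{,}000 \ \text{deaths}, \qquad (0.232 - 0.224) \times 3{,}000{,}000 = 24{,}000 \ \text{readmissions}
    \]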

    We all try to practice evidence-based medicine. I hope we also practice evidence-based policy and analyze articles before drawing conclusions about their findings.

    David W. Baker, MD, MPH
    Executive Vice President, Healthcare Evaluation
    The Joint Commission
