Not long ago, we took a critical look at the, er, “evidence” behind the Department of Health and Human Services’ (HHS) (and CMS/Medicare’s) new policy regarding hospital readmissions within 30 days of discharge. For those who may have missed the excitement, these HHS/Medicare policies dole out financial punishments to hospitals found to have a higher-than-average percentage of patients with certain diagnoses (heart attack, heart failure and pneumonia) who are readmitted within a 30-day period. Rather than simply docking the hospitals for the cost of the readmission, or even charging a penalty on the admissions for each of these diagnoses, the federal government will seek to claw back a portion of payments for all Medicare patients admitted with any diagnosis over the course of the entire year. The largest potential reduction for a hospital would be one percent in FY 2013, two percent in FY 2014, and three percent in FY 2015 and beyond. This may not sound like much, but the Health Care Advisory Board estimates that about 60% of hospitals will be affected, to the tune of around $200 million per year in lost revenue.
The only problem with this program to ensure “quality” is that it does not appear to be based on any rational analysis of the available data concerning readmissions. In fact, rather than basing the program on actual data, the Medicare Payment Advisory Commission (which recommended the strategy to HHS) based its analysis on a computer simulation that did not in any way reflect reality. The result is that, statistically speaking, excellent hospitals are at least as likely to be punished by the government as not-so-good ones. It’s not the sort of strategy one could use to successfully train a pet, let alone operate a healthcare system, but what the heck. It’s the law of the land.
We’d sincerely hoped that this saga couldn’t get any more pathetic, but that turns out to have been wishful thinking. For this you can blame a recent study entitled “Risk Prediction Models for Hospital Readmission,” just published in the Journal of the American Medical Association by Dr. Devan Kansagara, et al.
Here’s the problem. Even if we were to (incorrectly) assume that the HHS/Medicare readmission policy was both appropriate and evidence-based, implementing it isn’t as simple as counting each hospital’s readmission rate per diagnosis and singling out those with the highest rates. Very sick patients, elderly patients, and those with multiple chronic diseases can be expected to require readmission at a higher rate than otherwise healthy people who happen to get pneumonia or have a heart attack. To compensate for this sort of patient selection bias, Medicare says it will “risk-adjust” the patients readmitted to each hospital. This would allow the regulatory bean counters to compare “patient apples” to “patient apples” much as they might any other commodity, and punish each deficient hospital with the correct financial penalty. This is an important part of making the regulatory process fair and balanced – so important that implementing the program without an effective method of adjusting for risk is frankly capricious, arbitrary and unethical.
How does one adjust for risk? It can be complicated. Ideally one would know in advance exactly which factors are most responsible for unpreventable readmissions, and be able to objectively measure and weight them appropriately.
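To make the idea concrete, here is a minimal sketch of the observed-over-expected comparison that risk adjustment typically boils down to. The numbers and the function are hypothetical illustrations, not CMS’s actual methodology: each patient gets a predicted readmission probability from some risk model, and a hospital looks “deficient” only if its observed readmissions exceed what its case mix predicted.

```python
# Illustrative sketch of observed/expected risk adjustment.
# All values are hypothetical; this is not the CMS formula.

def risk_adjusted_ratio(predicted_probs, readmitted):
    """Observed readmission count divided by the count the risk
    model expected given this hospital's patient mix.
    A ratio near 1.0 means performance matched the case mix;
    well above 1.0 suggests excess readmissions."""
    observed = sum(readmitted)          # actual readmissions (0/1 flags)
    expected = sum(predicted_probs)     # model-expected readmissions
    return observed / expected

# A hypothetical hospital with four patients, each carrying a
# model-predicted 25% readmission risk; one was actually readmitted.
predicted = [0.25, 0.25, 0.25, 0.25]
actual = [1, 0, 0, 0]

print(risk_adjusted_ratio(predicted, actual))  # 1.0 -> matched its case mix
```

The catch, of course, is that the entire calculation hinges on the predicted probabilities being accurate – which is precisely what the study discussed below calls into question.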
Because the risk-adjustment process is so important, Kansagara and colleagues took it upon themselves to survey the world’s literature on the predictive value of models used to assess readmission risk for the purpose of comparing hospital performance. They were able to find reports on 26 different models used in a variety of different countries, including the U.S., Australia, Canada, Ireland, Switzerland, and the United Kingdom. Their findings? If you’re a regular reader of this column you’ve probably already guessed the answer:
“Data Synthesis: Of 7843 citations reviewed, 30 studies of 26 unique models met the inclusion criteria. The most common outcome used was 30-day readmission; only 1 model specifically addressed preventable readmissions. Fourteen models that relied on retrospective administrative data could be potentially used to risk-adjust readmission rates for hospital comparison; of these, 9 were tested in large US populations and had poor discriminative ability (c statistic range: 0.55-0.65). Seven models could potentially be used to identify high-risk patients for intervention early during a hospitalization (c statistic range: 0.56-0.72), and 5 could be used at hospital discharge (c statistic range: 0.68-0.83). Six studies compared different models in the same population and 2 of these found that functional and social variables improved model discrimination. Although most models incorporated variables for medical comorbidity and use of prior medical services, few examined variables associated with overall health and function, illness severity, or social determinants of health.
“Conclusions: Most current readmission risk prediction models that were designed for either comparative or clinical purposes perform poorly. Although in certain settings such models may prove useful, efforts to improve their performance are needed as use becomes more widespread.”
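For readers unfamiliar with the c statistic quoted above: it is the probability that a model assigns a higher risk score to a patient who is readmitted than to one who is not, so 0.5 is a coin flip and 1.0 is perfect discrimination. The c statistics of 0.55–0.65 reported for the US models are barely better than chance. A minimal sketch, with made-up scores and outcomes, of how the statistic is computed by pairwise concordance:

```python
# Hypothetical illustration of the c statistic (a.k.a. the AUC).
# Data below is invented for demonstration only.

def c_statistic(scores, outcomes):
    """Fraction of (readmitted, not-readmitted) patient pairs in which
    the readmitted patient received the higher risk score.
    Ties count as half a concordant pair."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]  # readmitted
    neg = [s for s, y in zip(scores, outcomes) if y == 0]  # not readmitted
    concordant = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                concordant += 1.0
            elif p == n:
                concordant += 0.5
    return concordant / (len(pos) * len(neg))

# Eight hypothetical patients: model risk scores and 30-day outcomes.
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
outcomes = [1, 0, 1, 0, 1, 0, 0, 0]

print(c_statistic(scores, outcomes))  # 0.8 -- far better than the 0.55-0.65 reported
```

A model scoring 0.55, by contrast, orders a readmitted patient above a non-readmitted one only 55% of the time – hardly a basis for fining hospitals.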
Which of these models is Medicare going to use? We have no idea, and there’s a better than 50:50 chance that the folks at HHS don’t know either. But the poor predictive performance of virtually all of them is just one more reason that a program this poorly constructed should be shelved before it’s even started. How can hospitals be expected to improve their performance if the means used to evaluate them are shaky at best? Would anyone take the Olympics seriously if the judges in the long-jump measured the distance of each jump by pacing it off?
Like Caesar’s wife, the science behind the rules and regulations promulgated by HHS and the Centers for Medicare and Medicaid Services should be beyond reproach. This one doesn’t even come close.