Data can show us many things, not all of them real…

March 13, 2019 / By Richard Fuller, MS

Modern Healthcare recently published another article questioning the utility of the CMS Hospital Readmissions Reduction Program (HRRP). The article drew on two publications. The first returned to the theme that hospitals are now allowing patients to die rather than risk what was, at the time, an unquantified penalty for a readmission1 (for brevity, I will call this the “mortality” article). The second found that performance changes reported under the HRRP are a mirage caused instead by changes in coding practice2 (this I will call the “coding” article).

These two articles are obviously in conflict. The mortality article finds that a change in hospital practice patterns has occurred that is detrimental to patient health, while the coding analysis finds that little or nothing has changed across hospitals outside of the way in which hospitals capture and report diagnosis codes. Both articles conflict with the findings of the Medicare Payment Advisory Commission (MedPAC), the watchdog given the task of reviewing the effect of the HRRP, which reported that:

“…the Hospital Readmissions Reduction Program contributed to a significant decline in readmission rates without causing a material increase in emergency department (ED) visits or observation stays or an adverse effect on mortality rates.” (June 2018)

There are some high-level similarities in the approaches taken by the two articles. Both estimate changes in the risk-adjusted trend of readmission rates over an extended period of time (pre- and post-introduction of the HRRP), and both use a series of variables to assess the effect of changes in patient risk within their models. But the specific study designs differ between the two articles, and both differ from the models used by CMS in the HRRP.a

Since this is a blog, I will attempt to avoid becoming overly technical and perhaps tedious by pointing out a few key findings and assumptions from the articles. The mortality article found an increase in mortality in the 30 days after discharge, but no statistically significant net effect on mortality when it was measured for 45 days after admission. In other words, the increased mortality rate was found in post-acute care (PAC) when analyzed separately from the inpatient period, and not when the two periods were combined. This phenomenon is explainable by the generally accepted knowledge that there has been a pattern of earlier discharge and increasing PAC facility admission (often reported as attempts to reduce length of stay), and that comparing hospital inpatient mortality statistics without equalizing for a post-procedure or post-discharge period creates a bias in favor of lower length-of-stay hospitals (those that more frequently use PAC care to reduce length of stay).3 It is therefore not surprising that the authors found rising post-acute care mortality. Patients were not dying more frequently overall; they were dying after leaving the hospital.

The mortality article also recorded a 21 percent drop in hospitalizations between the pre- and post-HRRP periods and showed some large swings in the average number of risk factors reported for the population, but did not interpret these shifts as having a large effect on case mix.

The coding study begins by stating that for readmissions, “Most of the declines occurred during the period between the enactment of the ACA (March 2010) and the month when hospitals first faced penalties (October 2012).” This conclusion was derived by measuring change in the risk-adjusted trend of readmission rates. The article goes on to find that the declines in readmission rates were a product of changes in claim submission, coinciding with the HRRP timeline, that permitted more diagnosis codes to be captured. While the mortality article reported a change in the underlying rate of hospitalizations (the aforementioned 21 percent drop), the coding article observed a stunning 59 percent drop in monthly admissions for targeted conditions in the anticipation period (from 103,000 to 43,000 admissions per month), followed by a 57 percent increase in the post period (to 66,812 per month, a net 35 percent drop between the pre and post periods).b Although these numbers concur with a general picture of rapidly declining hospitalizations, their volatility is somewhat alarming and even baffling.
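The per-month figures and percentage swings cited here can be checked directly against the period totals and month counts the coding article reports in its appendix (reproduced in footnote b below). A quick sketch of the arithmetic:

```python
# Reproducing the per-month admission figures for targeted conditions
# from the coding article's Appendix Exhibit A2 (totals and month
# counts as reported in footnote b).
periods = {
    "pre": (4_037_827, 39),           # (total admissions, months)
    "anticipation": (1_274_870, 30),
    "post": (1_737_109, 26),
}

monthly = {name: total / months for name, (total, months) in periods.items()}
# pre ~103,534/month; anticipation ~42,496/month; post ~66,812/month

drop_anticipation = 1 - monthly["anticipation"] / monthly["pre"]  # ~59% drop
rise_post = monthly["post"] / monthly["anticipation"] - 1         # ~57% rise
net_drop = 1 - monthly["post"] / monthly["pre"]                   # ~35% net drop

print(f"{drop_anticipation:.0%} {rise_post:.0%} {net_drop:.0%}")  # 59% 57% 35%
```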

The mortality article also gave specific examples of changes in the prevalence of reported risk factors that affect the HRRP risk model, while the coding article reported the general increase in the number of codes. Significantly, the mortality article showed that changes in the prevalence of coded complications and comorbidities (CCs) affecting the risk adjustment algorithm exhibited rapid growth between the 2005-2007 (pre) period and the 2007-2010 (anticipation) period before mostly leveling off. This finding is to be expected, since in 2008 CMS rebased the IPPS using MS-DRGs, resulting in a large and systematic period of coding and documentation improvement along with a 5 percent impact on payment. The coding article did not present an analysis of how coding practice changed in the baseline “pre” period.

Put simply, the baseline trend observed in the pre-period was more likely than the post-period trend to be significantly affected by coding change. Notably, the coding change in factors that affect the risk model, shown in the mortality article (and discussed endlessly as hospital “code creep”), is not reflected in Exhibit 2 of the coding article, which documents changes in the absolute number of reported codes over time.

MedPAC was mandated to conduct an analysis of the HRRP under the 21st Century Cures Act, and as part of that report it computed raw readmission rates (shown in MedPAC’s Figure 1.2 below). MedPAC’s raw numbers paint a very different picture from the one presented in the coding article.

Source: MedPAC, Mandated report: The effects of the Hospital Readmissions Reduction Program, p. 15, June 2018.

The first thing to observe in Figure 1.2 is that the rate of readmissions declined more rapidly from 2011 through 2016 than in the preceding years, a direct contradiction of the statement in the coding article that, “Most of the declines occurred during the period between the enactment of the ACA (March 2010) and the month when hospitals first faced penalties (October 2012).” In fact, Exhibit 2 of the coding article shows similarly dramatic reductions in risk-adjusted rates for the same period whether considering only nine diagnosis codes or “all” codes.

One finding posited by the coding article is that “the HRRP had no effect on readmissions.” For risk-adjusted readmission rates to hold constant while raw readmission rates decrease, the patients for whom risk is being measured must be less risky (less sick), offsetting the drop in the raw rate. But to accept this sequence of events we also have to account for the decline in hospital admissions. That decline fits with what we know of the last decade of clinical practice: a shift to observation stays and the treatment of less-sick patients in outpatient settings. Therefore, to accept that risk-adjusted readmission rates held steady while raw readmission rates declined rapidly, we have to explain why average patient severity was increasing more rapidly in the pre-period than in the post period, even though the post period saw a large migration of admissions to outpatient care. No simple explanation comes to mind that squares this circle.
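To make this mechanic concrete, here is a stylized sketch with hypothetical numbers (not drawn from either study, and simplifying the actual CMS methodology): a risk-standardized rate is commonly computed as the observed-to-expected ratio scaled by a national average, so it can only stay flat while the raw rate falls if the model's expected (measured) risk falls in step.

```python
# Stylized illustration with hypothetical numbers: a risk-standardized
# readmission rate computed as (observed / expected) * national average.
# If the raw (observed) rate falls but the standardized rate is flat,
# the model's expected rate must have fallen proportionally -- i.e.,
# measured patient risk declined.
NATIONAL_AVG = 0.18  # assumed national readmission rate

def standardized(observed: float, expected: float) -> float:
    """Risk-standardized readmission rate (simplified form)."""
    return (observed / expected) * NATIONAL_AVG

year1 = standardized(observed=0.20, expected=0.20)  # raw rate 20%
year2 = standardized(observed=0.16, expected=0.16)  # raw rate fell to 16%

# The raw rate fell by a fifth, yet the standardized rate is unchanged --
# possible only because expected risk fell by the same fifth.
print(year1, year2)  # 0.18 0.18
```

The tension in the text follows directly: a shift of less-sick patients to outpatient care should push the expected risk of the remaining inpatients up, not down.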

So we return to the purpose of this blog. How can a model be objective when it returns results that contradict other studies and that are patently at odds with other changes we know to be occurring? For me, the answer lies in the inability to control for so many complex, inter-related changes over an extended period of time. Both models assumed consistency of coding in the 2007-2010 period when projecting trends. They also assumed consistent relationships between predictor variables and outcome effects over the study period, an assumption that the shift to observation (outpatient) stays and the greater use of post-acute care are likely to have undermined.

While we have stated previously that there is much to fix in the HRRP, we should also be careful when reviewing and reacting to such sweeping statements as those made in these two articles. The reality is that many hospitals have both worked at reducing readmissions and reported positive results from their efforts. Accepting that they have in fact failed to change care processes (or simply reported more codes to make it seem like progress) should require greater scrutiny. Similarly, those asserting that hospitals have compromised patient safety in response to financial incentives (and anyone familiar with risk management and litigation will be skeptical!) need to think through the wider context of the changes in clinical practice within which their study sits.

Data can be instructive, but only as far as the underlying model accounts for the myriad changes taking place. It is also important to recognize and respect the role played by MedPAC as an independent arbiter of claims (and counterclaims) surrounding the efficacy of the Medicare program.

Richard Fuller, MS, is an economist with 3M Clinical and Economic Research.


a For example, neither article calculates independent coefficients for the study periods (the mortality article stating that “the difference in case mix across the study periods is not large”); the mortality article uses propensity scoring while the coding article uses logistic regression; and neither uses the HGLM clustering model employed by CMS, as each admission is treated as an event independent of the hospital. The studies use only inpatient and outpatient diagnoses, while CMS also uses diagnoses from professional claims.

b 4,037,827 admissions over the 39-month pre-period; 1,274,870 over the 30-month anticipation period; and 1,737,109 over the 26-month post-period (Appendix Exhibit A2).


  1. Wadhera RK, Joynt Maddox KE, Wasfy JH, Haneuse S, Shen C, Yeh RW. Association of the Hospital Readmissions Reduction Program With Mortality Among Medicare Beneficiaries Hospitalized for Heart Failure, Acute Myocardial Infarction, and Pneumonia. JAMA. 2018;320(24):2542. doi:10.1001/jama.2018.19232.
  2. Ody C, Msall L, Dafny LS, Grabowski DC, Cutler DM. Decreases In Readmissions Credited To Medicare’s Program To Reduce Hospital Readmissions Have Been Overstated. Health Aff (Millwood). 2019;38(1):36-43. doi:10.1377/hlthaff.2018.05178.
  3. Drye EE, Normand S-LT, Wang Y, et al. Comparison of Hospital Risk-Standardized Mortality Rates Calculated by Using In-Hospital and 30-Day Models: An Observational Study With Implications for Hospital Profiling. Ann Intern Med. 2012;156(1_Part_1):19. doi:10.7326/0003-4819-156-1-201201030-00004.