
Hybrid Hospital-Wide Readmission (HWR) Measure with Claims and Electronic Health Record Data

CBE ID
2879e
1.4 Project
Endorsement Status
1.1 New or Maintenance
Previous Endorsement Cycle
Is Under Review
Yes
1.3 Measure Description

The Hybrid Hospital-Wide Readmission (HWR) Measure with Claims and Electronic Health Record Data assesses the facility-level risk-standardized readmission rate (RSRR) within 30 days of discharge from an inpatient admission, among Medicare Fee-For-Service (FFS) and Medicare Advantage (MA) patients aged 65 years and older. 

Index admissions are divided into five groups based on their reason for hospitalization (surgery/gynecology, general medicine, cardiorespiratory, cardiovascular, and neurology); the final measure score (a single risk-standardized readmission rate) is calculated from the results of these five groups, modeled separately. Variables from administrative claims and electronic health records are used for risk adjustment.

        • 1.5 Measure Type
          1.6 Composite Measure
          No
          1.7 Electronic Clinical Quality Measure (eCQM)
          1.8 Level Of Analysis
          1.9 Care Setting
          1.10 Measure Rationale

          Hospital readmission, for any reason, is disruptive to patients and caregivers, costly to the healthcare system, and puts patients at additional risk of hospital-acquired infections and complications. Readmissions are also a major source of patient and family stress and may contribute substantially to loss of functional ability, particularly in older patients. 

          Some readmissions are unavoidable and result from inevitable progression of disease or worsening of chronic conditions. However, readmissions may also result from poor quality of care or inadequate transitional care. Transitional care includes effective discharge planning, transfer of information at the time of discharge, patient assessment and education, and coordination of care and monitoring in the post-discharge period. Numerous studies have found an association between quality of inpatient or transitional care and early (typically 30-day) readmission rates for a wide range of conditions.1-8 

          Randomized controlled trials have shown that improvement in the following areas can directly reduce readmission rates: quality of care during the initial admission; communication with patients, their caregivers, and their clinicians; patient education; predischarge assessment; and coordination of care after discharge.9-24 Successful randomized trials have reduced 30-day readmission rates by 20-40%. Widespread application of these clinical trial interventions to general practice has also been encouraging. Since 2008, 14 Medicare Quality Improvement Organizations have been funded to focus on care transitions, applying lessons learned from clinical trials. Several have been notably successful in reducing readmissions within 30 days.31 Evidence that hospitals have been able to reduce readmission rates through these quality-of-care initiatives illustrates the degree to which hospital practices can affect readmission rates. 

          Despite these isolated successful interventions, the overall national readmission rate remains high, with nearly one-fifth of discharges followed by a 30-day readmission. Furthermore, readmission rates vary widely across institutions.25-27 Both the high baseline rate and the variability across institutions speak to the need for a quality measure to prompt more concerted and widespread action. 

          Given that studies have shown readmissions within 30 days to be related to quality of care, that interventions have been able to reduce 30-day readmission rates for a variety of specific conditions, and that high and variable readmission rates indicate opportunity for improvement, it is reasonable to consider an all-condition 30-day readmission rate as a quality measure.

          Core Clinical Data Elements (CCDE) are included in the Hybrid HWR measure to improve upon case-mix risk adjustment that uses only claims-based comorbidity information, by adding laboratory values and vital signs that reflect patients' clinical status at the start of the inpatient encounter.

          References

          1. Frankl SE, Breeling JL, Goldman L. Preventability of emergent hospital readmission. American Journal of Medicine. Jun 1991;90(6):667-674.
          2. Corrigan JM, Martin JB. Identification of factors associated with hospital readmission and development of a predictive model. Health Services Research. Apr 1992;27(1):81-101.
          3. Oddone EZ, Weinberger M, Horner M, et al. Classifying general medicine readmissions. Are they preventable? Veterans Affairs Cooperative Studies in Health Services Group on Primary Care and Hospital Readmissions. Journal of General Internal Medicine. Oct 1996;11(10):597-607.
          4. Ashton CM, Del Junco DJ, Souchek J, Wray NP, Mansyur CL. The association between the quality of inpatient care and early readmission: a meta-analysis of the evidence. Med Care. Oct 1997;35(10):1044-1059.
          5. Benbassat J, Taragin M. Hospital readmissions as a measure of quality of health care: advantages and limitations. Archives of Internal Medicine. Apr 24 2000;160(8):1074-1081.
          6. Courtney EDJ, Ankrett S, McCollum PT. 28-Day emergency surgical re-admission rates as a clinical indicator of performance. Annals of the Royal College of Surgeons of England. Mar 2003;85(2):75-78.
          7. Halfon P, Eggli Y, Pr, et al. Validation of the potentially avoidable hospital readmission rate as a routine indicator of the quality of hospital care. Medical Care. Nov 2006;44(11):972-981.
          8. Hernandez AF, Greiner MA, Fonarow GC, et al. Relationship between early physician follow-up and 30-day readmission among Medicare beneficiaries hospitalized for heart failure. JAMA. May 5 2010;303(17):1716-1722.
          9. Naylor M, Brooten D, Jones R, Lavizzo-Mourey R, Mezey M, Pauly M. Comprehensive discharge planning for the hospitalized elderly. A randomized clinical trial. Ann Intern Med. Jun 15 1994;120(12):999-1006.
          10. Naylor MD, Brooten D, Campbell R, et al. Comprehensive discharge planning and home follow-up of hospitalized elders: a randomized clinical trial. JAMA. Feb 17 1999;281(7):613-620.
          11. Krumholz HM, Amatruda J, Smith GL, et al. Randomized trial of an education and support intervention to prevent readmission of patients with heart failure. Journal of the American College of Cardiology. Jan 2 2002;39(1):83-89.
          12. van Walraven C, Seth R, Austin PC, Laupacis A. Effect of discharge summary availability during post-discharge visits on hospital readmission. Journal of General Internal Medicine. Mar 2002;17(3):186-192.
          13. Conley RR, Kelly DL, Love RC, McMahon RP. Rehospitalization risk with second-generation and depot antipsychotics. Annals of Clinical Psychiatry. Mar 2003;15(1):23-31.
          14. Coleman EA, Smith JD, Frank JC, Min S-J, Parry C, Kramer AM. Preparing patients and caregivers to participate in care delivered across settings: the Care Transitions Intervention. Journal of the American Geriatrics Society. Nov 2004;52(11):1817-1825.
          15. Phillips CO, Wright SM, Kern DE, Singa RM, Shepperd S, Rubin HR. Comprehensive discharge planning with postdischarge support for older patients with congestive heart failure: a meta-analysis. JAMA. Mar 17 2004;291(11):1358-1367.
          16. Jovicic A, Holroyd-Leduc JM, Straus SE. Effects of self-management intervention on health outcomes of patients with heart failure: a systematic review of randomized controlled trials. BMC Cardiovasc Disord. 2006;6:43.
          17. Garasen H, Windspoll R, Johnsen R. Intermediate care at a community hospital as an alternative to prolonged general hospital care for elderly patients: a randomized controlled trial. BMC Public Health. 2007;7:68.
          18. Mistiaen P, Francke AL, Poot E. Interventions aimed at reducing problems in adult patients discharged from hospital to home: a systematic meta-review. BMC Health Services Research. 2007;7:47.
          19. Courtney M, Edwards H, Chang A, Parker A, Finlayson K, Hamilton K. Fewer emergency readmissions and better quality of life for older adults at risk of hospital readmission: a randomized controlled trial to determine the effectiveness of a 24-week exercise and telephone follow-up program. Journal of the American Geriatrics Society. Mar 2009;57(3):395-402.
          20. Jack BW, Chetty VK, Anthony D, et al. A reengineered hospital discharge program to decrease rehospitalization: a randomized trial. Ann Intern Med. Feb 3 2009;150(3):178-187.
          21. Koehler BE, Richter KM, Youngblood L, et al. Reduction of 30-day postdischarge hospital readmission or emergency department (ED) visit rates in high-risk elderly medical patients through delivery of a targeted care bundle. Journal of Hospital Medicine. Apr 2009;4(4):211-218.
          22. Weiss M, Yakusheva O, Bobay K. Nurse and patient perceptions of discharge readiness in relation to postdischarge utilization. Medical Care. May 2010;48(5):482-486.
          23. Stauffer BD, Fullerton C, Fleming N, et al. Effectiveness and cost of a transitional care program for heart failure: a prospective study with concurrent controls. Archives of Internal Medicine. Jul 25 2011;171(14):1238-1243. 
          24. Voss R, Gardner R, Baier R, Butterfield K, Lehrman S, Gravenstein S. The care transitions intervention: translating from efficacy to effectiveness. Archives of Internal Medicine. Jul 25 2011;171(14):1232-1237.
          25. Keenan PS, Normand SL, Lin Z, et al. An administrative claims measure suitable for profiling hospital performance on the basis of 30-day all-cause readmission rates among patients with heart failure. Circ Cardiovasc Qual Outcomes. Sep 2008;1(1):29-37. 
          26. Krumholz HM, Lin Z, Drye EE, et al. An administrative claims measure suitable for profiling hospital performance based on 30-day all-cause readmission rates among patients with acute myocardial infarction. Circ Cardiovasc Qual Outcomes. Mar 2011;4(2):243-252. 
          27. Lindenauer PK, Normand SL, Drye EE, et al. Development, validation, and results of a measure of 30-day readmission following hospitalization for pneumonia. Journal of Hospital Medicine. Mar 2011;6(3):142-150.
          28. Keenan PS, Normand SL, Lin Z, et al. An administrative claims measure suitable for profiling hospital performance on the basis of 30-day all-cause readmission rates among patients with heart failure. Circ Cardiovasc Qual Outcomes. Sep 2008;1(1):29-37. doi:10.1161/circoutcomes.108.802686
          29. Krumholz HM, Lin Z, Drye EE, et al. An administrative claims measure suitable for profiling hospital performance based on 30-day all-cause readmission rates among patients with acute myocardial infarction. Circ Cardiovasc Qual Outcomes. Mar 2011;4(2):243-52. doi:10.1161/circoutcomes.110.957498
          30. Rothman MJ, Rothman SI, Beals J. Development and validation of a continuous measure of patient condition using the Electronic Medical Record. J Biomed Inform. Oct 2013;46(5):837-48. doi:10.1016/j.jbi.2013.06.011
          31. Colorado Foundation for Medical Care (CFMC). Care Transitions QIOSC. 2010. http://www.cfmc.org/caretransitions/. Accessed 2011.
          32. Ashton CM, Del Junco DJ, Souchek J, Wray NP, Mansyur CL. The association between the quality of inpatient care and early readmission: a meta-analysis of the evidence. Med Care. Oct 1997;35(10):1044-1059.
          1.25 Data Sources

          The components of this HWR measure, as specified in this CBE submission, use data from the following sources:

          Cohort: Medicare fee-for-service claims and Medicare Advantage encounters; Medicare enrollment data.

          Outcome: Medicare enrollment data.

          Risk adjustment: Medicare fee-for-service claims, Medicare Advantage encounters, supplemented with EHR data (core clinical data elements, or CCDE).

          Feasibility of data collection is addressed in Section 3.1, “Feasibility”. 

          Additional information on the data sources for this CBE submission can be found in Section 4.1 “Data and Samples” and in Table 7 of the Tables and Figures attachment.

        • 1.14 Numerator

          The outcome for this measure is 30-day readmission. We define readmission as an inpatient admission for any cause, with the exception of certain planned readmissions, within 30 days from the date of discharge from an eligible index admission. If a patient has more than one unplanned admission (for any reason) within 30 days after discharge from the index admission, only one is counted as a readmission for calculating the measure.

          1.14a Numerator Details

          The outcome for this measure is 30-day readmission. We define readmission as an inpatient admission for any cause, with the exception of certain planned readmissions, within 30 days from the date of discharge from an eligible index admission. If a patient has more than one unplanned admission (for any reason) within 30 days after discharge from the index admission, only one is counted as a readmission for calculating the measure. The outcome is a dichotomous yes or no indicating if each admitted patient has an unplanned readmission within 30 days. However, if the first readmission after discharge is considered planned, any subsequent unplanned readmission is not counted as an outcome for that index admission because the unplanned readmission could be related to care provided during the intervening planned readmission rather than during the index admission.

          The measure counts readmissions to an acute care hospital for any cause within 30 days of the date of discharge from the index admission, excluding planned readmissions as defined below.
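The outcome logic above can be sketched in Python. This is an illustrative simplification, not the measure's official specification; the data shapes and function name are assumptions.

```python
from datetime import date, timedelta

def readmission_outcome(index_discharge_date, subsequent_admissions):
    """Return 1 if the index admission has a qualifying unplanned
    readmission within 30 days of discharge, else 0.

    `subsequent_admissions` is a list of (admit_date, is_planned) tuples
    sorted by admission date; the field layout is illustrative only.
    """
    window_end = index_discharge_date + timedelta(days=30)
    for admit_date, is_planned in subsequent_admissions:
        if not (index_discharge_date < admit_date <= window_end):
            continue
        # The first readmission in the window decides the outcome: if it
        # is planned, later unplanned admissions are not counted, because
        # they could relate to care during the intervening planned stay.
        return 0 if is_planned else 1
    return 0

# Example: the first post-discharge admission is planned, so the later
# unplanned admission does not count as a readmission outcome.
admissions = [(date(2023, 3, 10), True), (date(2023, 3, 20), False)]
print(readmission_outcome(date(2023, 3, 1), admissions))  # → 0
```

Note how the sketch also encodes the dichotomous (yes/no) nature of the outcome: at most one readmission is counted per index admission.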

          Planned Readmission Algorithm (Version 4.0)

          The planned readmission algorithm is a set of criteria for classifying readmissions as planned using Medicare claims and administrative data. The algorithm identifies admissions that are typically planned and may occur within 30 days of discharge from the hospital.

          The planned readmission algorithm has three fundamental principles:

          1. A few specific, limited types of care are always considered planned (transplant surgery, maintenance chemotherapy/immunotherapy, rehabilitation);

          2. Otherwise, a planned readmission is defined as a non-acute readmission for a scheduled procedure; and,

          3. Admissions for acute illness or for complications of care are never planned.

          The algorithm was first created in 2011 during development of this measure. The measure uses version 4.0 of the algorithm (released in 2015 and updated annually); the algorithm is reviewed yearly to address coding changes.

          The planned readmission algorithm and associated code tables are attached (Data Dictionary). More details on the Planned Readmission Algorithm can be found in the Hybrid HWR Comprehensive Methodology Report, also attached.
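The three principles above can be sketched as a classification function. This is a sketch only: the real algorithm (version 4.0) uses the specific AHRQ CCS code tables in the attached Data Dictionary, and the category labels below are hypothetical stand-ins.

```python
def classify_readmission(procedure_categories, principal_diagnosis,
                         always_planned, potentially_planned,
                         acute_diagnoses):
    """Classify a readmission as "planned" or "unplanned" following the
    three principles of the planned readmission algorithm (sketch)."""
    # Principle 1: a few specific care types are always planned
    # (transplant surgery, maintenance chemo/immunotherapy, rehabilitation).
    if any(p in always_planned for p in procedure_categories):
        return "planned"
    # Principle 3: admissions for acute illness or for complications of
    # care are never planned, even if a scheduled procedure occurred.
    if principal_diagnosis in acute_diagnoses:
        return "unplanned"
    # Principle 2: otherwise, a non-acute readmission for a scheduled
    # procedure is planned.
    if any(p in potentially_planned for p in procedure_categories):
        return "planned"
    return "unplanned"

# Illustrative category labels (not real CCS codes):
ALWAYS_PLANNED = {"transplant", "maintenance-chemo", "rehabilitation"}
POTENTIALLY_PLANNED = {"knee-replacement"}
ACUTE_DIAGNOSES = {"acute-mi", "sepsis"}

# A potentially planned procedure with an acute principal diagnosis is
# still unplanned (principle 3 overrides principle 2).
print(classify_readmission(["knee-replacement"], "acute-mi",
                           ALWAYS_PLANNED, POTENTIALLY_PLANNED,
                           ACUTE_DIAGNOSES))  # → unplanned
```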

        • 1.15 Denominator

          Admissions are included if all of the following criteria are met:

          •  Enrolled in Medicare fee-for-service (FFS) Part A or Medicare Advantage for the 12 months prior to the date of admission and during the index admission.
            • Rationale: The 12-month prior enrollment criterion ensures that the comorbidity data used in risk adjustment can be captured from inpatient claims data in the 12 months prior to the index admission. Enrollment during the index admission is needed to qualify for the cohort and to ensure availability of data from the index admission for risk adjustment.
          • Aged 65 or over.
            • Rationale: Medicare beneficiaries younger than 65 are not included in the measure because they are considered to be too clinically distinct from Medicare beneficiaries who are 65 or older. 
          • Discharged alive from a non-federal short-term acute care hospital.
            • Rationale: It is only possible for patients to be readmitted if discharged alive.
          • Not transferred to another acute care facility.
            • Rationale: Hospitalizations that result in a transfer to another acute care facility are not included in the measure because the measure’s focus is on admissions that result in discharge to a non-acute care setting (for example, to home or a skilled nursing facility).

          The measure aggregates the ICD-10 principal diagnosis and all procedure codes of the index admission into clinically coherent groups of conditions and procedures (condition categories or procedure categories) based on the v2019.1 Agency for Healthcare Research and Quality (AHRQ) Clinical Classification Software (CCS) beta maps. There are 285 mutually exclusive AHRQ condition categories, most of which are single, homogenous diseases such as pneumonia or acute myocardial infarction. Some are aggregates of conditions, such as “other bacterial infections.” There are also 231 mutually exclusive procedure categories.

          Using the AHRQ CCS procedure and condition categories, the measure assigns each index hospitalization to one of five mutually exclusive specialty cohorts: surgery/gynecology, cardiorespiratory, cardiovascular, neurology, and medicine. The rationale behind this organization is that conditions typically cared for by the same team of clinicians are expected to experience similar levels of readmission risk. Please see attached figure HWR Flow Diagram of Inclusion and Exclusion Criteria and Specialty Cohort Assignment for the Index Admission.

          1.15a Denominator Details

          Please see Figure 1 (in the attachment "HHWR Cohort Flow Chart") and Section 1.15b for an overview of cohort inclusions, and the attached data dictionary for further details that define the cohort.

          Defining the Specialty Cohorts

          The measure aggregates the ICD-10 principal diagnosis and all procedure codes of the index admission into clinically coherent groups of conditions and procedures (condition categories or procedure categories) based on the v2019.1 Agency for Healthcare Research and Quality (AHRQ) Clinical Classification Software (CCS) beta maps. There are about 300 mutually exclusive AHRQ condition categories, most of which are single, homogenous diseases such as pneumonia or acute myocardial infarction. Some are aggregates of conditions, such as “other bacterial infections.” There are also about 230 mutually exclusive procedure categories.

          Please see Figure 1 for a flow chart that shows how admissions are assigned to specialty cohorts. Using the AHRQ CCS procedure and condition categories, the measure assigns each index hospitalization to one of five mutually exclusive specialty cohorts: surgery/gynecology, cardiorespiratory, cardiovascular, neurology, and medicine. The rationale behind this organization is that conditions typically cared for by the same team of clinicians are expected to experience similar levels of readmission risk.

          The measure first assigns admissions with qualifying AHRQ procedure categories to the Surgical/Gynecological Cohort. This cohort includes admissions likely cared for by surgical or gynecological teams.

          The measure then sorts admissions into one of the four remaining specialty cohorts based on the AHRQ diagnosis category of the principal discharge diagnosis:

          The Cardiorespiratory Cohort includes several condition categories with very high readmission rates such as pneumonia, chronic obstructive pulmonary disease, and heart failure. These admissions are combined into a single cohort because they are often clinically indistinguishable, and patients are often simultaneously treated for several of these diagnoses.

          The Cardiovascular Cohort includes condition categories such as acute myocardial infarction that in large hospitals might be cared for by a separate cardiac or cardiovascular team.

          The Neurology Cohort includes neurologic condition categories such as stroke that in large hospitals might be cared for by a separate neurology team.

          The Medicine Cohort includes all non-surgical patients who were not assigned to any of the other cohorts.

          The full list of the specific diagnosis and procedure AHRQ CCS categories and ICD-10 codes used to define the specialty cohorts are attached in the Data Dictionary.
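The cohort-assignment hierarchy described above (procedures first, then principal diagnosis) can be sketched as follows. The CCS category sets are placeholders for the tables in the attached Data Dictionary, not actual code lists.

```python
def assign_specialty_cohort(procedure_ccs, principal_diagnosis_ccs,
                            surgical_ccs, cardiorespiratory_ccs,
                            cardiovascular_ccs, neurology_ccs):
    """Assign an index admission to one of five mutually exclusive
    specialty cohorts (sketch; category sets are illustrative)."""
    # Qualifying procedure categories take precedence: these admissions
    # were likely cared for by surgical or gynecological teams.
    if any(p in surgical_ccs for p in procedure_ccs):
        return "surgery/gynecology"
    # Otherwise sort by the AHRQ category of the principal diagnosis.
    if principal_diagnosis_ccs in cardiorespiratory_ccs:
        return "cardiorespiratory"
    if principal_diagnosis_ccs in cardiovascular_ccs:
        return "cardiovascular"
    if principal_diagnosis_ccs in neurology_ccs:
        return "neurology"
    # All remaining non-surgical admissions fall into medicine.
    return "medicine"

# Hypothetical category identifiers for illustration:
SURGICAL = {"CCS-P158"}
CARDIORESPIRATORY = {"CCS-D108", "CCS-D122"}
CARDIOVASCULAR = {"CCS-D100"}
NEUROLOGY = {"CCS-D109"}

print(assign_specialty_cohort([], "CCS-D122", SURGICAL, CARDIORESPIRATORY,
                              CARDIOVASCULAR, NEUROLOGY))  # → cardiorespiratory
```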

          1.15d Age Group
          Older Adults (65 years and older)
        • 1.15b Denominator Exclusions

          The measure excludes index admissions for patients who meet any of the following criteria:

          1. Admitted to Prospective Payment System (PPS)-exempt cancer hospitals;

          2. Without at least 30 days post-discharge enrollment in Medicare FFS or Medicare Advantage;

          3. Discharged against medical advice (AMA);

          4. Admitted for primary psychiatric diagnoses;

          5. Admitted for rehabilitation; 

          6. Admitted for medical treatment of cancer;

          7. Admitted with a principal or secondary diagnosis of COVID-19; or,

          8. With fewer than 7 of the 13 CCDE reported.

          1.15c Denominator Exclusions Details

          This measure excludes index admissions for patients:

          1. Admitted to a Prospective Payment System (PPS)-exempt cancer hospital, identified by the Medicare provider ID.

          Rationale: These hospitals care for a unique population of patients that cannot reasonably be compared to the patients admitted to other hospitals.

          2. Without at least 30 days post-discharge enrollment in Medicare FFS or Medicare Advantage, identified with enrollment data from the Medicare Enrollment Database (EDB).

          Rationale: The 30-day readmission outcome cannot be assessed in this group since claims data are used to determine whether a patient was readmitted.

          3. Discharged against medical advice (AMA), identified using the discharge disposition indicator in claims data.

          Rationale: Providers did not have the opportunity to deliver full care and prepare the patient for discharge.

          4. Admitted for primary psychiatric disease, identified by a principal diagnosis in one of the specific AHRQ CCS categories listed in the attached data dictionary.

          Rationale: Patients admitted for psychiatric treatment are typically cared for in separate psychiatric or rehabilitation centers which are not comparable to acute care hospitals.

          5. Admitted for rehabilitation care, identified by the specific ICD-10 diagnosis codes included in CCS 254 (Rehabilitation care; fitting of prostheses; and adjustment of devices).

          Rationale: These admissions are not typically admitted to an acute care hospital and are not for acute care.

          6. Admitted for medical treatment of cancer, identified by the specific AHRQ CCS categories listed in the attached data dictionary.

          Rationale: These admissions have a very different readmission profile than the rest of the Medicare population, and outcomes for these admissions do not correlate well with outcomes for other admissions.

          7. With a principal or secondary diagnosis of COVID-19.

          Rationale: Patients with a primary or secondary diagnosis of COVID-19 are excluded from the measure cohort in response to the COVID-19 Public Health Emergency. 

          8. For whom fewer than 7 of the 13 CCDE are reported.

          Rationale: Patients for whom a large portion of the CCDE is missing are excluded because their clinical status upon hospital arrival would not be adequately captured.
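A minimal sketch of this completeness check, assuming the CCDE are represented as a mapping from element name to a possibly missing value (the element names below are illustrative, not the measure's official labels):

```python
def meets_ccde_completeness(ccde):
    """Exclusion 8: retain the admission only if at least 7 of the 13
    CCDE are reported (i.e., non-missing)."""
    reported = sum(1 for value in ccde.values() if value is not None)
    return reported >= 7

# Hypothetical admission with only 6 of 13 elements reported:
example = {"heart_rate": 88, "systolic_bp": 132, "temperature": 36.8,
           "respiratory_rate": 18, "oxygen_saturation": 96, "weight": None,
           "hematocrit": 41.0, "wbc_count": None, "potassium": None,
           "sodium": None, "bicarbonate": None, "creatinine": None,
           "glucose": None}
print(meets_ccde_completeness(example))  # → False (excluded)
```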

        • OLD 1.12 MAT output not attached
          Attached
          1.12 Attach MAT Output
          1.13 Attach Data Dictionary
          1.13a Data dictionary not attached
          No
          1.16 Type of Score
          1.17 Measure Score Interpretation
          Better quality = Lower score
          1.18 Calculation of Measure Score

          Below we provide the individual steps to calculate the measure score: 

          Define Cohort

           

          1. Create five mutually exclusive specialty cohorts using groups of related conditions or procedures. See Tab 1, “HWR Specialty Cohort Inclusions – Procedure and Diagnosis CCS Groups,” of the data dictionary, and the inclusion/exclusion indicators.

          2. Apply the inclusion/exclusion criteria to construct the measure cohort:

          • Identify discharges meeting the inclusion criteria described in the denominator section above and assign each to one of the five specialty cohorts. Eligible discharges occur from July 1 through June 30 of the respective measurement year.
          • Exclude admissions meeting any of the exclusion criteria described in the exclusion section above, including patients for whom fewer than 7 of the 13 CCDE are reported.

          Define outcome

          3. Derive the measure outcome of 30-day readmission by creating a binary flag for an unplanned hospital admission within 30 days of discharge from the index admission, as described above. 

          Define risk variables

          4. Use patients’ historical and index admission claims data, as well as CCDE values, to create risk-adjustment variables. (Note: Risk variables from claims are based on secondary diagnoses with present-on-admission (POA) indicators from index claims and all diagnosis codes from inpatient claims within one year prior to the index admission.)

          Measure score calculation 

          5. For each specialty cohort group, estimate a separate hierarchical logistic regression model (HGLM) to produce a standardized risk ratio (SRR), calculated as the ratio of the number of “predicted” readmissions to the number of “expected” readmissions at a given hospital. The HGLM is adjusted for age, selected clinical covariates, and a hospital-specific effect. Details about the risk-adjustment model can be found in the original measure development methodology report: https://www.qualitynet.org/inpatient/measures/readmission/methodology.

          6. Pool the specialty-cohort SRRs for each hospital using a volume-weighted geometric mean to create a hospital-wide SRR (or RSRR). Calculations can be found attached and posted at: https://www.qualitynet.org/inpatient/measures/readmission/methodology.

          7. Use statistical bootstrapping to construct a 95% confidence interval estimate for each facility’s RSRR. For more information about the measure methodology, please see the most recent Hybrid HWR Comprehensive Methodology Report attached and posted here: https://www.qualitynet.org/inpatient/measures/readmission/methodology.
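The pooling in step 6 can be sketched as follows. This is a simplification under stated assumptions (SRRs and cohort volumes already computed per step 5); the official calculation, including the bootstrap confidence intervals of step 7, is in the posted methodology report.

```python
import math

def hospital_wide_srr(cohort_srrs, cohort_volumes):
    """Pool specialty-cohort SRRs into a single hospital-wide SRR using
    a volume-weighted geometric mean (sketch of step 6)."""
    total = sum(cohort_volumes)
    # Weighted geometric mean: exp( sum_i w_i * ln(SRR_i) ),
    # where w_i is the cohort's share of the hospital's volume.
    log_pooled = sum((n / total) * math.log(srr)
                     for srr, n in zip(cohort_srrs, cohort_volumes))
    return math.exp(log_pooled)

# Example: two cohorts with SRRs 0.9 and 1.2 and volumes 300 and 100;
# the larger cohort pulls the pooled SRR below 1.
print(round(hospital_wide_srr([0.9, 1.2], [300, 100]), 3))  # → 0.967
```

The geometric (rather than arithmetic) mean keeps the pooled ratio symmetric in the sense that an SRR of 2 and an SRR of 0.5 at equal volume average to 1.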

          1.18a Attach measure score calculation diagram, if applicable
          1.19 Measure Stratification Details

          While the current measure is not yet stratified, the related claims-based measure has been stratified by dual eligibility, Area Deprivation Index, and race/ethnicity.  For details on how the claims-based measure has been stratified, please see this report: https://qualitynet.cms.gov/files/663cf02ecc07c26dc84863bf?filename=2024_DM_AUS_Report_v1.0.pdf.

          We note that while not currently stratified by social risk factors, testing for future potential stratification by social risk factors is ongoing. 

          1.26 Minimum Sample Size

          There is no minimum sample size for the calculation of this measure. 

          • 2.1 Attach Logic Model
            2.2 Evidence of Measure Importance

            The hospital-wide risk-standardized readmission rate (RSRR) measure is intended to inform quality-of-care improvement efforts, as individual process-based performance measures cannot encompass all the complex and critical aspects of care within a hospital that contribute to patient outcomes. As a result, many stakeholders, including patient organizations, are interested in outcomes measures that allow patients and providers to assess relative outcomes performance for hospitals. 

            A hospital-wide readmission measure captures a large cohort of patients admitted for a wide range of diagnoses and illuminates a broad range of performance among hospitals. According to internal analyses, from July 1, 2018 to June 30, 2019, there were about 11 million inpatient admissions nationally among Medicare FFS and Medicare Advantage beneficiaries aged 65 and older at about 4,800 US hospitals. This comprehensive cohort includes patients not currently captured by existing condition-specific readmission measures and provides stakeholders with a broad quality signal (in addition to more granular data to support quality improvement). In addition to capturing an expansive cohort of patients, variation in hospital-level readmission rates demonstrates a quality gap: hospital-level readmission rates ranged widely, from 10.4% to 47.2% (using data from July 1, 2018-June 30, 2019). The average hospital-level, risk-standardized 30-day readmission rate was 15.5%. Overall, studies have estimated the rate of preventable readmissions to be as low as 12% and as high as 76%.18,19

            Randomized controlled trials demonstrate reduced readmission rates through the following: improvement of quality of care during the initial admission; improvement in communication with patients, their caregivers, and their clinicians; patient education; pre-discharge assessment; and coordination of care after discharge. Evidence that hospitals have been able to reduce readmission rates through these quality-of-care initiatives illustrates the degree to which hospital practices can affect readmission rates. Successful randomized trials have reduced 30-day readmission rates by 20-40%.1-13, 21-24 Since 2008, 14 Medicare Quality Improvement Organizations have been funded to focus on care transitions, applying lessons learned from clinical trials. Several have been notably successful in reducing readmissions. Hospital processes that reflect the quality of inpatient and outpatient care, such as discharge planning, medication reconciliation, and coordination of outpatient care, have also been shown to reduce readmission rates.14 Although readmission rates are also influenced by hospital system characteristics, such as bed capacity or hospitalist and nurse staffing levels, these hospital characteristics should not influence quality of care.15-17 Therefore, this measure does not risk adjust for such hospital characteristics.

            While the cost of a readmission varies widely, a recent study estimated an average cost of about $16,000 per readmission. There were more than 11 million admissions captured by the (claims-based) HWR measure (using data from July 1, 2018 through June 30, 2019), and the mean readmission rate was about 15.4%, representing a conservative estimate of about $27 billion (roughly 11 million admissions × 15.4% ≈ 1.7 million readmissions × $16,000) in expenditures for the unplanned readmissions captured by this measure alone. 

            The quality gap described above, together with evidence that readmissions are related to quality of care, and that interventions have been able to reduce 30-day readmission rates, supports an all-cause unplanned hospital-wide readmission measure for quality measurement.19,20

            References:

            1. Patel PH, Dickerson KW. Impact of the Implementation of Project Re-Engineered Discharge for Heart Failure patients at a Veterans Affairs Hospital at the Central Arkansas Veterans Healthcare System. Hosp Pharm. 2018;53(4):266‐271. doi:10.1177/0018578717749925.
            2. Jack BW, Chetty VK, Anthony D, Greenwald JL, Sanchez GM, Johnson AE, et al. A reengineered hospital discharge program to decrease rehospitalization: a randomized trial. Ann Intern Med 2009;150(3):178-87.
            3. Coleman EA, Smith JD, Frank JC, Min SJ, Parry C, Kramer AM. Preparing patients and caregivers to participate in care delivered across settings: the Care Transitions Intervention. J Am Geriatr Soc 2004;52(11):1817-25.
            4. Courtney M, Edwards H, Chang A, Parker A, Finlayson K, Hamilton K. Fewer emergency readmissions and better quality of life for older adults at risk of hospital readmission: a randomized controlled trial to determine the effectiveness of a 24-week exercise and telephone follow-up program. J Am Geriatr Soc 2009;57(3):395-402.
            5. Garasen H, Windspoll R, Johnsen R. Intermediate care at a community hospital as an alternative to prolonged general hospital care for elderly patients: a randomised controlled trial. BMC Public Health 2007;7:68.
            6. Koehler BE, Richter KM, Youngblood L, Cohen BA, Prengler ID, Cheng D, et al. Reduction of 30-day postdischarge hospital readmission or emergency department (ED) visit rates in high-risk elderly medical patients through delivery of a targeted care bundle. J Hosp Med 2009;4(4):211-218
            7. Mistiaen P, Francke AL, Poot E. Interventions aimed at reducing problems in adult patients discharged from hospital to home: a systematic metareview. BMC Health Serv Res 2007;7:47.
            8.  Naylor M, Brooten D, Jones R, Lavizzo-Mourey R, Mezey M, Pauly M. Comprehensive discharge planning for the hospitalized elderly. A randomized clinical trial. Ann Intern Med 1994;120(12):999-1006.
            9. Naylor MD, Brooten D, Campbell R, Jacobsen BS, Mezey MD, Pauly MV, et al. Comprehensive discharge planning and home follow-up of hospitalized elders: a randomized clinical trial. JAMA 1999;281(7):613-20.
            10. van Walraven C, Seth R, Austin PC, Laupacis A. Effect of discharge summary availability during post-discharge visits on hospital readmission. J Gen Intern Med 2002;17(3):186-92.
            11. Weiss M, Yakusheva O, Bobay K. Nurse and patient perceptions of discharge readiness in relation to postdischarge utilization. Med Care 2010;48(5):482-6.
            12. Krumholz HM, Amatruda J, Smith GL, et al. Randomized trial of an education and support intervention to prevent readmission of patients with heart failure. J Am Coll Cardiol. Jan 2 2002;39(1):83-89.
            13. Nelson EA, Maruish ME, Axler JL. Effects of Discharge Planning and Compliance With Outpatient Appointments on Readmission Rates. Psychiatr Serv. July 1 2000;51(7):885-889.
            14. Fisher ES, Wennberg JE, Stukel TA, Sharp SM. Hospital Readmission Rates for Cohorts of Medicare Beneficiaries in Boston and New Haven. New England Journal of Medicine. 1994;331(15):989-995.
            15. Al-Amin M. Hospital characteristics and 30-day all-cause readmission rates. J Hosp Med. 2016;11(10):682-687. doi:10.1002/jhm.2606.
            16. Hoyer EH, Padula WV, Brotman DJ, et al. Patterns of Hospital Performance on the Hospital-Wide 30-Day Readmission Metric: Is the Playing Field Level?. J Gen Intern Med. 2018;33(1):57-64. doi:10.1007/s11606-017-4193-9.
            17. Benbassat J, Taragin M. Hospital readmissions as a measure of quality of health care: advantages and limitations. Archives of Internal Medicine 2000;160(8):1074-81.
            18. Gil M, Mikaitis DK, Shier G, Johnson TJ, Sims S. Impact of a combined pharmacist and social worker program to reduce hospital readmissions. J Manag Care Pharm. 2013;19(7):558-563.
            19. Kansagara D, Ramsay RS, Labby D, Saha S. Post-discharge intervention in vulnerable, chronically ill patients. Journal of Hospital Medicine. 2012;7(2):124-130.
            20. Radhakrishnan K, Jones TL, Weems D, Knight TW, Rice WH. Seamless Transitions: Achieving Patient Safety Through Communication and Collaboration. J Patient Saf. 2018;14(1):e3-e5.
            21. Kamermayer A, Leasure A, Anderson L. The Effectiveness of Transitions-of-Care Interventions in Reducing Hospital Readmissions and Mortality. Dimensions of Critical Care Nursing. 2017; 36 (6): 311-316. doi: 10.1097/DCC.0000000000000266.
            22. Wasfy JH, Zigler CM, Choirat C, et al. Readmission Rates After Passage of the Hospital Readmissions Reduction Program: A Pre–Post Analysis. Ann Intern Med. 2017;166:324-331. [Epub 27 December 2016]. doi:10.7326/M16-0185
            23. De Oliveira G, Castro-Alves L, Kendall M, McCarthy R. Effectiveness of Pharmacist Intervention to Reduce Medication Errors and Health-Care Resources Utilization After Transitions of Care: A Meta-analysis of Randomized Controlled Trials. Journal of Patient Safety. 2021; 17 (5): 375-380. doi: 10.1097/PTS.0000000000000283.
            24. Feltner C, Jones CD, Cené CW, et al. Transitional Care Interventions to Prevent Readmissions for Persons With Heart Failure: A Systematic Review and Meta-analysis. Ann Intern Med. 2014;160:774-784. [Epub 3 June 2014]. doi:10.7326/M14-0083
          • 2.6 Meaningfulness to Target Population

            Hospital readmission, for any reason, is disruptive to patients and caregivers, costly to the healthcare system and policyholders, and puts patients at additional risk of hospital-acquired infections and complications. Readmissions are also a major source of patient and family stress and may contribute substantially to loss of functional ability and independence, particularly in older patients.1 CORE interviewed patients and caregivers for a Technical Expert Panel (TEP) related to readmissions; patients and caregivers shared their stories of frustration, confusion, and suffering as they or their loved ones faced unexpected returns to the hospital after discharge. In our interviews they cited experiences such as return to the hospital following exacerbation of a condition caused by changes in medication after discharge, returns to the hospital due to infection after an inpatient procedure, and other signs of poor coordination of care, including insufficient communication from providers and hospital staff.

            While some readmissions are unavoidable and result from inevitable progression of disease or worsening of chronic conditions, many readmissions may also result from poor quality of care or inadequate transitional care. Transitional care includes effective discharge planning, transfer of information at the time of discharge, patient assessment and education, and coordination of care and monitoring in the post-discharge period. Numerous studies have found an association between quality of inpatient or transitional care and early (typically 30-day) readmission rates for a wide range of conditions.2-5 One study examined the perspectives of patients, compared with those of registered nurses (RNs) and physicians, on the preventability of readmissions. Notably, the study found that patients were more likely than physicians to identify a readmission as preventable, and more likely than physicians to identify system issues as an underlying reason for their readmission (58% of cases vs 2%, respectively). Furthermore, RNs and patients made similar assessments of the preventability of readmissions.6

            References:

            1. Covinsky KE, Palmer RM, Fortinsky RH et al. Loss of independence in activities of daily living in older adults hospitalized with medical illnesses: Increased vulnerability with age. J Am Geriatr Soc 2003; 51: 451–458.
            2.  CMS Announces Relief for Clinicians, Providers, Hospitals and Facilities Participating in Quality Reporting Programs in Response to COVID-19. 2020. https://www.cms.gov/Newsroom/Press-Releases/Cms-Announces-Relief-Clinicians-Providers-Hospitals-And-Facilities-Participating-Quality-Reporting.
            3. COVID-19 Quality Reporting Programs Guidance Memo (2020).
            4.  Hospital Inpatient Value I, and Quality Reporting Outreach and Education Support Contractor (Health Services Advisory Group, Inc.). CMS Announces Updates on Hospital Quality Reporting and Value-based Payment Programs due to the COVID-19 Public Health Emergency. 2020. Accessed 3/23/2024. https://qualitynet.cms.gov/files/5f0707a3b8112700239dca19?filename=2020-62-IP.pdf.
            5. (CMS) CfMMS. Frequently Asked Questions: COVID-19 Extraordinary Circumstances Exception for Inpatient Acute Care Hospitals. 2020. Accessed 3/13/2024.
            6. Smeraglio A, Heidenreich PA, Krishnan G, Hopkins J, Chen J, Shieh L. Patient vs provider perspectives of 30-day hospital readmissions. BMJ Open Qual. 2019 Jan 7;8(1):e000264. doi: 10.1136/bmjoq-2017-000264. PMID: 30687798; PMCID: PMC6327873.
            7. Gerhardt G, Yemane A, Hickman P, Oelschlaeger A, Rollins E, Brennan N. Medicare readmission rates showed meaningful decline in 2012. Medicare Medicaid Res Rev. 2013; 3(2): E1–E1
          • 2.4 Performance Gap

            We refer readers to Section 1.18 for information on how performance scores are calculated.

            As described in section 4.1.2, we provide results using a nationally representative dataset that includes both FFS and MA admissions and claims-based risk adjustment (but without the EHR-based data elements [CCDE] for enhanced case-mix risk adjustment), and separately, results from 2024 Voluntary Reporting (representing the measure as currently implemented, without Medicare Advantage (MA) admissions but with both the claims-based and clinical data elements from the EHR [CCDE] to enhance risk adjustment). 

            We characterize the degree of variation by reporting the distribution of RSRRs. 

             

            Measure Score Distribution

            The distribution of measure scores from the Claims-Only HWR (Medicare FFS + MA) dataset and Hybrid HWR 2024 Voluntary Reporting dataset is shown below in Tables 2 and 3, and Figures 3 and 4 (please see "Hybrid HWR All Tables and Figures" attachment).

            There is wide variation in measure scores in the national dataset (Claims-Only HWR [Medicare FFS + MA]): RSRRs for the 4,782 hospitals in the dataset range from 10.37% to 47.22%, with a mean of 15.48% (standard deviation, 1.28%); the 25th percentile is 14.79% and the 75th percentile is 16.06% (Table 2). There is meaningful variation in performance across hospitals: the worst-performing facility (RSRR 47.22%) has a readmission rate roughly three times the median, while the best-performing facility (RSRR 10.37%) performs about one-third better than the median.

            As expected, we see less variation in the 1,162 hospitals within the Hybrid HWR 2024 Voluntary Reporting dataset. We see less variation in this dataset due to the voluntary nature of public reporting, where we expect that better performers may be more likely to choose to report. As shown in Table 3, RSRRs ranged from 10.21% to 16.90%, with a mean of 14.29% (standard deviation, 0.59%). The 25th percentile was 13.97% and the 75th percentile was 14.58%. 

            In summary, the variation in rates, especially in the Claims-Only (Medicare FFS + MA) dataset, which is nationally representative, suggests there are differences in the quality of care received across hospitals. This evidence supports continued measurement to reduce the variation.

             

            Table 1. Performance Scores by Decile
            Performance Gap
            Decile | Mean Performance Score | N of Entities | N of Persons / Encounters / Episodes
            Overall | 15.48 | 4,782 | 11,029,470
            Minimum | 10.37 | 1 | 3,802
            Decile 1 | 13.53 | 478 | 1,321,954
            Decile 2 | 14.40 | 478 | 1,160,299
            Decile 3 | 14.78 | 478 | 899,771
            Decile 4 | 15.05 | 479 | 701,058
            Decile 5 | 15.29 | 478 | 735,186
            Decile 6 | 15.49 | 478 | 681,808
            Decile 7 | 15.74 | 479 | 969,925
            Decile 8 | 16.08 | 478 | 1,256,604
            Decile 9 | 16.57 | 478 | 1,445,531
            Decile 10 | 17.92 | 478 | 1,857,334
            Maximum | 47.22 | 1 | 1,416
            • 3.1 Feasibility Assessment

              As part of broader measure development, we originally tested the feasibility of electronic extraction of the EHR-based data elements used to enhance risk adjustment (the core clinical data elements, or CCDE). The CCDE are a set of data elements that are captured on most adults admitted to acute care hospitals, are easily extracted from EHRs, and can be used to risk adjust hospital outcome measures for a variety of conditions and procedures. Feasibility testing included: 1) identification of potentially feasible clinical data through qualitative assessment, 2) empirical feasibility testing of several clinical data elements electronically extracted from two large multi-facility health systems, and 3) validity testing of the CCDE at an additional health system. A Technical Expert Panel (TEP) confirmed the conceptual feasibility of these data elements, and empirical testing demonstrated consistent capture and match rates of the CCDE from EHRs. For more information on our initial feasibility testing conducted during measure development, please see the Hybrid HWR methodology report and 2013 Core Clinical Data Elements Technical Report attached to this form.

              Prior to measure implementation, CMS received feedback through the FY2022 Inpatient Prospective Payment System Final Rule1 indicating concerns about reporting burden, in terms of variation in readiness and eCQM reporting capabilities across hospitals. This concern was addressed by delaying implementation for several years after rule finalization, adding one round of Confidential Reporting (and two rounds of Confidential Reporting for Hybrid Hospital-Wide Readmission, which shares similar data elements and submission/collection processes) to allow hospitals and their vendors additional time to upgrade IT systems, improve data mapping and other capabilities, and increase staff training for measure reporting. This eCQM reporting cycle was delayed in comparison to reporting requirements for other Hospital IQR Program measures.

              As described in Section 6.2.3, hospitals also provided feedback about challenges in meeting the IQR reporting threshold for submission of CCDE (within 24 hours before/after inpatient admission for 90% of discharges, and the linking variable [used to merge EHR to claims data] for 95% of discharges) that are required to receive their Annual Payment Update. CMS was responsive to these comments and has proposed that the submission of CCDE remain voluntary for 2025 reporting.2 Additionally, CORE (the measure developer) is updating the data collection approach (effective with the 2025 Annual Update Cycle) to expand the CCDE lookback period beyond the 24 hours before/after inpatient admission to the first result captured during the hospital encounter. By increasing the window from which CCDE can be extracted, hospitals are likely to report CCDE for a higher percentage of discharges, improving their ability to meet the IQR submission percentage.

              Finally, we note that after initial feasibility testing of the CCDE during measure development, we identified potential barriers related to data collection for some data elements. For example, we found a lower capture rate for the “Bicarbonate” variable. Because of this low capture rate, we expanded the Bicarbonate Lab Test value set to include carbon dioxide lab codes, which are often performed in lieu of bicarbonate lab tests. We refer readers to section 4.3.4 Validity Testing Results (Missing Data), where results show missingness for bicarbonate lab tests for 2024 Voluntary Reporting ranged from 5.70% to 13.09%, demonstrating improvement for this data element since initial development testing.

              Analyses around missing data are presented in Section 4.3.4.

              The estimated costs of data collection are minimal, as this measure utilizes information from EHR systems already within the hospital. We estimate 12 hours for one employee to extract and submit patient files through the Quality Reporting Document Architecture (QRDA) Submission Portal, consistent with all eCQMs. This measure is not intended to influence clinical workflow: the CCDE were selected by a Technical Expert Panel (TEP) because they are routinely captured on all adult inpatients, and they are submitted electronically; other measure components, including the remaining risk adjustment variables and the numerator and denominator inclusions and exclusions, are captured using Medicare inpatient claims.

              References

              1. Medicare Program; Hospital Inpatient Prospective Payment Systems for Acute Care Hospitals and the Long-Term Care Hospital Prospective Payment System and Policy Changes and Fiscal Year 2020 Rates; Quality Reporting Requirements for Specific Providers; Medicare and Medicaid Promoting Interoperability Programs Requirements for Eligible Hospitals and Critical Access Hospitals. Federal Register. Published August 16, 2019. Accessed April 9, 2024. https://www.federalregister.gov/documents/2019/08/16/2019-16762/medicare-program-hospital-inpatient-prospective-payment-systems-for-acute-care-hospitals-and-the
              2. Medicare and Medicaid Programs: Hospital Outpatient Prospective Payment and Ambulatory Surgical Center Payment Systems; Quality Reporting Programs, Including the Hospital Inpatient Quality Reporting Program; Health and Safety Standards for Obstetrical Services in Hospitals and Critical Access Hospitals; Prior Authorization; Requests for Information; Medicaid and CHIP Continuous Eligibility; Medicaid Clinic Services Four Walls Exceptions; Individuals Currently or Formerly in Custody of Penal Authorities; Revision to Medicare Special Enrollment Period for Formerly Incarcerated Individuals; and All-Inclusive Rate Add-On Payment for High-Cost Drugs Provided by Indian Health Service and Tribal Facilities. Published July 10, 2024. https://www.cms.gov/medicare/payment/prospective-payment-systems/ambulatory-surgical-center-asc/cms-1809-p  
              3.2 Attach Feasibility Scorecard
              3.3 Feasibility Informed Final Measure

              Based on results from testing shown below, the CCDE selected for the final specifications were found to be feasible to extract and routinely collected in adult inpatient EHRs.

              To address CBE’s requirement for feasibility in relation to data elements and measure logic, we reevaluated this measure against the feasibility domains (see attached Feasibility Scorecard). The results of feasibility assessment for the 15 data elements are below:

              • The data elements are in a structured format within the EHR systems (scoring 1 for Availability).
              • Some data elements were transmitted directly from other electronic systems into the EHR or resulted from clinician assessment or interpretation (scoring 1 for Accuracy).
              • This measure’s data elements are coded using either RxNorm or SNOMED (scoring 1 for Data Standards).
              • The data elements required for this measure (lab values, vital signs, referral orders, problem list entries) are captured during the course of care and do not impact workflow (scoring 1 for Workflow).
            • 3.4 Proprietary Information
              Not a proprietary measure and no proprietary components
              • 4.1.3 Characteristics of Measured Entities

                For this measure, hospitals are the measured entities. All non-federal, short-term acute care inpatient US hospitals (including territories) with Medicare fee-for-service (FFS) and Medicare Advantage (MA) beneficiaries aged 65 years or over are included. In addition, where we present testing results for the claims-only measure, the testing data presented include data from patients 65 and older who were enrolled in Medicare FFS and Medicare Advantage. For Data Element Reliability and Validity testing, we present testing from the original development of this measure, using a historical all-payer dataset of 21 Kaiser Permanente hospitals that includes all adults aged 18 and older. The number of measured entities varies by testing type: see Table 4 in the attachment entitled "Hybrid HWR All Tables and Figures."

                4.1.1 Data Used for Testing

                Please see Table 4 in the attachment entitled "Hybrid HWR All Tables and Figures."

                4.1.4 Characteristics of Units of the Eligible Population

                Please see Table 4 in the attachment entitled "Hybrid HWR All Tables and Figures."

                 

                4.1.2 Differences in Data

                For the updated testing in this CBE endorsement submission, we provide results from two datasets.  Each dataset is described in detail in Table 4 in the attachment entitled "Hybrid HWR All Tables and Figures."

                1. HWR Claims-Only Dataset. This dataset includes both FFS and MA admissions and was used to provide a national dataset that includes all Medicare beneficiaries. While this dataset includes all claims-based variables used for risk adjustment, it does not include the EHR-based CCDE; because the CCDE provide risk adjustment supplemental to claims-based risk adjustment, results derived from the HWR claims-only dataset provide a close approximation of nationally representative results. Measure scores calculated with and without the CCDE are highly correlated (see Table 3 of the Hybrid HWR Comprehensive Methodology Report).
                2. 2024 Hybrid HWR Voluntary Reporting Dataset. This dataset was used to provide information on the integration of EHR data elements (CCDE) for case-mix risk adjustment that supplements the claims-based variables; it includes both claims-based risk adjustment variables and EHR-based data elements (CCDE). This dataset, however, does not include MA admissions, as MA admissions were not part of the measure specifications at the time of data collection; MA admissions will be added to the measure in 2026 Reporting per the Fiscal Year 2024 Inpatient Prospective Payment System Final Rule.

                See dataset descriptions in Table 4 (in the attachment entitled "Hybrid HWR All Tables and Figures") for further details on each dataset. We note that we also include results derived from datasets from original measure development as this is the data that was used for risk variable selection.

              • 4.2.2 Method(s) of Reliability Testing

                Data Element Reliability (Patient/Encounter Level)

                Data element reliability for the EHR-based variables (CCDE) used in this measure was established during development. In this testing we used the capture rate to establish reliability. We refer readers to the attached 2013 Core Clinical Data Elements Technical Report (Version 1.1) for methodologic details.

                Measure Score Reliability: Split-Sample

                To ascertain measure score reliability, we calculated the intra-class correlation coefficient (ICC) using a split-sample (also known as split-half) method in both the Claims-Only HWR (Medicare FFS + MA) dataset (discharges July 1, 2022-June 30, 2023) and the Hybrid HWR 2024 Voluntary Reporting dataset (discharges July 1, 2022-June 30, 2023). We did not calculate signal-to-noise reliability for the overall measure score because the signal-to-noise calculation should be based on a statistical model;1 the measure score (risk-standardized readmission rate, or RSRR) for the HWR measure is a combined score that is not calculated from a single statistical model.

                The reliability of a measurement is the degree to which repeated measurements of the same entity agree with each other. For measures of hospital performance, the measured entity is the hospital, and reliability is the extent to which repeated measurements of the same hospital give similar results. Accordingly, our approach to assessing reliability is to consider the extent to which assessments of a hospital using different but randomly selected subsets of patients produce similar measures of hospital performance. Hospital performance is measured once using a random subset of patients from a defined dataset for a measurement period, then measured again using a second random subset exclusive of the first from the same measurement period, and the agreement of the two resulting performance measures is compared across hospitals.2

                For split-sample reliability of the measure, we randomly sampled half of patients within each hospital from a one-year measurement period, calculated the measure for each hospital, and repeated the calculation using the second half of patients. Thus, each hospital is measured twice, but each measurement is made using an entirely distinct set of patients. To the extent that the calculated measures of these two subsets agree, we have evidence that the measure is assessing an attribute of the hospital, not of the patients. As a metric of agreement, we calculated the intra-class correlation coefficient.3 Specifically, we used the Claims-Only Hospital Wide Readmission (Medicare FFS + MA) and 2024 Hybrid Hospital-Wide Readmission Voluntary Reporting datasets, randomly split each into two approximately equal subsets of patients, and then calculated the RSRR for each hospital for each sample. The agreement of the two RSRRs was quantified for hospitals in each sample using the intra-class correlation as defined by ICC (2,1).3

                Using two non-overlapping random samples provides a conservative estimate of the measure’s reliability, compared with using two random but potentially overlapping samples, which would exaggerate the agreement. Moreover, because our final measure is derived using hierarchical logistic regression, and a known property of hierarchical logistic regression models is that smaller-volume hospitals contribute less 'signal', a split sample using a single measurement period introduces extra noise. This leads to an underestimate of the split-sample reliability that would be achieved if the measure were reported using the full measurement period. We therefore used the Spearman-Brown prophecy formula4 to estimate the reliability of the measure if the whole cohort were used, based on the estimate from half the cohort.
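
                The split-sample calculation described above can be sketched as follows. This is an illustrative implementation of ICC(2,1) and the Spearman-Brown correction, not the production measure code, and it assumes the half-sample RSRRs are arranged as an n-by-2 array (one row per hospital, one column per split half):

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1) per Shrout & Fleiss (1979): two-way random effects,
    single measurement. `scores` is an (n, k) array of n hospitals'
    RSRRs across k split samples (here k = 2)."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-hospital means
    col_means = scores.mean(axis=0)   # per-split-sample means
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)  # between-hospital MS
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)  # between-sample MS
    mse = ((scores - row_means[:, None] - col_means[None, :] + grand) ** 2).sum() \
          / ((n - 1) * (k - 1))                           # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def spearman_brown(r_half: float, factor: int = 2) -> float:
    """Project half-cohort reliability to the full measurement period."""
    return factor * r_half / (1 + (factor - 1) * r_half)
```

As simple arithmetic under this formula, a half-cohort reliability of 0.645 would project to roughly 0.78 for the full cohort.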

                References

                1. Adams JL, Mehrotra A, Thomas JW, McGlynn EA. Physician cost profiling - reliability and risk of misclassification. N Engl J Med. 2010;362(11):1014-1021.
                2. Rousson V, Gasser T, Seifert B. "Assessing intrarater, interrater and test–retest reliability of continuous measurements," Statistics in Medicine, 2002, 21:3431-3446.
                3. Shrout PE, Fleiss JL. Intraclass correlations: uses in assessing rater reliability. Psychological Bulletin. 1979;86(2):420-428.
                4. Spearman C. Correlation calculated from faulty data. British Journal of Psychology. 1910;3:271-295.
                4.2.3 Reliability Testing Results

                Measure Score Reliability Results

                In the Claims-Only HWR [Medicare FFS and Medicare Advantage] dataset (Dataset 2), there were 4,401 hospitals in the development sample and 4,402 hospitals in the validation sample. The intraclass correlation between the two RSRRs for each sample was 0.780, which meets current CBE thresholds for reliability (0.6).

                In the Hybrid HWR 2024 Voluntary Reporting dataset (Dataset 3), there were 1,058 hospitals in the development sample and 1,055 hospitals in the validation sample. The intraclass correlation between the two RSRRs for each sample was 0.645, which also meets current CBE thresholds for reliability (0.6).

                We note that we did not complete Table 5 in this CBE submission because the split-sample reliability calculation results in a single statistic, not a distribution.

                4.2.4 Interpretation of Reliability Results

                Data Element Reliability

                Based on prior testing of the CCDE for a related measure, data element reliability shows rate of clinical capture of each CCDE within each of the five cohorts to be well over 90%, with the exception of laboratory results in the Surgical/Gynecological cohort, which are not used in the measure.

                Measure Score Reliability Results

                The split-sample reliability scores of 0.780 (Claims-Only HWR [Medicare FFS and Medicare Advantage] dataset) and 0.645 (Hybrid HWR 2024 Voluntary Reporting dataset) both meet the current CBE threshold for split-sample reliability (0.6).1

                Reference:

                1. Battelle (2023). Endorsement & Maintenance (E&M) Guidebook. Partnership for Quality Measurement. October 2023.
              • 4.3.3 Method(s) of Validity Testing

                Data Element Validity Testing (CCDE)

                Chart Abstraction:

                We developed electronic specifications (e-specifications) using the Measure Authoring Tool (MAT) and analyzed extracted data from EHRs. We assessed the ability of hospitals to use the e-specifications to query and electronically extract CCDEs from the EHR, within 24 hours before or up to 24 hours after inpatient admission for labs, and within 24 hours before or up to 2 hours after inpatient admission for vital signs, for all adult inpatient admissions occurring over the course of one year. Validity testing assessed the accuracy of the electronically extracted CCDEs compared to the same CCDEs gathered through manual abstraction (from the EHR) in a subset of 368 charts identified in the data query from 3 hospitals that used Cerner as their EHR vendor (Dataset 4), and 391 charts identified in the data query from 1 hospital that used GE Centricity as its EHR (Dataset 5).

                We calculated the number of admissions that needed to be randomly sampled from the EHR dataset and manually abstracted to yield a statistical margin of error (MOE) of 5% and a confidence level of 95% for the match rates between the two data sources. Sites then used an Access-based manual abstraction tool (provided along with training) to manually abstract the CCDEs from the random samples of the medical records identified through the EHR data query. The manual chart abstraction data are considered the “gold standard” for the purpose of this analysis.
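
                A 5% margin of error at 95% confidence corresponds to roughly 385 charts under the standard worst-case binomial sample-size calculation, shown below as an illustration (the report does not state the exact formula used), which is in the same range as the sample sizes described above:

```python
import math

def abstraction_sample_size(moe: float = 0.05, z: float = 1.96, p: float = 0.5) -> int:
    """Charts to abstract for a given margin of error on a match rate.
    p = 0.5 is the conservative (maximum-variance) proportion."""
    return math.ceil(z ** 2 * p * (1 - p) / moe ** 2)

print(abstraction_sample_size())  # → 385
```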

                We conducted validity testing on the critical EHR data elements in the Hybrid HWR measure. For each continuous data element, we were only interested in the case where the electronic abstraction value exactly matched the manual abstraction value. We therefore only calculated the raw agreement rate between data from electronic and manual chart abstraction. For simple data values, we believe taking this approach, as compared to reporting statistical tests of accuracy, better reflects the concept of matching exact data values rather than calculated measure results. Therefore, we do not report statistical testing of the accuracy of the EHR derived data value as compared with the abstracted value. Instead, we counted only exact matches in the data value as well as the time and date stamp associated with that value when we calculated the match rate. The 95% confidence level was established based on the sample size and reflects the exact match rate using these criteria. 

                Missing Data

                For the EHR data elements used in the measure’s risk models, we anticipate that there will be some missing data. We examined rates of missing data using the Hybrid HWR 2024 Voluntary Reporting dataset (discharges July 1, 2022-June 30, 2023), which includes CCDE submitted by 1,162 hospitals during Confidential Reporting, in Table 9 of the Tables and Figures attachment. Please note that not all CCDE are included in each specialty cohort; for example, laboratory values are not included in the Surgical/Gynecological cohort (see Table 8 in the "Hybrid HWR All Tables and Figures" attachment). We characterize CCDE as “missing” when: 1) the hospital did not report any data for that value, or 2) the reported value is unusable in risk adjustment (e.g., missing units, or data that cannot be standardized [string data, or values which cannot be converted to primary Unified Code for Units of Measure (UCUM) units without additional information]). For measure calculation, where CCDE values are missing or unusable, we impute the median value reported across all hospitals for that CCDE, to profile a “typical” patient.
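
                The imputation step described above amounts to filling each missing or unusable CCDE value with the all-hospital median for that element. A minimal sketch follows; the column names are hypothetical, not the measure's actual field names:

```python
import pandas as pd

# Toy submission data: None marks a missing or unusable CCDE value.
ccde = pd.DataFrame({
    "hospital_id": ["A", "A", "B", "C"],
    "heart_rate":  [88.0, None, 102.0, 76.0],   # hypothetical CCDE column
    "sodium":      [138.0, 141.0, None, None],  # hypothetical CCDE column
})

# Impute the median reported across all hospitals for each CCDE,
# profiling a "typical" patient for risk adjustment.
for col in ["heart_rate", "sodium"]:
    ccde[col] = ccde[col].fillna(ccde[col].median())
```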

                 

                Measure Score Validity

                Empiric Validity 

To assess the construct validity of the HWR measure, we assessed the measure’s correlation with other publicly reported measures that we hypothesized would be related to readmission, based on the evidence for similar or overlapping causal pathways.

Figure 2 in Section 2.1 (see "Hybrid HWR All Tables and Figures" attachment, page 2) shows the logic model (causal pathway) for the HWR measure: underlying processes, such as the delivery of timely care and proper communication during transitions of care (including at discharge), result in better downstream patient support and management (including patient education to support self-care), and thereby a decreased risk of readmission.

                We reviewed measures with publicly available data on Medicare.gov and identified several outcome (readmission) and patient experience measures that fall within the same causal pathway (see Figure 2) including measures that summarize overall hospital quality (Overall Hospital Quality Star Rating). Outcome or patient experience measures in the same causal pathway are hypothesized to be correlated with the HWR measure because the same underlying processes of care are expected to impact both the HWR measure and the comparator outcome/patient experience measure.

CMS is the steward for the readmission measures published on Medicare.gov as well as for the Overall Hospital Quality Star Ratings, which include a summary score for the readmission measures (the Readmission Group Score) and a Summary Score encompassing all of the measures (organized into five groups: Readmission, Mortality, Safety, Patient Experience, and Timely & Effective Care). The readmission measures are, by definition, within the same causal pathway as the HWR measure, and as an overall hospital-wide measure of quality, the Star Ratings Summary Score (which includes measures related to patient safety and prevention of infection, for example) is also within the causal pathway (see Figure 2). (More information about the Star Ratings-related measures can be found below under the heading “About the comparator measures.”) We note that prior to running these analyses, we removed the currently implemented claims-based HWR measure (FFS-only) from the Star Ratings comparators, because it is one of the measures included in the Overall Hospital Quality Star Rating.1

We also identified a subset of the patient experience measures within the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey as candidate measures. HCAHPS contains measures related to care transitions, which are within the causal pathway for the HWR measure (Figure 2). For example, the Care Transition Measure (CTM3) is a three-question measure that includes questions about patients’ understanding of how to manage their health after discharge and of their medications.2 In addition, HCAHPS contains survey questions related to communication and the discharge process; both of these domains are in the same causal pathway as HWR (Figure 2). (More information about the HCAHPS measures can be found below under the heading “About the comparator measures.”)

We then hypothesized the strength and direction of the relationship for each measure (Table 6, "Hybrid HWR All Tables and Figures" attachment). For the HWR measure, a lower measure score means better performance; therefore, for comparator measures where better performance is hypothesized to be related to better performance on HWR (such as care transitions and communication), the direction of the association is shown as “negative.”

We then examined the relationship between hospital performance on the HWR measure (RSRRs) and each of these external measures of hospital quality, as measured by Pearson correlation coefficients (Table 4 of the Tables and Figures attachment). We also compared hospital performance on the HWR measure within quartiles of the comparator measures (Figures 5-8). For this testing, we used the Claims-Based HWR (Medicare FFS and MA) dataset (discharges July 1, 2018, through June 30, 2019), as it is a national sample, similar to the Star Ratings.
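The two analytic steps described above (the Pearson correlation, and the mean RSRR within quartiles of a comparator measure) can be sketched as follows; the helper names and the equal-sized-quartile simplification are ours, not part of the measure specification:

```python
from math import sqrt
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired series."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def quartile_means(comparator, rsrr):
    """Mean HWR RSRR within each quartile of the comparator measure
    (the statistic marked by the blue diamonds in the box plots).
    Assumes n divisible by 4 for simplicity."""
    order = sorted(range(len(comparator)), key=lambda i: comparator[i])
    q = len(order) // 4
    return [mean(rsrr[i] for i in order[k * q:(k + 1) * q]) for k in range(4)]
```

For a comparator on a higher-is-better scale, the hypothesized pattern is a negative `pearson_r` and a decreasing trend across the four quartile means.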

                 

                About the comparator measures:

1. Overall Hospital Star Rating Readmission Group Score: CMS’s Overall Hospital Star Rating assesses hospitals’ overall performance (expressed graphically as stars on CMS’s Care Compare site) based on a weighted average of group scores from five different domains of quality (mortality, readmissions, safety, patient experience, timely & effective care). The Readmission Group comprises the readmission measures that are publicly reported on Care Compare. The Readmission Group Score is calculated as a simple average of the scores for the individual measures and is on a higher-is-better scale. For the validity testing presented in this testing form, we first removed the FFS-only claims-based HWR measure from the group of measures and then re-calculated the Star Rating Readmission Group Scores. We used Readmission Group Scores from the 2021 release of the Star Ratings. The full methodology for the Overall Hospital Star Rating can be found at https://www.qualitynet.org/dcs/ContentServer?c=Page&pagename=QnetPublic%2FPage%2FQnetTier3&cid=1228775957165
2. Overall Hospital Star Rating Summary Score: CMS’s Overall Hospital Star Rating assesses hospitals’ overall performance (shown graphically, as stars, on Care Compare) based on a weighted average of “group scores” from five different domains of quality (mortality, readmissions, safety, patient experience, timely & effective care). Each group comprises individual measures that are reported on Care Compare. The score for each group is derived from a simple average of the scores for its individual measures. The Summary Score is on a higher-is-better scale. For the validity testing presented in this testing form, we first removed the FFS-only claims-based HWR measure from the group of measures used to calculate the Readmission Group Score and then re-calculated the Summary Scores. The measure scores include data from Medicare FFS hospitals from the 2020 and 2021 releases of Medicare.gov. The full methodology for the Overall Hospital Star Rating can be found at https://www.qualitynet.org/inpatient/public-reporting/overall-ratings/resources
3. HCAHPS: The Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey is a CBE-endorsed, publicly reported survey of patients' perspectives of hospital care. Discharged patients (all patients over 18; not limited to Medicare beneficiaries) are asked 29 questions about their hospital experiences, including questions related to communication with doctors and nurses, responsiveness of staff, communication about medications, and discharge information. Scores used for this analysis are based on response categories (for example, “No” or “Yes”), the score rate (which is based on the two top-box responses), or the linear mean score, which is an average of item-level responses for each question.

                The Discharge Information linear mean score is a composite calculated from responses to two discharge-related questions in HCAHPS: (1) During this hospital stay, did doctors, nurses or other hospital staff talk with you about whether you would have the help you needed when you left the hospital? (2) During this hospital stay, did you get information in writing about what symptoms or health problems to look out for after you left the hospital? 

                The Care Transition Measure (CTM3) is a three-question measure, administered within HCAHPS, that asks the following questions: (1) During this hospital stay, staff took my preferences and those of my family or caregiver into account in deciding what my healthcare needs would be when I left. (2) When I left the hospital, I had a good understanding of the things I was responsible for in managing my health. (3) When I left the hospital, I clearly understood the purpose for taking each of my medications. 

For more information on HCAHPS, see: https://hcahpsonline.org/en/#AboutTheSurvey. For more information on HCAHPS scoring, see: https://hcahpsonline.org/globalassets/hcahps/star-ratings/tech-notes/2017-10_star-ratings_tech-notes.pdf. For more information on the CTM measure, see: https://secureservercdn.net/72.167.242.33/253.582.myftpupload.com/wp-content/uploads/2019/09/CTM-3.pdf. The analysis presented here used HCAHPS data from calendar years 2018-2019.

                References:

                1. Centers for Medicare & Medicaid Services (CMS). Hospitals - Overall hospital quality star rating | Provider Data Catalog. Data.CMS.gov. https://data.cms.gov/provider-data/topics/hospitals/overall-hospital-quality-star-rating/
                2. Parry C, Mahoney E, Chalmers SA, Coleman EA. Assessing the quality of transitional care: further applications of the care transitions measure. Med Care. 2008;46(3):317-322. doi:10.1097/MLR.0b013e3181589bdc

                 

                4.3.4 Validity Testing Results

                Summary

Our validity testing results, described below, provide strong evidence for the validity of the EHR-based data elements (CCDE), based on comparison of data elements against chart abstraction and analysis of missing data, and for the validity of the measure score, shown through construct validity and face validity.

                 

                Validity of EHR Data Elements

We note that these CCDE were selected by a Technical Expert Panel as data that are routinely collected on all adult inpatients to guide clinical decision making. See the attached 2013 Core Clinical Data Elements Technical Report (Version 1.1) for details.

Chart abstraction for validity testing of these candidate variables was performed in Dataset 4 (electronically extracted data from 3 hospitals with Cerner as their EHR vendor) and Dataset 5 (data element validity dataset from one hospital using GE Centricity as its EHR vendor). Table 7 ("Hybrid HWR All Tables and Figures" attachment) compares electronic and manual abstraction of data in the two health systems. We found that the percent agreement between the EHR-based variables and chart-abstracted values ranged from 14.66% to 97.22%.

A post-validation review of the code used by the hospital in Dataset 5 (one hospital with GE Centricity as its EHR vendor) revealed a number of errors, the most significant of which was extracting laboratory test results within an incorrect two-hour window (the correct window was 24 hours). Additionally, physical exam (vital signs) data were extracted based on the date/time that results were documented rather than the date/time the physical exams were performed, driving down the accuracy of these data. Post-validation review of the code used by the hospitals in Dataset 4 (three hospitals with Cerner as their EHR vendor) showed no such errors in the query executed; as a result, the match rate for Dataset 4 was much higher.

                Our analysis of missing data (Table 9, "Hybrid HWR All Tables and Figures" attachment) shows that reporting of CCDE varied across cohorts, from 1.27% missing for Systolic Blood Pressure in the Neurology cohort, to 23.99% missing for White Blood Cell Count in the Cardiovascular cohort. 

                 

                Measure Validity Testing Results

                Empiric Measure Score Validity

                Table 11 of the Tables and Figures attachment shows the results of the analyses examining associations between the HWR measure and quality metrics in the same causal pathway, as described in Section 2.1. All analyses were performed with the Medicare FFS + MA claims-only dataset.

Readmission Group Score and Star Rating Summary Score: As hypothesized, the HWR measure score was moderately and negatively correlated with both the Readmission Group Score and the Summary Score, meaning that higher scores (better performance) on the comparator measures were associated with lower scores (better performance) on the HWR measure. This is expected because the Star Ratings quality measures focus on, or contain a portion of, the same domain of quality as the HWR measure (readmission).

Patient Experience: The HCAHPS measures related to transitions of care, communication about medications, doctor and nurse communication, and discharge instructions were also correlated with HWR in the expected direction (Table 10, "Hybrid HWR All Tables and Figures" attachment). For example, the HCAHPS discharge information linear mean score had a Pearson correlation coefficient of -0.324, indicating that better performance on the discharge measure was correlated with lower scores (better performance) on the HWR measure. Similar relationships are shown in Table 10 for the HCAHPS Care Transition composite score.

                Box plots (whisker plots) that visualize these relationships are shown below in Figures 5-6.

                 

                Association between HWR and Quality Measures (box plots)

Figure 5 (see "Hybrid HWR All Tables and Figures" attachment, page 10) shows the box-whisker plots of the HWR measure RSRRs within each quartile of Star Rating Readmission Group Scores. The blue diamonds represent the mean RSRRs of the HWR measure within each quartile of the Star Rating Readmission Group Scores. The correlation between HWR RSRRs and the Star Rating Readmission Group Score is -0.520, which suggests that hospitals with lower HWR RSRRs (better performance) are more likely to have higher Star Rating Readmission Group Scores (better performance).
Figure 6 (see "Hybrid HWR All Tables and Figures" attachment, page 11) shows the box-whisker plots of HWR measure RSRRs within each quartile of Star Rating Summary Scores. The blue diamonds represent the mean RSRRs of the HWR measure within each quartile of Star Rating Summary Scores. The correlation between HWR RSRRs and the Star Rating Summary Score is -0.398, which suggests that hospitals with lower HWR RSRRs (better performance) are more likely to have higher Star Rating Summary Scores (better performance).

                 

                Association between HWR and HCAHPS Discharge-related measures (box plots)

Figures 7-8 (see "Hybrid HWR All Tables and Figures" attachment, pages 11-12) show the box-whisker plots of the HWR measure RSRRs within each quartile of the transitions-of-care and discharge-related HCAHPS items. For each figure, the blue diamonds represent the mean RSRRs within quartiles of the comparator measures. The mean RSRR for the HWR measure trends in the expected direction across the quartiles of the comparator measures. For example, the relationship with HWR measure scores is in the negative direction for the discharge information linear mean score: lower scores (better performance) on the HWR measure are associated with higher scores (better performance) on the discharge information linear mean score.

                 

                4.3.5 Interpretation of Validity Results

The combination of data element and measure score validity testing supports the validity of the Hybrid HWR measure. We discuss each category of testing below.

                Data Element Validity

Our chart abstraction results show a high percent agreement for most variables. We note that the lower capture rate (see the reliability section) and lower percent agreement for the bicarbonate value have been addressed in measure updates made to the accepted value for this data element (see Section 3.1). The rate of missing values remains low for most data elements and is not likely to introduce bias; the impact of missing values for the White Blood Cell laboratory test, for example, is very low. Because we employ an imputation strategy for missing data, missingness is unlikely to have a meaningful impact on measure scores; we refer readers to the missing-data results in Section 4.3.4, which show improved missingness from development testing to 2024 Voluntary Reporting. For missing CCDE values, multiple imputation is used to impute a value based on the characteristics of the CCDE reported. To minimize any small potential for bias from CCDE values, we account for potential outlier values using winsorization, as well as for missing values in our risk models. We expect CCDE reporting to continue to improve in future years, when the CCDE lookback period is expanded beyond 24 hours and as hospitals gain familiarity with the measures.
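As an illustration of the winsorization step mentioned above, a minimal sketch follows; the percentile cut points here are assumed for illustration and are not the measure's specified thresholds:

```python
def winsorize(values, lower_pct=0.01, upper_pct=0.99):
    """Clip values to the chosen percentile bounds to blunt the
    influence of outlier CCDE values. The 1st/99th percentile cut
    points are illustrative assumptions."""
    s = sorted(values)
    n = len(s)
    lo = s[max(0, int(lower_pct * n))]
    hi = s[min(n - 1, int(upper_pct * n))]
    return [min(max(v, lo), hi) for v in values]
```

An implausible laboratory value (say, a data-entry error of 10,000) is pulled back to the upper cut point rather than dominating the model fit.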

                Measure Score Validity

Measure score validity testing supports the validity of the Hybrid HWR measure: correlations between the Hybrid HWR RSRR and related HCAHPS and Star Ratings scores are statistically significant, moderate, and negative, as expected, validating that the measure score correlates with other metrics related to readmission.

                Taken together, these results support the validity of the Hybrid HWR measure.

              • 4.4.1 Methods used to address risk factors
                4.4.2 Conceptual Model Rationale

                This section addresses clinical risk variables; please see Section 5.1 for a discussion of social risk factors.

                Approach to Variable Selection

                Our approach to risk adjustment was tailored to and appropriate for a publicly reported outcome measure, as articulated in the American Heart Association (AHA) Scientific Statement, “Standards for Statistical Models Used for Public Reporting of Health Outcomes.”1 The measure estimates hospital-level 30-day all-cause RSRRs using hierarchical logistic regression models. In brief, the approach simultaneously models data at the patient and hospital levels to account for variance in patient outcomes within and between hospitals.2

The approach to risk adjustment is the only component of the Hybrid HWR measure that differs from the original HWR measure methodology. The original HWR measure uses claims data to adjust for two aspects of risk: 1) case mix, or how sick individual admitted patients are; and 2) service mix, or the proportion of admitted patients with various principal discharge diagnoses. Different claims data are used to assess each of these.

                To select candidate variables for the Hybrid risk model, we began with the list of all administrative claims-based risk-adjustment variables included in the claims-only HWR measure, described below.  We then added EHR-based risk variables, also described below.

                Claims-based Risk Variables

                In order to select the comorbid risk variables during the original development of this measure, we developed a “starter” set of 30 variables drawn from previous readmission measures (AMI, heart failure, pneumonia, hip and knee arthroplasty, and stroke). Next, we reviewed all the remaining CMS-CCs and determined on a clinical basis whether they were likely to be relevant to an all-condition measure. We selected 11 additional risk variables to consider.

                Using data from the index admission and any admission in the prior 12 months, we ran a standard logistic regression model for every discharge condition category with the full set of candidate risk adjustment variables. We compared odds ratios for different variables across different condition categories (excluding condition categories with fewer than 700 readmissions due to the number of events per variable constraints). We selected the final set of comorbid risk variables based on the following principles: 

                • We excluded risk variables that were statistically significant for very few condition categories, given that they would not contribute much to the overall models.
                • We excluded risk variables that behaved in clinically incoherent ways. For example, we dropped risk variables that sometimes increased risk and sometimes decreased risk, when we could not identify a clinical rationale for the differences. 
                • We excluded risk variables that were predominantly protective when we felt this protective effect was not clinically reasonable but more likely reflected coding factors. For example, drug/alcohol abuse without dependence (CC 53) and delirium and encephalopathy (CC 48) were both protective for readmission risk although clinically they should increase patients’ severity of illness. 
                • Where possible, we grouped together risk variables that were clinically coherent and carried similar risks across condition categories. For example, we combined coronary artery disease (CCs 83-84) with cerebrovascular disease (CCs 98, 99, and 103).
                • We examined risk variables that had been combined in previous CMS publicly reported measures, and in one instance separated them: for cancers, the previous measures generally pool 5 categories of cancers (CCs 8 to 12), together. In our analysis, lung cancer (CC 8) and other severe cancers (CC 9) carried higher risks, so we separated them into a distinct risk variable and grouped other major cancers (CC 10), benign cancers (CC 11), and cancers of the urinary and GI tracts (CC 12) together. Consistent with other publicly reported measures, we also left metastatic cancer/leukemia (CC 7) as a separate risk variable.

                Complications occurring during hospitalization are not comorbid illnesses, may reflect hospital quality of care, and therefore should not be used for risk adjustment. Hence, conditions that may represent adverse outcomes due to care received during the index hospital stay are not included in the risk-adjusted model (see the current list in Hybrid Risk-Variable Complications of Care in the Data Dictionary). CCs on this list were not counted as a risk variable in our analyses if they appeared only on the index admission.

                This resulted in a final risk-adjustment model that included 32 CC-based variables. Additional variables related to service line adjustment are described below.

                Service mix adjustment: 

                The measure includes many different discharge condition categories that differ in their baseline readmission risks. In addition, hospitals differ in their relative distribution of these condition categories (service mix). To adjust for service mix, the measure uses an indicator variable for the discharge condition category in addition to risk variables for comorbid conditions. The models include a condition-specific indicator for all condition categories with sufficient volume (defined as those with more than 1,000 admissions nationally in a given year for Medicare FFS data) as well as a single indicator for conditions with insufficient volume in each model.
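The indicator-variable construction described above can be sketched as follows; the function and category names are hypothetical, and the more-than-1,000-admissions threshold follows the text:

```python
from collections import Counter

def service_mix_indicators(discharges, min_volume=1000):
    """Build one indicator per discharge condition category with more
    than min_volume admissions, plus a single pooled indicator for
    low-volume categories. Sketch of the design-matrix logic only."""
    counts = Counter(discharges)
    frequent = sorted(c for c, n in counts.items() if n > min_volume)
    rows = []
    for cc in discharges:
        row = {f"cond_{c}": int(cc == c) for c in frequent}
        row["cond_low_volume"] = int(cc not in frequent)
        rows.append(row)
    return rows
```

Each discharge thus contributes either one condition-specific indicator or the single low-volume indicator, so the model adjusts for service mix without estimating a separate coefficient for every rare category.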

                EHR-based risk variables

The CCDE specific to the risk adjustment for the HWR measure consists of patients’ age, weight, the first set of vital signs captured within 2 hours of the start of the episode of care, and the results of the first complete blood count and basic chemistry panel drawn within 24 hours of the start of the episode of care. If the patient has values captured prior to admission (for example, from the emergency department, a pre-operative area, or another outpatient area within the hospital), the logic also supports extraction of the first resulted vital signs and laboratory tests within the 24 hours prior to the start of the inpatient admission. Preliminary work had established that the CCDE could be used to risk adjust measures of 30-day readmission across a variety of common and costly medical conditions. Applying these same data elements to the original HWR measure allows us to examine the use of the CCDE in a broader cohort of hospitalized medical and surgical patients and to examine their utility in predicting hospital readmission. CORE therefore specifically sought to determine whether the use of clinical data for risk adjustment, in place of or in combination with comorbidity data from Medicare claims, would improve the discrimination of the HWR models or the reliability of the measure.

As described in the original methodology report, to determine whether adding the CCDE improved risk adjustment, we compared four risk-adjustment strategies: the original HWR approach, which used claims-only data, and three new approaches that used the CCDE in various combinations with claims data. One model applied the CCDE to the full HWR risk-adjustment model, which includes the principal discharge diagnosis CCSs; we assumed that this model would out-perform models that used only clinical or only claims data because it is the most comprehensive. A second model used only the CCDE for risk adjustment. A third model used the CCDE in addition to the principal discharge diagnosis CCSs from the original HWR risk-adjustment model. We selected the best-performing alternative model based on discrimination, in terms of the c-statistic. Based on superior model discrimination (see Table 11 of the Tables and Figures attachment), the CCDE with Original HWR model was identified as the best-performing model of those evaluated and was carried forward for measure development and testing using hierarchical logistic regression. The other two approaches that included the CCDE were discarded.

                Although the 5 risk models use a common set of claims variables, the CCDE variables and principal discharge diagnoses CCSs are not the same across specialty cohort models. Only those data elements that are statistically significant in each individual model are included. We estimate a hierarchical logistic regression model for each specialty cohort separately, and the coefficients associated with each variable may vary across specialty cohorts. 

                The final set of risk-adjustment variables with their frequencies for each specialty cohort, including service-line adjustments, can be found in the Data Dictionary.

                References:

                1. Krumholz HM, Brindis RG, Brush JE, et al. 2006. Standards for Statistical Models Used for Public Reporting of Health Outcomes: An American Heart Association Scientific Statement From the Quality of Care and Outcomes Research Interdisciplinary Writing Group: Cosponsored by the Council on Epidemiology and Prevention and the Stroke Council Endorsed by the American College of Cardiology Foundation. Circulation 113: 456-462
                2. Normand S-LT, Shahian DM. 2007. Statistical and Clinical Aspects of Hospital Outcomes Profiling. Stat Sci 22 (2): 206-226.   
                4.4.2a Attach Conceptual Model
                4.4.3 Risk Factor Characteristics Across Measured Entities

                We refer the reader to risk variable frequencies in the attached data dictionary. 

                We provide results from both the claims-only HWR dataset (MA + FFS cohort, claims-based risk adjustment), and the 2024 Voluntary Reporting dataset (FFS cohort plus CCDE enhanced risk adjustment) to demonstrate results using a national sample, and to include the CCDE variable.

                CORE’s Approach to Annual Model Validation

                CORE’s measures undergo an annual measure reevaluation process, which ensures that the risk-standardized models are continually assessed and remain valid, given possible changes in clinical practice and coding standards over time. Modifications made to measure cohorts, risk models, and outcomes are informed by review of the most recent literature related to measure conditions or outcomes, feedback from various stakeholders, and empirical analyses, including assessment of coding trends that reveal shifts in clinical practice or billing patterns. Input is solicited from a workgroup composed of up to 20 clinical and measure experts, inclusive of internal and external consultants and subcontractors. 

                We provide a link to the 2024 measure re-evaluation report for the Hybrid HWR measure. The report describes what CORE did for 2024 Voluntary Reporting; we:

• Updated the ICD-10 code-based specifications used in the measures. Specifically, we:
  • Incorporated the code changes that occurred in the FY 2019 version of the ICD-10-CM/PCS (effective with October 1, 2018+ discharges) into the cohort definitions and risk models;
  • Applied a modified version of the FY 2019 V22 CMS-Hierarchical Condition Category (HCC) crosswalk that is maintained by RTI International to the risk models; and
  • Monitored code frequencies to identify any warranted specification changes due to possible changes in coding practices and patterns.
• Evaluated the stability of the risk-adjustment model over the three-year measurement period by examining the model variable frequencies, model coefficients, and the performance of the risk-adjustment model in each year.
• Assessed, for each of the conditions, logistic regression model performance in terms of discriminant ability for each year of data and for the three-year combined period. We computed two summary statistics to assess model performance: the predictive ability and the area under the receiver operating characteristic (ROC) curve (c-statistic).
                4.4.4 Risk Adjustment Modeling and/or Stratification Results

Please see the attached data dictionary for the final variables for each of the 15 risk models, with associated odds ratios.

                 

The data dictionary details the risk variables assessed for the Claims-Only HWR [Medicare FFS and MA], which is a national sample. The Hybrid HWR 2024 Voluntary Reporting results additionally include an assessment of the CCDE. We provide results from both samples to demonstrate results using a national sample and to include the CCDE variables (note that the Voluntary Reporting results include only hospitals that participated in voluntary reporting, n = ~1,600).

                4.4.4a Attach Risk Adjustment Modeling and/or Stratification Specifications
                4.4.5 Calibration and Discrimination

                 

                Model Testing Methods

                To assess model performance, we assessed model discrimination, calibration, and overfitting. To assess discrimination, we computed two discrimination statistics, the c-statistic and predictive ability. For all analyses, we provide results from both the claims-only (Medicare FFS and MA) and hybrid 2024 VR datasets. 

The c-statistic measures how well a statistical model distinguishes between patients with and without the outcome: it is the probability that a randomly selected patient who experienced the outcome received a higher predicted risk than a randomly selected patient who did not.
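For intuition, the c-statistic can be computed directly as a pairwise concordance rate, with tied predictions receiving half credit; this O(n²) sketch is for illustration, not a production AUC routine:

```python
def c_statistic(predicted, outcomes):
    """Fraction of (event, non-event) pairs in which the event patient
    has the higher predicted risk; ties count 1/2."""
    events = [p for p, y in zip(predicted, outcomes) if y == 1]
    non_events = [p for p, y in zip(predicted, outcomes) if y == 0]
    concordant = 0.0
    for e in events:
        for ne in non_events:
            if e > ne:
                concordant += 1.0
            elif e == ne:
                concordant += 0.5
    return concordant / (len(events) * len(non_events))
```

A value of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation of readmitted from non-readmitted patients.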

                Predictive ability measures the ability to distinguish high-risk subjects from low-risk subjects; therefore, for a model with good predictive ability, we would expect to see a wide range in observed outcomes between the lowest and highest deciles of predicted outcomes. To calculate the predictive ability, we calculated the range of mean observed outcomes between the lowest and highest predicted deciles of outcome probabilities. 
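The predictive-ability calculation described above (the range of mean observed outcomes between the lowest and highest deciles of predicted risk) can be sketched as follows; the equal-sized-decile simplification is ours:

```python
from statistics import mean

def predictive_ability(predicted, observed, n_deciles=10):
    """Return (lowest-decile mean, highest-decile mean, range) of the
    observed outcome across deciles of predicted risk; a wider range
    indicates better separation of high- from low-risk patients.
    Assumes n divisible by n_deciles for simplicity."""
    order = sorted(range(len(predicted)), key=lambda i: predicted[i])
    size = len(order) // n_deciles
    decile_means = [mean(observed[i] for i in order[d * size:(d + 1) * size])
                    for d in range(n_deciles)]
    return decile_means[0], decile_means[-1], decile_means[-1] - decile_means[0]
```

The same decile means, plotted against mean predicted outcomes, yield the calibration plots described in the following paragraph.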

                For assessments of model calibration, we provide calibration plots, with mean predicted and mean observed outcomes plotted against deciles of predicted outcomes. The closer the predicted outcomes are to the observed outcomes, the better calibrated the model is. 

                In addition, we provide an analysis of overfitting. Overfitting refers to the phenomenon in which a model accurately describes the relationship between predictive variables and outcome in the development dataset but fails to provide valid predictions in new patients. Estimated calibration values of γ0 close to 0 and estimated values of γ1 close to 1 provide evidence of good calibration of the model.
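For illustration, a calibration check of this form can be sketched as a logistic recalibration: in a validation sample, regress the observed outcome on the linear predictor from the development model and inspect the intercept (γ0) and slope (γ1). The data below are simulated so that γ0 ≈ 0 and γ1 ≈ 1 by construction; this is a sketch of the statistic, not the developers' code:

```python
import math
import random

random.seed(1)

# Simulated validation sample: linear predictors from a "development" model,
# with outcomes generated from those same predictors (illustrative only).
n = 5000
lp = [random.gauss(-1.7, 0.6) for _ in range(n)]
y = [1 if random.random() < 1 / (1 + math.exp(-x)) else 0 for x in lp]

# Fit logit(P(y=1)) = g0 + g1 * lp by Newton-Raphson.
g0, g1 = 0.0, 1.0
for _ in range(25):
    s0 = s1 = 0.0              # score vector
    i00 = i01 = i11 = 0.0      # Fisher information matrix entries
    for xi, yi in zip(lp, y):
        p = 1 / (1 + math.exp(-(g0 + g1 * xi)))
        r = yi - p
        w = p * (1 - p)
        s0 += r
        s1 += r * xi
        i00 += w
        i01 += w * xi
        i11 += w * xi * xi
    det = i00 * i11 - i01 * i01
    g0 += (i11 * s0 - i01 * s1) / det
    g1 += (i00 * s1 - i01 * s0) / det

# With well-calibrated, non-overfit predictions, g0 is near 0 and g1 near 1;
# g1 well below 1 in a validation sample would signal overfitting.
```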

                 

                Please see the attachment "Hybrid HWR All Tables and Figures" for the model testing results, which are described below.

                 

                Model Performance Results

                Discrimination and Calibration 

                As shown in Tables 12 and 13 (see attachment “Hybrid HWR All Tables and Figures”, pages 12-13), across specialty cohorts, c-statistics range from 0.600 to 0.695 in the Claims-Only HWR (Medicare FFS + MA) dataset (discharges July 1, 2018-June 30, 2019) and from 0.642 to 0.680 in the Hybrid HWR 2024 Voluntary Reporting dataset (discharges July 1, 2022-June 30, 2023).

                 

                Model testing shows a wide range of predictive ability (Tables 12 and 13), and risk decile plots (Figures 9 and 10 in the attachment “Hybrid HWR All Tables and Figures”) show that higher deciles of the predicted outcomes are associated with higher observed outcomes, demonstrating good calibration of the models across both datasets (Claims-Only, and HWR 2024 Voluntary Reporting). 

                Overfitting

                Tables 14 and 15 (see attachment “Hybrid HWR All Tables and Figures”, page 13) show overfitting results, demonstrating that γ0 in the validation samples is close to zero and γ1 is close to one across specialty cohorts, for both datasets (Claims-Only, and HWR 2024 Voluntary Reporting).

                4.4.6 Interpretation of Risk Factor Findings

                Our model testing results provide evidence for adequate discrimination across specialty cohorts. The c-statistic should be interpreted in the context of this particular measure (a readmission measure). If an outcome is more strongly related to quality of care than to patient characteristics, patient factors are less predictive of the outcome. The results from our variable selection suggest that for some cohorts within this measure, patient comorbidities have a relatively limited relationship to the occurrence of the outcome; the outcome is also predicted by other factors, such as the quality of care delivered by the facility.

                 

                Our calibration plots show that higher deciles of predicted outcomes are associated with higher observed outcomes, indicating good calibration of the models. The models also show a wide range of predictive ability. The overfitting statistics, with γ0 close to 0 and γ1 close to 1, likewise indicate good calibration of each of the models and are satisfactory across specialty cohorts.

                 

                Interpreted together, our diagnostic results demonstrate the risk-adjustment model adequately controls for differences in patient characteristics.

                4.4.7 Final Approach to Address Risk Factors
                Risk adjustment approach
                On
                Conceptual model for risk adjustment
                On
                • 5.1 Contributions Towards Advancing Health Equity

                  Please see Social Risk Factors attachment. 

                  Social Risk Factors

                  We weigh social risk factor adjustment using a comprehensive approach that evaluates the following:

                  •   Well-supported conceptual model for influence of social risk factors on measure outcome (detailed below);

                  •   Feasibility of testing meaningful social risk factors in available data; and

                  •   Empiric testing of social risk factors. 

                  Below, we summarize the findings of the literature review and conceptual pathways by which social risk factors may influence risk of the outcome, as well as the statistical methods for social risk factor empiric testing. Our conceptualization of the pathways by which patients’ social risk factors affect the outcome is informed by the literature cited below and IMPACT Act–funded work by the National Academy of Science, Engineering and Medicine (NASEM) and the Department of Health and Human Services Assistant Secretary for Policy and Evaluation (ASPE 2016; ASPE 2020).

                   

                  Causal Pathways for Social Risk Variable Selection

                  Although some recent literature evaluates the relationship between patient social risk factors and the readmission outcome, few studies directly address causal pathways or examine the role of the hospital in these pathways.1, 2, 4, 7, 8, 11, 14, 17 Moreover, the current literature examines a wide range of conditions and risk variables with no clear consensus on which risk factors demonstrate the strongest relationship with readmission.

                  The social risk factors that have been examined in the literature can be categorized into three domains: (1) patient-level variables, (2) neighborhood/community-level variables, and (3) hospital-level variables.

                  Patient-level variables describe characteristics of individual patients and include the patient’s income or education level.8 Neighborhood/community-level variables use information from sources such as the American Community Survey (ACS) as either a proxy for individual patient-level data or to measure environmental factors. Studies using these variables use one-dimensional measures such as median household income or composite measures such as the Area Deprivation Index (ADI).12, 15, 16 Some of these variables may include the local availability of clinical providers.5-6 Hospital-level variables measure attributes of the hospital that may be related to patient risk. Examples of hospital-level variables used in studies are ZIP code characteristics aggregated to the hospital level or the proportion of Medicaid patients served in the hospital.3, 9-10

                  The conceptual relationship, or potential causal pathways by which these possible social risk factors influence the risk of readmission following an acute illness or major surgery, like the factors themselves, are varied and complex. There are at least four potential pathways that are important to consider:

                  1. Patients with social risk factors may have worse health at the time of hospital admission. Patients who have lower income/education/literacy or unstable housing may have a worse general health status and may present for their hospitalization or procedure with a greater severity of underlying illness. These social risk factors, which are characterized by patient-level or neighborhood/community-level (as proxy for patient-level) variables, may contribute to worse health status at admission due to competing priorities (restrictions based on job), lack of access to care (geographic, cultural, or financial), or lack of health insurance. Given that these risk factors all lead to worse general health status, this causal pathway should be largely accounted for by current clinical risk-adjustment.
                  2. Patients with social risk factors often receive care at lower quality hospitals. Patients of lower income, lower education, or unstable housing have inequitable access to high quality facilities, in part, because such facilities are less likely to be found in geographic areas with large populations of poor patients. Thus, patients with low income are more likely to be seen in lower quality hospitals, which can explain increased risk of readmission following hospitalization.
                  3. Patients with social risk factors may receive differential care within a hospital. The third major pathway by which social risk factors may contribute to readmission risk is that patients may not receive equivalent care within a facility. For example, patients with social risk factors such as lower education may require differentiated care (e.g., provision of lower-literacy patient education materials) that they do not receive.
                  4. Patients with social risk factors may experience worse health outcomes beyond the control of the health care system. Some social risk factors, such as income or wealth, may affect the likelihood of readmissions without directly affecting health status at admission or the quality of care received during the hospital stay. For instance, while a hospital may make appropriate care decisions and provide tailored care and education, a lower-income patient may have a worse outcome post-discharge due to competing financial priorities which don’t allow for adequate recuperation or access to needed treatments, or a lack of access to care outside of the hospital.

                  Although we analytically aim to separate these pathways to the extent possible, we acknowledge that risk factors often act on multiple pathways, and as such, individual pathways can be complex to distinguish analytically. Further, some social risk factors, despite having a strong conceptual relationship with worse outcomes, may not have statistically meaningful effects on the risk model. The pathways also have different implications for the decision whether to risk adjust.

                  Based on this conceptual model, and because the Area Deprivation Index (ADI) and dual-eligibility variables aim to capture the social risk factors that are most likely to influence these pathways (income, education, housing, and community factors), the following social risk variables were considered for risk adjustment:

                  • Dual-eligible status: Dual eligibility for Medicare and Medicaid is available at the patient level in the Medicare Master Beneficiary Summary File. The eligibility threshold for over 65-year-old Medicare patients considers both income and assets. For the dual-eligible (DE) indicator, there is a body of literature demonstrating differential health care and health outcomes among beneficiaries.18 
                  • High Area Deprivation Index (ADI): The ADI, initially developed by Health Resources & Services Administration (HRSA), is based on 17 measures across four domains: income, education, employment, and housing quality.12, 16

                  The 17 components are listed below:

                  • Population aged ≥ 25 y with < 9 y of education, %
                  • Population aged ≥ 25 y with at least a high school diploma, %
                  • Employed persons aged ≥ 16 y in white collar occupations, %
                  • Median family income, $
                  • Income disparity
                  • Median home value, $
                  • Median gross rent, $
                  • Median monthly mortgage, $
                  • Owner occupied housing units, % (home ownership rate)
                  • Civilian labor force population aged ≥16 y unemployed, % (unemployment rate)
                  • Families below poverty level, %
                  • Population below 150% of the poverty threshold, %
                  • Single parent households with children aged < 18 y, %
                  • Households without a motor vehicle, %
                  • Households without a telephone, %
                  • Occupied housing units without complete plumbing, % (log)
                  • Households with more than 1 person per room, % (crowding)

                  ADI scores were derived using the beneficiary’s 9-digit ZIP Code of residence, obtained from the Medicare Enrollment Database and linked to 2017-2021 US Census/American Community Survey (ACS) data. In accordance with the ADI developers’ methodology, an ADI score is calculated for the census block group corresponding to the beneficiary’s 9-digit ZIP Code using 17 weighted Census indicators. Raw ADI scores were then transformed into national percentile rankings ranging from 1 to 100, with lower scores indicating lower levels of disadvantage and higher scores indicating higher levels of disadvantage. Percentile thresholds established by the ADI developers were then applied to dichotomize neighborhoods into more disadvantaged areas (high ADI: ranking equal to or greater than 85) or less disadvantaged areas (low ADI: ranking of less than 85).

                  References:

                  1. Buntin MB, Ayanian JZ. Social Risk Factors and Equity in Medicare Payment. New England Journal of Medicine. 2017;376(6):507-510.
                  2. Chang W-C, Kaul P, Westerhout CM, Graham MM, Armstrong PW. Effects of socioeconomic status on mortality after acute myocardial infarction. The American Journal of Medicine. 2007;120(1):33-39.
                  3. Gilman M, Adams EK, Hockenberry JM, et al. California safety-net hospitals likely to be penalized by ACA value, readmission, and meaningful-use programs. Health Aff (Millwood). Aug 2014; 33(8):1314-22.
                  4. Gopaldas RR, Chu D. Predictors of surgical mortality and discharge status after coronary artery bypass grafting in patients 80 years and older. The American Journal of Surgery. 2009;198(5):633-638.
                  5. Herrin J, Kenward K, Joshi MS, Audet AM, Hines SJ. Assessing Community Quality of Health Care. Health Serv Res. 2016 Feb;51(1):98-116. doi: 10.1111/1475-6773.12322. Epub 2015 Jun 11. PMID: 26096649; PMCID: PMC4722214.
                  6. Herrin J, St Andre J, Kenward K, Joshi MS, Audet AM, Hines SC. Community factors and hospital readmission rates. Health Serv Res. 2015 Feb;50(1):20-39. doi: 10.1111/1475-6773.12177. Epub 2014 Apr 9. PMID: 24712374; PMCID: PMC4319869.
                  7. Hamadi H, Moody L, Apatu E, Vossos H, Tafili A, Spaulding A. Impact of hospitals' Referral Region racial and ethnic diversity on 30-day readmission rates of older adults. J Community Hosp Intern Med Perspect. 2019;9(3):181-188.
                  8. Imran A, Rawal MD, Botre N, Patil A. Improving and Promoting Social Determinants of Health at a System Level. Jt Comm J Qual Patient Saf. 2022;48(8):376-384. 
                  9. Jha AK, Orav EJ, Epstein AM. Low-quality, high-cost hospitals, mainly in South, care for sharply higher shares of elderly black, Hispanic, and medicaid patients. Health affairs 2011; 30:1904-11.
                  10. Joynt KE, Jha AK. Characteristics of hospitals receiving penalties under the Hospital Readmissions Reduction Program. JAMA. Jan 23 2013; 309(4):342-3.
                  11. Kim C, Diez Roux AV, Hofer TP, Nallamothu BK, Bernstein SJ, Rogers MAM. Area socioeconomic status and mortality after coronary artery bypass graft surgery: the role of hospital volume. American Heart Journal. 2007;154(2):385-390.
                  12. Kind AJH, Buckingham W. Making neighborhood disadvantage metrics accessible: the Neighborhood Atlas. New England Journal of Medicine. 2018;378:2456-2458. DOI: 10.1056/NEJMp1802313. PMCID: PMC6051533. AND University of Wisconsin School of Medicine and Public Health. 2023 Area Deprivation Index v4.0. Downloaded from https://www.neighborhoodatlas.medicine.wisc.edu/. 
                  13. LaPar DJ, Bhamidipati CM, et al. Primary payer status affects mortality for major surgical operations. Annals of Surgery. 2010;252(3):544-551.
                  14. Lindenauer PK, Lagu T, Rothberg MB, et al. Income inequality and 30 day outcomes after acute myocardial infarction, heart failure, and pneumonia: retrospective cohort study. BMJ. 2013 Feb 14; 346:f521. doi: 10.1136/bmj.f521.
                  15. Powell WR, Sheehy AM, Kind AJ. The Area Deprivation Index is the most scientifically validated social exposome tool available for policies advancing health equity. Health Affairs Forefront. 2023.
                  16. Singh, G. K. (2003). Area Deprivation and Widening Inequalities in US Mortality, 1969–1998. American Journal of Public Health, 93(7), 1137–1143. https://doi.org/10.2105/ajph.93.7.1137
                  17. Trivedi AN, Nsa W, Hausmann LR, et al. Quality and equity of care in U.S. hospitals. The New England journal of medicine 2014; 371:2298-308.
                  18. Office of the Assistant Secretary for Planning and Evaluation, U.S. Department of Health & Human Services. Second Report to Congress on Social Risk Factors and Performance in Medicare’s Value-Based Purchasing Program. 2020. https://aspe.hhs.gov/social-risk-factors-and-medicares-value-basedpurchasing-programs

                   

                  Social Risk Factors Summary

                  While our testing results (see below, and in the attachment of figures and tables) show that patients with social risk factors (DE or high ADI) have higher unadjusted rates of the outcome, we find that the impact of each social risk factor on measure scores is minimal: measure scores calculated with and without each social risk factor are highly correlated, and differences between measure scores calculated with and without each social risk factor are small. These empiric results, together with the measure’s use in a pay-for-reporting (not pay-for-performance) program and CMS’s desire to not mask disparities, support the decision to not adjust the measure for social risk factors. To better understand disparities related to readmission, CMS instead reports readmission measures stratified by social risk factors (DE, high ADI) and by race/ethnicity. For more information on the stratification approach, please see Section 1.19.

                  We note that the existing FFS-only, claims-only HWR measure has been stratified by DE and high ADI, and that while the current hybrid MA+FFS HWR measure is not currently stratified by social risk factors, testing for potential future stratification by social risk factors is ongoing.

                   

                  Analysis #1: Variation in prevalence of the factor across measured entities in the Claims Only HWR (Medicare FFS and MA) dataset

                  The prevalence of social risk factors at hospital-level in the HWR cohort varies widely across hospitals (Table 16, "Hybrid HWR All Tables and Figures" attachment). In the Claims Only HWR (Medicare FFS and MA) dataset, the median percentage of dual-eligible patients was 15.8% (Interquartile Range [IQR]:  10.8%-23.3%) and the median percentage of patients with high ADI variable [score equal to or above 85] was 12.4% (IQR: 2.4%-30.8%). 

                  Analysis #2:  Observed outcome rates in patients with social risk factors

                  In the Claims Only HWR (Medicare FFS and MA) dataset, patient-level observed readmission rates were higher for dual-eligible patients (19.6%) compared with non-dual-eligible patients (14.7%) (Table 17, "Hybrid HWR All Tables and Figures" attachment). Similarly, the observed readmission rate for patients with the high ADI variable was higher (17.2%) compared with patients without the social risk factor (15.2%). 

                  Analysis #4: Impact of social risk factor on hospital-level measure scores

                  To determine the impact of adding social risk factors on measure scores, we compared correlation coefficients of measure scores calculated with and without the social risk factors in the models (Table 19, and Figures 11 and 12, "Hybrid HWR All Tables and Figures" attachment), and we compared differences in measure scores (Table 19). Hospitals’ risk-standardized readmission rates (RSRRs) are highly correlated: in the Claims-Only HWR (Medicare FFS and MA) dataset, the correlation coefficient between measure scores calculated with and without the high ADI variable is 0.999 (Figure 11), and the correlation coefficient between measure scores calculated with and without the dual-eligible variable is 0.995 (Figure 12). The median difference in hospitals’ RSRRs when adding either social risk factor is small (0.001% for high ADI and 0.000% for dual eligibility in the Claims-Only HWR [Medicare FFS and MA] dataset; Table 19).
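This comparison reduces to a Pearson correlation coefficient and a median score difference between the two model variants. A sketch on simulated hospital RSRRs (illustrative values only, not measure results):

```python
import math
import random
import statistics

random.seed(3)

# Hypothetical hospital RSRRs computed with and without a social risk factor
# in the model; the two sets of scores differ only by small perturbations.
base = [random.gauss(0.155, 0.010) for _ in range(3000)]
with_srf = [s + random.gauss(0.0, 0.0002) for s in base]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

r = pearson(base, with_srf)           # very close to 1
median_diff = statistics.median(abs(a - b) for a, b in zip(base, with_srf))
```

A correlation near 1 and a median difference near 0, as in Table 19, indicate that adding the social risk factor barely changes hospitals' scores.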

                  • 6.1.1 Current Status
                    Yes
                    6.1.3a Please specify the other current use
                    While this re-specified measure with MA and FFS admissions is currently not in use, the prior hybrid HWR measure (with FFS admissions only) is in use for public reporting and for quality improvement.
                    6.1.4 Program Details
                    Hospital Inpatient Quality Reporting Program (IQR), CMS, https://qualitynet.cms.gov/inpatient, The measure has been implemented as part of CMS's Hospital Inpatient Quality Reporting (IQR) Program, which is a national pay-for-quality-data-reporting program, The Hospital IQR Program includes nearly 4,500 acute care hospitals across the nation and 70 million Medicare beneficiaries, The level of measurement is the facility; the setting is the Hospital Inpatient.
                  • 6.2.1 Actions of Measured Entities to Improve Performance

                    The outcome of unplanned hospital visits following discharge from an inpatient admission is a widely accepted measure of care quality. The HWR measure provides the opportunity to improve the quality of care and to lower rates of adverse events that result in unplanned readmission after an inpatient stay.

                    There are evidence-based interventions that can reduce readmission rates. These interventions often address inadequate transitions of care, including patient education at discharge and coordination of outpatient care. For example, a 2021 systematic review that analyzed 60 trials, including 19 randomized controlled trials, concluded, in agreement with prior systematic reviews, that interventions that focus on communication at discharge were statistically significantly associated with lower rates of hospital readmissions (Becker et al., 2021). Within the 19 trials, 10 focused on medication counselling and six focused on patient education about their condition; the other three focused on other specific communication strategies. A 2022 systematic review found that post-discharge care including home care, telephone, and/or clinic visits resulted in lower rates of readmission compared with “usual care” for cardiac patients (Chauhan & McAlister, 2022). A systematic review published in 2023 pooled the results from 73 different studies to compare transitional care interventions with different levels of complexity and their impact on improving outcomes and found that low- and medium-complexity interventions were the most effective at reducing 30-day readmissions (Tyler et al., 2023). Study authors found that compared with usual care, readmission rates were reduced by 18 percent to 55 percent for these types of interventions. Complexity was categorized by the number of components of the intervention and the number of stages of the hospitalization at which the intervention was implemented. Finally, CMS has published a guide for hospitals, aimed at leadership, staff, and clinicians, which outlines effective strategies for reducing readmissions and reducing disparities. 
Strategies covered in the guide include: ensuring that patients understand discharge instructions and have appropriate follow-up visits, improving accessibility (transportation) for post-discharge care, ensuring patients have a primary care provider, starting post-discharge visit planning early in the discharge process, ensuring transfer of information to the post-discharge provider, and strategies to address language barriers and low health literacy (CMS Office of Minority Health, 2024).

                    References

                    1. Becker, C., Zumbrunn, S., Beck, K., Vincent, A., Loretz, N., Müller, J., Amacher, S. A., Schaefert, R., & Hunziker, S. (2021). Interventions to Improve Communication at Hospital Discharge and Rates of Readmission: A Systematic Review and Meta-analysis. JAMA network open, 4(8), e2119346. https://doi.org/10.1001/jamanetworkopen.2021.19346
                    2. Chauhan, U., & McAlister, F. A. (2022). Comparison of Mortality and Hospital Readmissions Among Patients Receiving Virtual Ward Transitional Care vs Usual Post discharge Care: A Systematic Review and Meta-analysis. JAMA network open, 5(6), e2219113. https://doi.org/10.1001/jamanetworkopen.2022.19113
                    3. CMS Office of Minority Health (2024). Guide for Reducing Disparities in Readmissions. Accessed April 23, 2024; https://www.cms.gov/about-cms/agency-information/omh/downloads/omh_readmissions_guide.pdf
                    4. Tyler, N., Hodkinson, A., Planner, C., Angelakis, I., Keyworth, C., Hall, A., Jones, P. P., Wright, O. G., Keers, R., Blakeman, T., & Panagioti, M. (2023). Transitional Care Interventions From Hospital to Community to Reduce Health Care Use and Improve Patient Outcomes: A Systematic Review and Network Meta-Analysis. JAMA network open, 6(11), e2344825. https://doi.org/10.1001/jamanetworkopen.2023.44825

                     

                     

                    6.2.2 Feedback on Measure Performance

                    CMS receives feedback on all its measures through the publicly available Q&A tool on Quality Net. Through this tool, we have received, since the last submission, only basic questions about the measure, including the cohort definition, the outcome definition, and specific questions about a facility’s data. We did not receive any suggestions for changes to the claims-based portion of this measure.

                    Additionally, the EHR portion of this measure goes through the Annual Updates Process (required for all eCQMs), which includes coding and logic review. Since the 2023 Voluntary Reporting of the measure, we have also received suggestions from stakeholders regarding logic and coding updates through this process, as well as through JIRA.

                    6.2.3 Consideration of Measure Feedback

                    Major changes to the Hybrid HWR measure since it was last endorsed in 2019 include the addition of Medicare Advantage patients, which was finalized in the 2024 Inpatient Prospective Payment System (IPPS) Rule¹ to be incorporated in the measure for discharges July 1, 2024-June 30, 2025, for 2026 Reporting (FY 2027 payment determination). Details regarding the impact of this change are in Appendix E of the Hybrid HWR Comprehensive Methodology Report.

                    Minor measure updates to the EHR portion of the measure include:

                    • Annual digital quality measure maintenance, including coding, value set, and logic updates.
                    • Excluding patients with a primary or secondary diagnosis of COVID-19 from the measure cohort.
                    • Risk-adjustment for patients with history of COVID-19.

                    Measure developers carry out annual cycles of measure reevaluation, aiming to continuously improve the measure and to be responsive to stakeholder input. Through stakeholder Q&A, developers have been made aware of implementation challenges faced by hospitals during 2024 Voluntary Reporting:

                    Hospitals provided feedback regarding the topic of acceptable CCDE units for submission:

                    • Hospitals reported difficulty determining which units for CCDE are acceptable for submission and are ultimately used for measure calculation. We note the current strategy for missing or unusable data is to substitute the median value reported for that CCDE, assuming a somewhat typical patient. Developers review units submitted by hospitals during data pre-processing each year, with the goal of including as many units as possible for measure calculation. We note limitations with units that cannot be standardized to a common unit without additional lab values, and with unusable data such as text/string data (e.g., “high- see Dr. John”), an ongoing challenge in the eCQM community.
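The median-substitution strategy for missing or unusable CCDE values described above can be sketched as follows (a minimal illustration with hypothetical values, not the developers' implementation):

```python
import statistics

def substitute_missing(values):
    """Replace None (missing/unusable) entries with the median of the usable values."""
    usable = [v for v in values if v is not None]
    med = statistics.median(usable)
    return [v if v is not None else med for v in values]

# e.g., first-captured heart-rate values where two entries were unusable
# (missing, non-standardizable units, or text/string data mapped to None)
heart_rates = [72, None, 88, 64, None, 80]
cleaned = substitute_missing(heart_rates)   # missing entries become 76.0
```

Substituting the median assumes a roughly typical patient for the missing value, so a hospital is neither rewarded nor penalized for unusable CCDE submissions.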

                    Hospitals provided feedback regarding submission of linking variables used to merge claims to EHR data, as well as IQR threshold requirements (which are not used towards measure calculation):

                    • Hospitals expressed concern about meeting IQR program requirements, under which hospitals must submit CCDE (within 24 hours before or up to 24 hours after inpatient admission for labs; within 24 hours before or up to 2 hours after inpatient admission for vital signs) for 90% of discharges, and the linking variables (used to merge EHR to claims data) for 95% of discharges, in order to receive their Annual Payment Update. These comments were heard by CMS and the measure developer, and the proposal that submission of CCDE remain voluntary for 2025 reporting was included as a rider to the Outpatient Prospective Payment System Proposed Rule.² Additionally, an update to expand the CCDE lookback period beyond the 24 hours prior to/after inpatient admission is being finalized through the 2025 Annual Updates Cycle.
                    • Additionally, through stakeholder Q&A, hospitals voiced difficulty submitting a linking variable, the Medicare Beneficiary Identifier (MBI), for Medicare Advantage patients. While MBI is available for patients in the claims portion of this measure, hospitals note its collection is not fully integrated into hospitals' EHR systems. CMS and the measure developer are aware of this limitation for hospitals beginning with Reporting Year 2026 and have been in contact with multiple hospitals to address this issue for future reporting. 

                    Reference

                    1. Medicare Program; Hospital Inpatient Prospective Payment Systems for Acute Care Hospitals and the Long- Term Care Hospital Prospective Payment System and Policy Changes and Fiscal Year 2024 Rates; Quality Programs and Medicare Promoting Interoperability Program Requirements for Eligible Hospitals and Critical Access Hospitals; Rural Emergency Hospital and Physician-Owned Hospital Requirements; and Provider and Supplier Disclosure of Ownership; and Medicare Disproportionate Share Hospital (DSH) Payments: Counting Certain Days Associated with Section 1115 Demonstrations in the Medicaid Fraction. https://www.govinfo.gov/content/pkg/FR-2023-08-28/pdf/2023-16252.pdf
                    2. Medicare and Medicaid Programs: Hospital Outpatient Prospective Payment and Ambulatory Surgical Center Payment Systems; Quality Reporting Programs, Including the Hospital Inpatient Quality Reporting Program; Health and Safety Standards for Obstetrical Services in Hospitals and Critical Access Hospitals; Prior Authorization; Requests for Information; Medicaid and CHIP Continuous Eligibility; Medicaid Clinic Services Four Walls Exceptions; Individuals Currently or Formerly in Custody of Penal Authorities; Revision to Medicare Special Enrollment Period for Formerly Incarcerated Individuals; and All-Inclusive Rate Add-On Payment for High-Cost Drugs Provided by Indian Health Service and Tribal Facilities. Proposed on July 22, 2024. https://www.federalregister.gov/documents/2024/07/22/2024-15087/medicare-and-medicaid-programs-hospital-outpatient-prospective-payment-and-ambulatory-surgical
                    6.2.4 Progress on Improvement

                    The Hybrid HWR measure, although slated for 2025 Reporting in IQR, has undergone two rounds of Voluntary Reporting, in which a small subset of hospitals participated (n=724 [2023]; n=1,162 [2024]). As such, progress on improvement cannot be generalized due to the limited sample, the few years available for comparison, and the self-selecting nature of hospitals that participate in voluntary reporting (such hospitals typically score better on the readmission outcome). 

                    However, we note that there has been improvement in outcomes for Medicare FFS patients based on data from the related claims-only HWR measure that is currently in use and has been publicly reported in IQR since 2013. We compared national unadjusted outcomes for the claims-only HWR measure (which differs from the hybrid version only in that it does not include the CCDE for risk adjustment) and found that observed outcomes have improved both at the patient level across all cohorts and the overall measure (Figure 13, "Hybrid HWR All Tables and Figures" attachment), and at the hospital level across the distribution (Figure 14, "Hybrid HWR All Tables and Figures" attachment). For example, during the 2018 reporting period (discharges July 1, 2016—June 30, 2017), average patient-level observed (unadjusted) readmission rates across all cohorts (HWR) were 15.3%, compared with 14.6% in the 2024 reporting period (discharges July 1, 2022—June 30, 2023) (Table 20, "Hybrid HWR All Tables and Figures" attachment). Patient-level observed readmission rates also decreased in all five cohorts in this same timeframe (2024 compared with 2018). In addition, hospital-level observed outcomes were lower in 2024 compared with 2018 (Table 21), with medians of 13.1% [IQR: 10.5%-15.2%] and 14.6% [IQR: 12.0%-16.8%], respectively.

                    6.2.5 Unexpected Findings

                    There were no unintended impacts on patients, or on care delivered by hospitals, during implementation of this measure. However, there were some challenges with respect to EHR data (CCDE) and IQR threshold requirements (not used for measure calculation), as described in Section 6.2.3.

                      Submitted by Harold Miller on Fri, 12/06/2024 - 16:53

                      Endorsement should be removed from this measure.  It is not a valid measure of the outcomes of care for hospitalized patients since it only measures post-discharge complications treated during an inpatient admission to the hospital, not complications treated during an observation stay or in an emergency department.  In addition, death is not considered a negative outcome, despite research showing that reductions in readmissions for patients may result in a significant increase in mortality. The use of electronic health record variables appears to reduce rather than improve the effectiveness of the risk adjustment methodology compared to use of variables from claims data alone, and many hospitals and patients are excluded from the measure entirely because the EHR variables are not reported for them.  The risk adjustment methodology fails to adjust for a patient’s income or access to outpatient care after discharge, even though these are two of the most important factors affecting readmission and they are outside the control of the hospital that provided inpatient care.  Insufficient data were provided to assess the reliability of the measure.  Use of the measure could harm patients and could harm hospitals that serve a higher proportion of patients who have limited access to care. 

                       

                       

                      Problems with the Numerator and Denominator

                       

                      The numerator of the measure includes events that have nothing to do with the original hospitalization, while excluding events that represent undesirable outcomes and avoidable costs, so it cannot be used to assess whether quality care is being delivered or quality is being improved.  Research has demonstrated that hospital readmission measures are problematic because of these concerns (see, for example, Gupta A, Fonarow G. “The Hospital Readmissions Reduction Program – Learning from Failure of a Healthcare Policy,” European Journal of Heart Failure 20:1169-1174), and as a result, there have been growing calls to stop using readmission measures to evaluate hospital performance (e.g., Figueroa JF and Wadhera RK.  “A Decade of Observing the Hospital Readmission Reductions Program – Time to Retire an Ineffective Policy,” JAMA Network Open 5(11): e2242593).

                      • Exclusion of Patients Treated Without a Formal Inpatient Admission.  The measure only counts inpatient admissions to a hospital, not observation stays, emergency department visits, or visits to urgent care centers for treatment of a recurrence of the condition treated during the index hospital admission or of complications of the treatment received during the index admission.  A patient could have a serious complication following hospital discharge, but if the complication can be successfully treated in an emergency department without admitting the patient to the hospital, or if the hospital treatment is classified as an observation stay rather than an inpatient admission, the complication will not be counted by the measure.  Hospitals that are experiencing shortages of inpatient beds may have to treat patients in the ED, and they will have lower “readmission” rates as a result. Moreover, because observation stays are not included, the measure definition creates an incentive to treat patients with complications in observation stays rather than admitting them as inpatients, since this will reduce the calculated readmission rate.  In addition, the proposed measure would include patients with Medicare Advantage plans; many of these plans frequently deny payment for inpatient stays and force the hospital to classify them as observation stays.  This means that hospitals with a higher percentage of patients on Medicare Advantage could appear to have lower readmission rates simply because of the way the readmissions are coded in claims data.

                        The measure developer provided no information at all about the proportion of discharged patients who were treated in other settings during the 30 days after discharge, and no analysis was provided regarding the variation across hospitals in the proportion of patients treated in other settings, even though that variation could be the primary reason for variation in the readmission rates.

                        This measure did not have to be limited to inpatient readmissions.  The “Excess Days in Acute Care (EDAC)” measures that were created by the same developer for acute myocardial infarction, heart failure, and pneumonia count ED visits and observation stays as well as inpatient admissions in the numerators.  Moreover, the measure developer’s methodology reports for those measures (e.g., Excess Days in Acute Care after Hospitalization for Heart Failure, August 2015) explain why using a measure limited to hospital readmissions is undesirable: “Suboptimal transitions contribute to a variety of adverse outcomes post-discharge, including ED evaluation, need for observation, and readmission. Measures of unplanned readmission already exist, but there are no current measures for ED and observation stay utilization. It is thus difficult for providers and consumers to gain a complete picture of post-discharge outcomes. Moreover, separately reporting each outcome encourages “gaming,” such as recategorizing readmission stays as observation stays to avoid a readmission outcome. By capturing a range of outcomes that are important to patients, we can produce a more complete picture of post-discharge outcomes that better informs consumers about care quality and incentivizes global improvement in transitional care.”
                      • Failure to Include Deaths.  The numerator does not include deaths resulting from the initial hospital stay; it only includes hospital admissions.  Not only is death a more serious complication than a hospital admission, but a hospital with a high rate of deaths after discharge could also appear to have better performance on this measure.  Moreover, excluding deaths from the measure creates a perverse incentive to avoid admitting a patient to the hospital for treatment during the 30-day post-discharge period, even if failure to admit the patient could result in their death. Research has shown that reductions in readmission rates for patients with heart failure and pneumonia following implementation of the Hospital Readmissions Reduction Program were accompanied by increases in mortality rates (Wadhera RK et al.  “Association of the Hospital Readmissions Reduction Program with Mortality Among Medicare Beneficiaries Hospitalized for Heart Failure, Acute Myocardial Infarction, and Pneumonia,” JAMA 320(24), 2018), but under this measure, a hospital with a lower readmission rate and higher post-discharge mortality would be classified as having better performance.
                      • Inclusion of Unrelated Admissions.  The measure defines a readmission as any unplanned admission to any hospital that occurs within 30 days after the patient was discharged.  However, patients are admitted to hospitals for many reasons, and a patient who is discharged from the hospital after treatment for one condition could have an injury or acute illness unrelated to that condition that requires hospital treatment within 30 days after the initial discharge.  It is inappropriate to treat all such hospital admissions as “readmissions” for the condition treated during the index hospital admission and to imply that all or most such admissions reflect poor quality of care by the hospital staff and physicians who treated the patient during the initial admission.  

                        The measure developer provided no information at all about how many of the readmissions were for diagnoses that were related to the condition treated during the index admission.
                      • Inclusion of Patients Discharged to Post-Acute Care Facilities.  The measure makes no distinction as to where the patient was discharged.  A patient who is discharged to a Skilled Nursing Facility could experience an injury or poor care in the SNF and have to be admitted to the hospital for treatment of the resulting problems. However, under this measure, that admission will be treated as a result of poor care of the patient by the hospital, rather than poor care by the SNF.
                      • Exclusion of Patients for Whom EHR Data Are Not Available.  Patients are only included in the denominator of the measure if a majority of the EHR variables are available for them.  Since hospitals are not required to submit EHR data for every patient, failure by a hospital to report data for patients at high risk of readmission could result in calculation of an incorrectly low readmission rate for that hospital.

                       

                       

                      Problems with the Risk Adjustment Methodology

                      • Worse Performance from Use of EHR Variables.  The key difference between this measure and current claims-based measures is the inclusion of 13-14 “core clinical data element (CCDE)” variables in the risk adjustment model that describe the patient’s condition during the hospital stay, such as heart rate, blood pressure, and hematocrit.  However, Table 11 indicates that the c-statistics for the models using the CCDE variables are only very slightly higher than the models without the CCDE variables; in fact, some increase in the c-statistic would be expected simply because more variables are being used.  Tables 14-15 show that calibration of the model is worse when the CCDE variables are used, and Figures 9-10 show larger gaps between observed and predicted readmission rates when the CCDE variables are used.  In addition, the detailed methodology report shows that the odds ratios for most of the CCDE variables are 1 or only slightly different from 1, meaning that they have little impact on the predicted readmission risk.
                      • Failure to Adjust for Limited Access to Outpatient Services and Inability to Afford Medications.  It is obvious that patients who cannot afford their medications or cannot obtain primary care will be more likely to be admitted to a hospital, both for problems that are related to the original admission and for unrelated problems.  For example, patients who live in isolated rural areas and areas with shortages of primary care physicians will have greater difficulty receiving follow-up care after discharge for the condition treated during the index admission and also greater difficulty receiving both primary care and specialty care for their other conditions, but there is nothing in the risk adjustment methodology that addresses this.  For many health conditions, effective management of the condition after discharge requires that patients use prescribed medications, but if these medications are expensive and patients cannot afford them, they will experience complications that can require hospitalization.  Hospitals that have a higher proportion of patients who are unable to access primary care or afford their medications will likely have higher readmission rates regardless of what the hospital does to improve discharge planning and coordination.  However, there are no variables in the risk adjustment model to adjust for this.
                      • Failure to Adjust for Low Income Status.  The analysis of social risk factors reported by the developer demonstrated that lower income individuals are much more likely to be readmitted to the hospital after discharge even after controlling for other factors.  As shown in Table 17, dual eligible individuals (i.e., Medicare beneficiaries who are also eligible for Medicaid) have an average readmission rate of 19.6% versus 14.7% for non-dual eligible beneficiaries.  The developer says that “differences between measure scores calculated with and without each social risk factor are small,” but no data are provided on the size of these differences, and there is no information on the magnitude or statistical significance of dual eligibility (DE) or the area deprivation index variables when they were included in the risk adjustment models.  The correlation analysis presented in Table 19 is not a valid way to determine whether a risk adjustment variable should be included in the model; if it were, the same analysis should have been provided for all of the variables in the model, not just those the developer wanted to exclude.

                        The developer says that instead of adjusting the measure for social risk factors, the Centers for Medicare and Medicaid Services (CMS) reports the readmission measure stratified by social risk factors.  However, there would be no purpose to doing this unless the social risk factor variables had a significant impact on the predicted rates.  In fact, the 2024 CMS Disparity Methods Updates and Specifications Report prepared by the developer confirms that dual eligible status has a significant impact on a patient’s probability of readmission at all hospitals.  It shows that for patients in the cardiorespiratory, cardiovascular, neurology, and surgery categories, 96%-100% of hospitals had higher readmission rates for dual eligible patients than non-dual eligibles.

                        This means it is likely that the readmission rate for a hospital with more dual eligible patients will be higher than the readmission rate for a hospital with fewer dual eligible patients simply because of the patient mix, not because of any differences in the quality of care delivered by the hospital.  Consequently, it is inappropriate to use this measure for public reporting or quality improvement without adjusting for dual eligibility status in the prediction model. Failure to make this adjustment could discourage hospitals and clinicians from treating low income patients, which could even further reduce low income patients’ access to care and increase disparities in outcomes for disadvantaged patients.

                       

                      Failure to Adequately Assess the Reliability of the Measure

                       

                      The developer did not report signal-to-noise (inter-unit) reliability for the measure, claiming that the signal-to-noise calculation “should be based on a statistical model” but that this measure “is not calculated from a single statistical model.”  Signal-to-noise reliability is a function of the variance between hospitals compared to the total variance, so it can be calculated regardless of how the predicted values are determined.  Nothing in the reference cited by the developer supports the contention that signal-to-noise reliability cannot be calculated for this measure.  Moreover, signal-to-noise reliability could have been determined for each of the individual models created for the five subgroups of hospital discharges, since failure to reliably classify a hospital in one category diminishes the accuracy of the overall classification resulting from the combination of the individual models. 
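The variance-ratio definition of signal-to-noise reliability invoked above can be sketched as follows. This is a minimal illustration with hypothetical hospital rates and admission counts; it is neither the developer's nor the commenter's actual calculation.

```python
# Minimal sketch of signal-to-noise reliability: between-hospital
# (signal) variance divided by the total variance of an observed rate.
# All rates and admission counts below are hypothetical.

def signal_to_noise(true_rates, n_per_hospital):
    """Reliability = var(signal) / (var(signal) + mean sampling variance)."""
    k = len(true_rates)
    mean = sum(true_rates) / k
    signal_var = sum((p - mean) ** 2 for p in true_rates) / (k - 1)
    # Sampling ("noise") variance of a binomial observed rate: p*(1-p)/n
    noise_var = sum(p * (1 - p) / n
                    for p, n in zip(true_rates, n_per_hospital)) / k
    return signal_var / (signal_var + noise_var)

rates = [0.12, 0.14, 0.15, 0.17, 0.20]  # hypothetical hospital rates
print(round(signal_to_noise(rates, [500] * 5), 2))  # → 0.78
```

Because the ratio depends only on the spread of hospital rates and the sampling noise of each rate, computing it does not require that the predicted values come from a single statistical model.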

                       

                      Instead, the developer calculated an intraclass correlation coefficient (ICC) based on a single split sample.  A single split sample is not an adequate way of estimating an ICC (Nieser KJ and Harris AHS, “Split-sample reliability estimation in health care quality measurement: Once is not enough.”  Health Services Research 2024;50:e14310) and an overall ICC does not assess the reliability of the measure for individual hospitals.

                       

                      The most appropriate way to assess the reliability of the measure is to calculate the misclassification probability for individual hospitals, and that was not done.  This measure should not be endorsed without a better assessment of its reliability.

                       

                      Lack of Business Case for Using the Measure and Undesirable Effects of Doing So

                       

                      This measure imposes a significant burden on hospitals to submit the CCDE variables on all of their patients and it makes computing the measure significantly more complicated because of the need to match claims and clinical data.  However, as described above, all of the effort to collect and use the additional data does not result in a more accurate measure. 

                       

                      Moreover, large numbers of patients and hospitals are excluded from this measure.  The majority of hospitals currently do not provide the CCDE data, so none of their patients are included in the measure.  In addition, even at the hospitals that do provide EHR data, the variables necessary to calculate the measure are not provided for all patients, and patients are excluded from the measure calculations if a majority of the EHR variables are not reported for them. 

                       

                      It makes no sense to use a measure that is more burdensome and less accurate than a purely claims-based measure, and so there is no good reason to endorse it.

                       

                      Organization: Center for Healthcare Quality and Payment Reform

                      We thank the commenter for his interest in the Hybrid Hospital-Wide Readmission (HWR) measure. Overall, the HWR measure is a valid and reliable measure that meets all of Battelle’s endorsement criteria. We address each of the reviewer’s comments below, categorized by measure component.

                      Measure Cohort

                      Comment: Patients are only included in the denominator of the measure if a majority of the EHR variables are available for them. Since hospitals are not required to submit EHR data for every patient, failure by a hospital to report data for patients at high risk of readmission could result in calculation of an incorrectly low readmission rate for that hospital.

                      Response: We note that the measure only excludes patients who are missing 7 of the 13 CCDE, because their clinical status on hospital arrival would not be complete. This exclusion removes less than 2 percent of admissions and thus has minimal impact on the denominator.

                      The measure specifications should not be confused with the data targets for data submission and payment, which are an implementation issue. In the 2024 Voluntary Reporting (VR), hospitals were required to submit electronic health record (EHR) data for 90% of discharges, and the linking variable (used to merge EHR to claims data) for 95% of discharges, to receive their Annual Payment Update. Internal data showed that in the 2024 Hybrid VR data, about 70% of hospitals submitted more than 70% of CCDEs matched to claims.

                      We note that this measure is in voluntary reporting (which has been extended, and in which results are not made public), a period intended to allow hospitals to become familiar with the reporting requirements. This is why, throughout the submission, we also provide the measure results from the claims-only version. We note that the CCDE enhances risk adjustment, but adjustment also continues to include claims-based variables and service-mix variables that are common across the claims-only and hybrid versions of this measure. We further note that, for measure calculation during voluntary reporting, the measure accommodates missing data via an imputation approach. Our testing results, together with the results from the claims-based measure used for validation, show that the data submitted for the hybrid version of the measure are sufficient to calculate a statistically reliable and valid measure for hospitals during voluntary reporting. We will monitor the differences between patients with and without CCDE submission to ensure that no selection bias is introduced into the measure.

                       

                      Comment: The measure makes no distinction as to where the patient was discharged. A patient who is discharged to a Skilled Nursing Facility (SNF) could experience an injury or poor care in the SNF and have to be admitted to the hospital for treatment of the resulting problems. However, under this measure, that admission will be treated as a result of poor care of the patient by the hospital, rather than poor care by the SNF. 

                      Response: Inappropriate readmission due to discharge to a low-quality post-acute care facility is a quality signal, and we do not want to adjust for it in the readmission measure. SNF quality is measured separately by readmission measures that attribute the outcome to the SNF. First, adjusting for this factor could inadvertently mask differences in care quality, as discharge to a nursing home may reflect variability in discharge planning, transitions of care, or follow-up processes that impact readmission risk. By including it as an adjustment, there is a risk of “adjusting away” meaningful differences in performance related to post-acute care coordination. Second, adjusting for nursing home discharge might reduce accountability for ensuring safe and effective transitions, as providers could shift responsibility to post-acute settings rather than improving their own discharge processes. In addition, discharge to a nursing home can reflect patient clinical severity, which is already captured through clinical risk adjustment, reducing the need for further adjustment.

                       

                      Measure Outcome

                      Comment: Failure to Include Deaths. The numerator does not include deaths resulting from the initial hospital stay; it only includes hospital admissions.

                      Response: We want to clarify that patients who die during the initial inpatient hospitalization are not included in the measure. We assume that the reviewer is referring to death after discharge following the index admission.

                      There are several reasons why death (after discharge) is not included in the outcome which we outline below.

                      • The measure concepts and underlying logic models of death and readmission are not the same. Mortality reflects the inability of a hospital to manage a patient’s condition or complications upon admission, deliver care in a timely manner, and provide organized care; readmission reflects additional critical aspects of care, such as communication between providers, prevention of and response to complications, patient safety, and coordinated transitions to the outpatient environment. These two types of measures are even evaluated by different projects and committees within the PQM’s E&M process. For this reason, and the additional reasons outlined below, to complement the Hospital-Wide Readmission (HWR) measure, CMS has also developed a Hospital-Wide Mortality (HWM) measure.
                      • The underlying predictive variables for mortality and readmission also differ: for mortality, patient comorbidities and severity of illness are much more predictive than they are for readmission. Including mortality would complicate risk adjustment because the factors influencing mortality differ significantly from those influencing readmission risk.
                      • The underlying mechanisms behind improvement of quality also differ between the two – there are post-discharge procedures and practices tied to improved outcomes for readmission, and there are specific interventions during the hospitalization (for example, the identification of patients in decline) that are tied to mortality. 
                      • There could be unintended consequences of combining mortality and readmission outcomes. If mortality is included in a readmission measure, hospitals may avoid admitting critically ill patients or provide overly aggressive care to prevent death, which may not align with patient-centered goals. Alternatively, hospitals might delay discharge to prevent post-discharge death from being counted in a combined outcome measure, potentially increasing the length of stay unnecessarily. 
                      • Contrary to the results cited by the commenter, research has shown that reductions in hospital 30-day readmission rates were weakly but significantly correlated with reductions in hospital 30-day post-discharge mortality rates, providing evidence that reducing hospital readmissions is not related to increasing post-discharge mortality [1]. We note that in-hospital mortality is already removed from the measure, as the cohort only includes patients who are discharged alive from acute care hospitals. Prior analyses indicate that the competing risk of mortality after discharge is limited and does not substantially change hospital rankings.
                      • Similarly, a study by MedPAC found that mortality rates were generally flat or declining after implementation of HRRP [2].
                      • Finally, there are differences surrounding how CMS holds hospitals accountable for the different outcomes. For example, the condition-specific readmissions are used by one quality program (HRRP) and mortality by a different program (Hospital Value-Based Purchasing Program (HVBP)). We also note that the HWR measure is not in HRRP, but rather in the Inpatient Quality Reporting (IQR) program.

                       

                      Comment: The numerator of the measure includes events that have nothing to do with the original hospitalization, while excluding events that represent undesirable outcomes and avoidable costs, so it cannot be used to assess whether quality care is being delivered or quality is being improved. 

                      Response: The all-cause readmission approach is widely used for several reasons. First, from the patient perspective, readmission for any cause is a key concern. Second, while some readmissions are clearly attributable to the index admission or procedure, such as a deep-wound infection following a surgical procedure, other readmissions, such as dizziness and fainting after discharge, could be due to medication mismanagement, hypoglycemia, or many other peri-discharge complications; these are more difficult to adjudicate but may still be related to the original admission. Finally, while the measure does not presume that each readmission is preventable, interventions have generally shown reductions in all-cause readmission. CMS wishes to incentivize hospitals to implement broad strategies that should improve all-cause readmission, including improving discharge processes and ensuring accurate medication reconciliation, among others. This approach fosters accountability for overall patient management, care coordination, and discharge planning, ultimately driving improvements across the healthcare system and improving patient outcomes.

                       

                      Comment: Research has demonstrated that hospital readmission measures are problematic because of these concerns (see, for example, Gupta A, Fonarow G. “The Hospital Readmissions Reduction Program – Learning from Failure of a Healthcare Policy,” European Journal of Heart Failure 20:1169-1174), and as a result, there have been growing calls to stop using readmission measures to evaluate hospital performance (e.g., Figueroa JF and Wadhera RK. “A Decade of Observing the Hospital Readmission Reductions Program – Time to Retire an Ineffective Policy,” JAMA Network Open 5(11): e2242593).

                      Response: The focus of the HWR measure is inpatient hospital readmissions, not emergency department (ED) visits or observation stays. This is a deliberate focus on the most severe and costly of the post-discharge hospital-based acute care events. CMS does, however, have broader, condition- and procedure-specific measures that capture days in acute care, which include inpatient admissions, ED visits, and observation stays (the Excess Days in Acute Care measures). 

                      We have examined rates of unadjusted readmission, observation, and ED visits for Medicare Advantage (MA) and Medicare Fee-For-Service (FFS) patients and have found they are similar. We ran these analyses in the condition-specific measures, due to data availability considerations. For example, for the Pneumonia Readmission measure, the utilization rates for the different admission dispositions within 30 days of discharge were (for FFS vs. MA patients, in each case): 16.51% vs. 16.31% for (unplanned) inpatient readmissions, 4.39% vs. 5.69% for observation stays, and 12.27% vs. 12.93% for ED visits. As mentioned previously, CMS has additional measures that capture risk-standardized days in acute care after discharge that include these three types of admission. Readmission measures focus on inpatient hospital admissions because they are reserved for the most acute clinical situations and are the costliest.

                       

                      Clinical Risk Adjustment

                      Comment: The use of electronic health record variables appears to reduce rather than improve the effectiveness of the risk adjustment methodology compared to use of variables from claims data alone, and many hospitals and patients are excluded from the measure entirely because the EHR variables are not reported for them. 

                      Response: There is no evidence in the measure submission that the addition of the electronic health record variables reduces the effectiveness of risk adjustment. Risk adjustment is improved following the addition of electronic health record variables. Table 11 in the attachment of tables and figures shows meaningful increases in the c-statistic when comparing the models with and without CCDE (for example, the c-statistic without the CCDE was 0.713 for the cardiovascular cohort, and 0.731 with the addition of CCDE). Numerically small increases in the c-statistic are meaningful when starting from a high baseline, such as the values shown in the first column of Table 11 (0.646-0.800).
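For readers unfamiliar with the statistic under discussion: the c-statistic is the probability that a randomly chosen patient who had the outcome received a higher predicted risk than a randomly chosen patient who did not, with ties counted as one half. A minimal sketch with hypothetical risk scores (not the measure's actual model output):

```python
# Sketch of the c-statistic (concordance): the fraction of
# event/non-event pairs in which the event patient has the higher
# predicted risk. Risk scores and outcomes below are hypothetical.
def c_statistic(risks, outcomes):
    events = [r for r, y in zip(risks, outcomes) if y == 1]
    non_events = [r for r, y in zip(risks, outcomes) if y == 0]
    concordant = 0.0
    for e in events:
        for n in non_events:
            if e > n:
                concordant += 1.0
            elif e == n:
                concordant += 0.5  # ties count as half
    return concordant / (len(events) * len(non_events))

risks = [0.05, 0.10, 0.20, 0.30, 0.40, 0.60]
outcomes = [0, 0, 1, 0, 1, 1]  # 1 = readmitted within 30 days
print(round(c_statistic(risks, outcomes), 3))  # → 0.889
```

On this pairwise-probability scale, 0.5 is chance and 1.0 is perfect discrimination, which is why increases near the top of the 0.646-0.800 range represent a larger share of the remaining attainable discrimination than the raw difference suggests.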

                      Tables 14-15 do not compare models with and without CCDE; they compare data from two groups of hospitals (all hospitals, and hospitals in the voluntary reporting dataset) that are fundamentally different (different patient populations, and selective reporting in the voluntary dataset). Figures 9-10 show good calibration of the model for each cohort, for those two different sets of hospitals. There are no “large gaps” between observed and predicted rates.

                      CCDE odds ratios (ORs) are not expected to be comparable to those of the principal discharge diagnosis categories, as the variables are inherently different, based on different units and functional forms (e.g., spline versus linear). The CCDE variables are on a different scale and are not directly comparable in terms of magnitude.

                      Additionally, the inclusion of CCDE was in response to stakeholder input for the Claims-Based All-Cause Hospital-Wide Readmission measure to account for patients with worse disease severity using electronic health record data.

                       

                      Social Risk Factor Adjustment

Comment: The risk adjustment methodology fails to adjust for a patient’s income or access to outpatient care after discharge, even though these are two of the most important factors affecting readmission and they are outside the control of the hospital that provided inpatient care.

Response: Our testing results show that the addition of social risk variables that capture income, education, and housing does not have a meaningful impact on measure scores. Furthermore, CORE, for CMS, has developed a stratification approach that allows examination of performance for patients with dual eligibility (DE) and low area deprivation index (ADI).

                       

Comment: The measure does not adjust for limited access to outpatient services and inability to afford medications. It is obvious that patients who cannot afford their medications or cannot obtain primary care will be more likely to be admitted to a hospital, both for problems related to the original admission and for unrelated problems.

Response: We agree that it is the role of hospitals, which often have the most community resources, to ensure that patients have adequate post-discharge instruction, e.g., to assess patients’ ability to obtain their medications or medical equipment, schedule follow-up appointments and physical therapy, access home health services, etc.

We do not dispute that patients with access difficulties are likely to have higher outcome rates; in fact, that is what our unadjusted analyses show: patients with DE or high ADI have higher unadjusted unplanned readmission rates. However, clinical risk factors often overlap with social risk factors in their contribution to risk of the outcome, and when we add the clinical variables to the risk model and compare measure scores calculated with and without social risk factors, measure scores are highly correlated. These results suggest that the clinical risk variables in the model account for most of the differences in outcome rates between the two populations. We provide more details below.

The commenter erroneously states that we show differences in outcomes for patients with social risk factors after adjusting for other variables in the model; Table 17 shows unadjusted outcome rates for patients with and without DE/low ADI. When we compare measure scores calculated with and without social risk factors, we find that hospitals’ risk-standardized readmission rates (RSRRs) are highly correlated: the correlation coefficient of hospitals’ RSRRs in the Claims-Based HWR (Medicare FFS and MA) dataset, calculated with and without the high ADI variable, is 0.999 (Figure 11), and the correlation coefficient between measure scores calculated with and without the dual-eligible (DE) variable is 0.995 (Figure 12). The median difference in hospitals’ RSRRs when adding either social risk factor is small (0.001% for high ADI and 0.000% for dual eligibility) for the Claims-Only HWR (Medicare FFS and Medicare Advantage (MA)) dataset (Table 19).
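The kind of comparison described above can be sketched in a few lines of Python. The numbers below are entirely hypothetical, invented for illustration; the sketch simply shows how a correlation coefficient and a median difference are computed between two sets of hospital-level measure scores.

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical RSRRs (%) for five hospitals, computed with and without
# a social risk variable in the risk model
without_srf = [15.1, 14.8, 15.6, 15.0, 14.5]
with_srf    = [15.1, 14.9, 15.6, 15.1, 14.5]

r = pearson_r(without_srf, with_srf)
median_diff = statistics.median(abs(a - b) for a, b in zip(with_srf, without_srf))
```

With scores this similar, `r` lands near 1 and the median difference near zero, which is the pattern the response describes for the real measure (correlations of 0.995-0.999 and median differences at or below 0.001%).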

We are not stating that there is no impact of adjustment on measure scores, but rather that these empirical results show minimal impact. The measure’s use in the Inpatient Quality Reporting Program, a pay-for-reporting (not pay-for-performance) program, and CMS’s desire not to mask disparities support the decision not to adjust the measure for social risk factors.

Rather than adjust for social risk factors, CORE has developed, for CMS, a stratification approach that has been applied to the claims-only version of the measure. CMS confidentially reports to hospitals readmission measures stratified by social risk factors (DE, high ADI) and by race/ethnicity. While the hybrid MA+FFS HWR measure is not yet stratified by social risk factors, testing for potential future stratification is ongoing.

                       

                      Reliability

                      Comment: Insufficient data were provided to assess the reliability of the measure. Use of the measure could harm patients and could harm hospitals that serve a higher proportion of patients who have limited access to care. 

Response: Battelle staff have rated this measure as “Met” on the criterion of reliability. We have provided split-sample reliability results that show the measure score reliability to be above the CBE threshold of 0.6. Split-sample testing is an appropriate and accepted method of determining reliability that Battelle has recommended to developers. Furthermore, our empirical results show that our split-sample approach (for the condition-specific readmission measures) yields results very similar to the Nieser and Harris approach cited by the commenter.
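For readers unfamiliar with the technique, the following is a minimal sketch of split-sample reliability in Python, using simulated hospitals rather than the measure's actual data or risk model: each hospital's patients are randomly split in half, the readmission rate is computed in each half, the halves are correlated across hospitals, and the Spearman-Brown correction adjusts for the halved sample size.

```python
import random
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return cov / var

def split_sample_reliability(hospital_outcomes, seed=0):
    """Randomly halve each hospital's patients, compute the readmission
    rate in each half, correlate the halves across hospitals, and apply
    the Spearman-Brown step-up for the halved sample size."""
    rng = random.Random(seed)
    half_a, half_b = [], []
    for outcomes in hospital_outcomes:
        shuffled = outcomes[:]
        rng.shuffle(shuffled)
        mid = len(shuffled) // 2
        half_a.append(statistics.fmean(shuffled[:mid]))
        half_b.append(statistics.fmean(shuffled[mid:]))
    r = pearson_r(half_a, half_b)
    return 2 * r / (1 + r)  # Spearman-Brown correction

# Simulated hospitals with different underlying readmission rates;
# outcomes are 1 (readmitted within 30 days) or 0 (not)
rng = random.Random(42)
hospitals = [[1 if rng.random() < p else 0 for _ in range(400)]
             for p in (0.10, 0.14, 0.18, 0.22, 0.26, 0.30)]
rel = split_sample_reliability(hospitals)
```

The resulting `rel` is the estimated reliability on a 0-1 scale, which in the real measure is compared against the CBE threshold of 0.6; this sketch does not reproduce the developers' exact implementation, which operates on risk-standardized rates rather than raw rates.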

The HWR measure is a weighted average of five individual models. While we can calculate signal-to-noise reliability for each model separately, we have consulted with external statistical experts who advised that we cannot combine those results into a measure-wide result, as doing so yields an inflated assessment of reliability. Such an analysis would have worked in the measure’s favor, but we did not feel it was appropriate to use the signal-to-noise approach.

                       

                      Burden on Hospitals 

                      Comment: This measure imposes a significant burden on hospitals to submit the CCDE variables on all of their patients and it makes computing the measure significantly more complicated because of the need to match claims and clinical data. However, as described above, all of the effort to collect and use the additional data does not result in a more accurate measure. 

                       Moreover, large numbers of patients and hospitals are excluded from this measure.  The majority of hospitals currently do not provide the CCDE data, so none of their patients are included in the measure. In addition, even at the hospitals that do provide EHR data, the variables necessary to calculate the measure are not provided for all patients, and patients are excluded from the measure calculations if a majority of the EHR variables are not reported for them. 

                       It makes no sense to use a measure that is more burdensome and less accurate than a purely claims-based measure, and so there is no good reason to endorse it.

                      Response:

Development of a hospital-wide readmission measure that includes electronic health record (EHR) data addressed stakeholder preference for the use of patient-level clinical data in assessing hospital performance: the measure draws on both administrative claims and clinical data elements pulled from the EHR for risk adjustment.

The measure is currently in voluntary reporting, so we do not expect all hospitals to be participating or reaching their full potential for CCDE submission. As mandatory reporting approaches, these metrics will improve. For example, we have seen a large increase in the proportion of hospitals submitting data when comparing the 2025 Voluntary Reporting period to the 2024 Voluntary Reporting period: 1,162 (~25%) hospitals participated in 2024 Voluntary Reporting, and for 2025 reporting, participation almost tripled to ~3,250 hospitals (~70%) following the announcement in the CY 2025 OPPS Final Rule [3] of additional years of voluntary CCDE submission. In addition, the majority of participating hospitals continue to submit >70% of CCDEs that can be matched to claims. We note that the ability to meet reporting thresholds, which are established by CMS for payment purposes, does not affect a hospital’s ability to receive a measure score. Any hospital participating in IQR with a minimum case threshold of 25 patients in any specialty cohort may receive a Hybrid HWR measure score.

                       

                      References:

                      1. Dharmarajan, K., Wang, Y., Lin, Z., Normand, S. T., Ross, J. S., Horwitz, L. I., Desai, N. R., Suter, L. G., Drye, E. E., Bernheim, S. M., & Krumholz, H. M. (2017). Association of Changing Hospital Readmission Rates With Mortality Rates After Hospital Discharge. JAMA, 318(3), 270–278. https://doi.org/10.1001/jama.2017.8444

                      2. Update: MedPAC’s evaluation of Medicare’s Hospital Readmission Reduction Program, 2019. https://www.medpac.gov/update-medpac-s-evaluation-of-medicare-s-hospital-readmission-reduction-program/#:~:text=Our%20updated%20analysis%20of%20hospital,mortality%20is%20difficult%20to%20quantify

                      3. CY 2025 Medicare Hospital Outpatient Prospective Payment System and Ambulatory Surgical Center Payment System Final Rule (CMS 1809-FC) | CMS. (2024, November 1). https://www.cms.gov/newsroom/fact-sheets/cy-2025-medicare-hospital-outpatient-prospective-payment-system-and-ambulatory-surgical-center-0


                       

                       

                      Organization
                      Yale/CORE

                      Submitted by Koryn Rubin (not verified) on Tue, 12/10/2024 - 15:03

The American Medical Association (AMA) has concerns regarding validity of this measure and its intended use for accountability purposes. As noted in the final rule for the Inpatient Prospective Payment System, many hospitals alerted the Centers for Medicare & Medicaid Services (CMS) to the challenges with data collection and submission of measures that leveraged data from electronic health record systems (EHRs).[1] Specifically, hospitals identified discrepancies in the data related to the timing of vital signs, patient body weight, and various laboratory tests. The current measure specifications do not align with current workflows, and we believe that this measure requires additional work to ensure that the data used is reliable and valid.

                       

                      [1] https://www.federalregister.gov/d/2024-17021

                      Organization
                      American Medical Association

                      We thank the AMA for their comment.

                       

                      CMS and developers have been made aware of implementation challenges faced by hospitals from 2024 Voluntary Reporting and have updated measure specifications and program requirements in response to this feedback.

                       

Hospitals provided feedback regarding submission of linking variables used to merge claims to electronic health records (EHR) data, as well as IQR threshold requirements (which are not used toward measure calculation). Specifically, hospitals expressed concern about reaching IQR program requirements, in which hospitals must submit CCDE (within 24 hours before or up to 24 hours after inpatient admission for labs; within 24 hours before or up to 2 hours after inpatient admission for vital signs) for 90% of discharges and linking variables (used to merge EHR to claims data) for 95% of discharges in order to receive their Annual Payment Update. These comments were heard by CMS and the measure developer, and the proposal for the submission of CCDE to remain voluntary for 2025 reporting was finalized as a rider to the Outpatient Prospective Payment System Final Rule.¹

Additionally, an update to expand the CCDE lookback period from the 24 hours prior to/after inpatient admission to the start of the hospital encounter is being finalized through the 2025 Annual Updates Cycle. Measure specifications were updated for discharges July 1, 2023, through June 30, 2024, to accommodate collection of weight reported as the first during the hospital encounter. The period for discharges July 1, 2026, through June 30, 2027, will be expanded in the same manner. Stakeholders have noted that ED and observation stays have changed in the past several years, with longer ED and observation visit lengths of stay, making it more difficult to submit CCDE within the 24-hour window. By increasing the window of time from which CCDE can be extracted, hospitals are likely to report CCDE for a higher percentage of discharges, increasing their ability to meet the IQR submission requirements.

                       

Regarding the validity of the Hybrid HWR measure, the combination of face validity, data element, and measure score validity testing, in which the correlation between the standardized rates from the claims-only risk-adjustment model and the Hybrid risk-adjustment model among hospitals is 0.990, supports the validity of this measure. Measure score validity testing between the Hybrid HWR RSRR and related HCAHPS and Star Ratings scores shows statistically significant, moderate negative correlations, as expected, validating that the measure score correlates with other metrics related to readmission. Taken together, these results support the validity of the Hybrid HWR measure.

                      Organization
                      Yale/CORE

                      Submitted by Joshua Lapps (not verified) on Fri, 12/13/2024 - 16:51

The Society of Hospital Medicine (SHM) has consistently raised concern with the 30-day readmission measure window as being too long to provide actionable and meaningful feedback to hospitals and providers. Research from 2018 (Graham, KL, et al. Preventability of Early Versus Late Hospital Readmissions in a National Cohort of General Medicine Patients, Annals of Internal Medicine, May 2018) showed that early readmissions were more likely to be preventable and affected by hospital-based interventions than later readmissions. Other research from 2016 (Chin, DL, et al. Rethinking Thirty-Day Hospital Readmissions: Shorter Intervals Might be Better Indicators of Quality of Care, October 2016) suggested that a shorter measure window may better reflect accuracy and equity of readmissions metrics, and give a stronger signal on the quality of in-hospital interventions. We strongly support efforts to reassess the utility and acceptability of a 30-day window for readmissions and would advocate for a shorter window such as 7 days. We believe a shorter window will better target the assessment of hospital interventions at preventing readmissions, and provide more actionable information for hospitals and the providers who work there.

                       

                      SHM has also raised concern in the past over the double counting of patients across similar measures. In this case, the inclusion of all-cause readmissions overlaps significantly with other existing condition-specific readmissions that also assess all-cause readmissions. This issue is particularly salient given these related measures exist together in federal programs that affect payments. 

                      Organization
                      Society of Hospital Medicine
                      First Name
                      Afrida
                      Last Name
                      Faria

                      Submitted by Yale-CORE CBE on Mon, 12/23/2024 - 19:28

                      In reply to by Joshua Lapps (not verified)

                      We thank the commenter for their input and address their questions below.

                       

                      30-day outcome:

                      We chose 30 days because it is a clinically meaningful timeframe for hospitals, in collaboration with their medical communities, to take actions to reduce readmissions, such as: ensure patients are clinically ready at discharge; reduce risk of infection; reconcile medications; improve communication among providers in transitions of care; and encourage strategies that promote disease management principles and educate patients on what symptoms to monitor, whom to contact with questions, and where and when to seek follow-up care. The measure concept of a 30-day outcome was deemed appropriate by a Technical Expert Panel.

                       

                      Overlapping measures

While the Hospital-Wide Readmission measure captures some of the same admissions that are also captured within the condition- and procedure-specific readmission measures, these measures are not used in the same programs. The condition- and procedure-specific measures are used within the Hospital Readmission Reduction Program (HRRP), which is a pay-for-performance program, and the HWR measure is used within the Hospital Inpatient Quality Reporting (IQR) Program, which is a pay-for-reporting program. In addition, the HWR measure captures many other diagnoses that are currently not captured by any other readmission measures.

                      Organization
                      Yale/CORE

                      Submitted by Tilithia McBride (not verified) on Mon, 12/16/2024 - 17:47

While the Federation of American Hospitals (FAH) agrees with the potential for this measure to support quality improvement efforts, we have several concerns regarding the measure and its intended use for accountability purposes. The FAH and our members alerted the Centers for Medicare & Medicaid Services (CMS) to the challenges with data collection and submission of measures that leveraged data from electronic health record systems (EHRs). Specifically, hospitals identified deficits in the data related to the timing of vital signs, patient body weight, and various laboratory tests as discussed in this measure submission. Specific examples were outlined in FAH’s comments here (https://assets.fah.org/uploads/2024/06/FAH-2025-IPPS-LTCH-letter-6-10-24.pdf). Based on analyses of what appeared to be missing data, our members found that most patients did receive the necessary assessment and lab values. However, work on the measure specification is needed to ensure that they are aligned with clinical workflows. While the FAH appreciates that CMS delayed moving from voluntary reporting in the Hospital Inpatient Quality Reporting Program, we believe that this measure requires additional work to align the specifications with clinical workflows and ensure that the data used is reliable and valid.

                      Organization
                      Federation of American Hospitals

                      We thank the commenter for their input. 

                       

                      We appreciate the efforts of hospitals to submit collected CCDE on their admissions so that the measure can better reflect, using electronic health record (EHR) data as stakeholders requested, the severity of patient illness on admission.

                       

                      These comments were heard by CMS and the measure developer, and the proposal for the submission of CCDE to remain voluntary for 2025 reporting was finalized as a rider to the Outpatient Prospective Payment System Final Rule [1]. Additionally, to address clinical workflow, an update to expand the CCDE lookback period from the 24 hours prior to/after inpatient admission to the start of the hospital encounter is being finalized through the 2025 Annual Updates Cycle.

                       

                      The measure does not aim to impact clinical workflow. CCDE were selected by stakeholders on a Technical Expert Panel (TEP) as routinely collected for all adult inpatients based on already-established standards of care. However, in terms of data submission, EHR data are submitted through the HQR portal via QRDA files, well after patient care, and by an authorized/trained measure submission expert, using automated measure logic specifications. The use of electronic and claims data is meant to reduce burden on hospitals, where possible.

                       

                      References:

                      1. CY 2025 Medicare Hospital Outpatient Prospective Payment System and Ambulatory Surgical Center Payment System Final Rule (CMS 1809-FC) | CMS. (2024, November 1). https://www.cms.gov/newsroom/fact-sheets/cy-2025-medicare-hospital-outpatient-prospective-payment-system-and-ambulatory-surgical-center-0
                      Organization
                      Yale/CORE