
Hospital 30-Day Risk-Standardized Readmission Rates following Percutaneous Coronary Intervention (PCI)

CBE ID
0695
Endorsed
New or Maintenance
Is Under Review
No
Measure Description

This measure estimates a hospital-level risk-standardized readmission rate (RSRR) following PCI for Medicare Fee-for-Service (FFS) patients who are 65 years of age or older. The outcome is defined as unplanned readmission for any cause within 30 days following the hospital stay. The measure includes both patients who are admitted to the hospital (inpatients) for their PCI and patients who undergo PCI without being admitted (outpatient or observation stay). A specified set of planned readmissions does not count as readmissions. The measure uses clinical data available in the National Cardiovascular Data Registry (NCDR) CathPCI Registry for risk adjustment and Medicare claims to identify readmissions. Additionally, the measure uses direct patient identifiers, including Social Security Number (SSN) and date of birth, to link the datasets.

  • Measure Type
    Composite Measure
    Yes
    Electronic Clinical Quality Measure (eCQM)
    Level Of Analysis
    Care Setting
    Measure Rationale

    Not applicable

    MAT output not attached
    Attached
    Data dictionary not attached
    Yes
    Numerator

    The outcome for this measure is 30-day all-cause readmission. We define readmission as an acute care inpatient hospital admission for any cause, with the exception of certain planned readmissions, within 30 days from the discharge date of the index PCI hospitalization or PCI outpatient claim end date (hereafter referred to as discharge). If a patient has more than one unplanned admission within 30 days of discharge from the index admission, only the first one is counted as a readmission. The measure looks for a dichotomous yes or no outcome of whether each admitted patient has an unplanned readmission within 30 days. However, if the first readmission after discharge is considered planned, then no readmission is counted, regardless of whether a subsequent unplanned readmission takes place. We use this approach because it would potentially be unfair to attribute an unplanned readmission that follows a planned readmission back to the care received during the initial index admission.
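    To make the outcome logic above concrete, the following sketch determines the dichotomous 30-day outcome for one index discharge. It is a minimal illustration, not the production measure code; the Admission structure and the pre-computed planned/unplanned flag are assumptions for this example.

```python
from datetime import date, timedelta
from typing import NamedTuple

class Admission(NamedTuple):
    admit_date: date
    planned: bool  # assumed pre-computed by the Planned Readmission Algorithm

def thirty_day_readmission(discharge_date: date,
                           later_admissions: list[Admission]) -> int:
    """Return 1 if the index stay has an unplanned readmission within 30 days
    of discharge, else 0. Only the first admission in the window is examined;
    if it is planned, no readmission is counted even if an unplanned admission
    follows later in the window."""
    window_end = discharge_date + timedelta(days=30)
    in_window = sorted(
        (a for a in later_admissions if discharge_date < a.admit_date <= window_end),
        key=lambda a: a.admit_date,
    )
    if not in_window:
        return 0
    return 0 if in_window[0].planned else 1
```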

    Numerator Details

    The measure counts readmissions to any acute care hospital for any cause within 30 days of PCI discharge, excluding planned readmissions as defined below.  
     
    Planned Readmission Algorithm:  
    The Planned Readmission Algorithm is a set of criteria for classifying readmissions as planned among the general Medicare population using Medicare administrative claims data. The algorithm identifies admissions that are typically planned and may occur within 30 days of discharge from the hospital.  
     
    The Planned Readmission Algorithm has three fundamental principles:  
    1. A few specific, limited types of care are always considered planned (obstetric delivery, transplant surgery, maintenance chemotherapy/radiotherapy/immunotherapy, rehabilitation);  
    2. Otherwise, a planned readmission is defined as a non-acute readmission for a scheduled procedure; and  
    3. Admissions for acute illness or for complications of care are never planned.  
     
    The algorithm was developed in 2011 as part of the Hospital-Wide Readmission measure. In 2013, Centers for Medicare & Medicaid Services (CMS) applied the algorithm to its other readmission measures. NQF reviewed and endorsed the planned readmission algorithm as applied to the AMI readmission measure during an Ad Hoc review completed in January 2013. The Planned Readmission Algorithm replaced the definition of planned readmissions in the original PCI measure because the algorithm uses a more comprehensive definition. In applying the algorithm to condition- and procedure-specific measures, teams of clinical experts reviewed the algorithm in the context of each measure-specific patient cohort and, where clinically indicated, adapted the content of the algorithm to better reflect the likely clinical experience of each measure’s patient cohort. For the AMI readmission measure, CMS used the Planned Readmission Algorithm without making any changes.  
     
    Customization for PCI Readmission Measure:  
    Yale New Haven Health Service Corporation Center for Outcomes Research and Evaluation (YNHHSC/CORE) updated the approach to identifying planned readmissions in the PCI readmission measure by replacing the original NQF-endorsed approach, which only identified revascularization procedures as planned, with a more comprehensive planned readmission algorithm. The revised approach uses a modified version of the Planned Readmission Algorithm Version 2.1 – General Population that has been customized for the PCI patient population. The approach takes into account differences in the likelihood that a procedure is planned depending on whether a coronary stent was implanted during the index PCI procedure.  
     
    A working group of YNHHSC/CORE cardiologists and clinicians that developed the Planned Readmission Algorithm reviewed the list of potentially planned procedures in the context of the PCI population. Patients who receive a stent during their PCI require at least four weeks of therapy with aspirin and a platelet inhibitor. During that time period, it is unusual to perform procedures that would require interruption of dual antiplatelet therapy. In contrast, if no stent is deployed, dual antiplatelet therapy is not required, and patients are more likely to undergo planned surgical procedures. Given these considerations, the working group developed different sets of potentially planned procedures for patients with and without stent implantation.  
     
    For all readmissions, the measure first identifies readmissions for procedures that are always considered planned (e.g., chemotherapy or organ transplantation). In the next step, the approach differs depending on whether or not a stent was deployed during the index PCI procedure. If a stent was deployed, the algorithm uses a smaller set of potentially planned procedures than if a stent was not deployed. All potentially planned procedures identified in both patient populations are then checked for an accompanying primary discharge diagnosis that would more likely than not reflect an acute condition or complication of care.
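    The stent-dependent logic described above can be summarized in a short Python sketch. The code sets below are placeholders only; the actual procedure and diagnosis lists are defined in the algorithm's code tables, not here.

```python
# Placeholder code sets -- NOT the measure's actual procedure/diagnosis lists.
ALWAYS_PLANNED = {"maintenance_chemotherapy", "organ_transplant", "rehabilitation"}
POTENTIALLY_PLANNED_WITH_STENT = {"proc_A"}          # smaller list if a stent was deployed
POTENTIALLY_PLANNED_WITHOUT_STENT = {"proc_A", "proc_B", "proc_C"}
ACUTE_OR_COMPLICATION_DX = {"dx_acute_1", "dx_acute_2"}

def is_planned_readmission(procedures: set[str],
                           principal_dx: str,
                           index_stent_deployed: bool) -> bool:
    # Step 1: certain admission types are always considered planned.
    if procedures & ALWAYS_PLANNED:
        return True
    # Step 2: the potentially planned procedure list depends on whether a stent
    # was implanted during the index PCI.
    potentially_planned = (POTENTIALLY_PLANNED_WITH_STENT if index_stent_deployed
                           else POTENTIALLY_PLANNED_WITHOUT_STENT)
    if not (procedures & potentially_planned):
        return False
    # Step 3: a potentially planned procedure does not count as planned when the
    # principal discharge diagnosis reflects an acute condition or complication.
    return principal_dx not in ACUTE_OR_COMPLICATION_DX
```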
     
    Analyzing Medicare Fee-for-Service data from July 2008 to June 2011, the crude 30-day readmission rate under the revised methodology was 11.8%, a decrease of 0.5 percentage points from the 12.3% observed using the original planned readmission methodology.  
     

    Denominator

    The target population for this measure includes hospital stays for patients who are 65 years of age or older, who receive a PCI, and who have matching records in the CathPCI Registry and Medicare claims.

    Denominator Details

    This outcome measure does not have a traditional numerator and denominator like a core process measure (e.g., percentage of adult patients with diabetes aged 18-75 years receiving one or more hemoglobin A1c tests per year); thus, we use this field to define the measure cohort.  
     
    The time window can be specified for two years. The index cohort includes hospital stays for patients aged 65 or older who receive a PCI and who have matching records in the CathPCI Registry and Medicare claims.  
     
    In the CathPCI Registry, eligible admissions are identified in the data collection form when PCI=yes.  
     
    In the Medicare claims, the patient cohort is defined by the presence of one or more qualifying ICD-10-CM procedure codes or Current Procedural Terminology (CPT) codes.  
     
    CPT codes:  
    92973 Percutaneous transluminal coronary thrombectomy  
    92980 Coronary Stents (single vessel)  
    92981 Coronary Stents (each additional vessel)  
    92982 Coronary Balloon Angioplasty (single vessel)  
    92984 Coronary Balloon Angioplasty (each additional vessel)  
    92995 Percutaneous Atherectomy (single vessel)  
    92996 Percutaneous Atherectomy (each additional vessel)
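    As an illustration of how the CPT codes above might be used to flag PCI claims in an analytic file, a minimal sketch is shown below; the DataFrame layout and the cpt_code column name are assumptions, not the measure's actual field names.

```python
import pandas as pd

# CPT codes listed above that identify a PCI.
PCI_CPT_CODES = {"92973", "92980", "92981", "92982", "92984", "92995", "92996"}

def flag_pci_claims(claims: pd.DataFrame, cpt_col: str = "cpt_code") -> pd.DataFrame:
    """Return the claim lines whose CPT code identifies a PCI procedure."""
    return claims[claims[cpt_col].astype(str).isin(PCI_CPT_CODES)]
```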

    Denominator Exclusions

    The following exclusions were applied to data during the merging of NCDR CathPCI and Medicare datasets:  
    1. Patients younger than 65 years of age.  
    Rationale: Patients younger than 65 in the Medicare dataset represent a distinct population that qualifies for Medicare due to disability. The characteristics and outcomes of these patients may be less representative of the larger population of PCI patients. Additionally, patients younger than 65 in the NCDR CathPCI dataset will not have corresponding data in the Medicare claims dataset to obtain the readmission outcome.  
     
    2. Patient stays with duplicate fields (NCDR CathPCI and Medicare datasets).  
    Rationale: Two or more patient stays that have identical information for SSN, admission date, discharge date, and hospital Medicare Provider Number (MPN) are excluded to avoid matching errors when merging the two datasets.  
     
    3. Unmatched patient stays.  
    Rationale: The measure requires information from both the CathPCI Registry and corresponding Medicare claims data. Accordingly, the measure cannot be applied to patient stays that are not matched in both datasets.  
     
    Exclusions applied to the linked dataset:  
    1. Patients not enrolled in Medicare FFS at the start of the episode of care.  
    Rationale: Readmission data are currently available only for Medicare FFS patients.  
     
    2. Not the first claim in the same claim bundle.  
    Rationale: Multiple claims from an individual hospital can be bundled together. To ensure that the selected PCI is the index PCI, we exclude those PCI procedures that were not the first claim in a specific bundle. Inclusion of additional claims could lead to double counting of an index PCI procedure.  
     
    3. Instances when PCI is performed more than 10 days following admission.  
    Rationale: Patients who undergo PCI late into their hospitalization represent an unusual clinical situation in which it is less likely that the care delivered at the time of or following the PCI would be reasonably assumed to be associated with subsequent risk of readmission.  
     
    4. Transfers out.  
    Rationale: Patient stays in which the patient received a PCI and was then transferred to another hospital are excluded because the hospital that performed the PCI procedure does not provide the discharge care and cannot fairly be held responsible for the patient's outcomes following discharge.  
     
    5. In-hospital deaths (the patient dies in the hospital).  
    Rationale: Subsequent admissions (readmissions) are not possible.  
     
    6. Discharges Against Medical Advice (AMA).  
    Rationale: Physicians and hospitals do not have the opportunity to deliver the highest quality care.  
     
    7. PCI in which 30-day follow-up is not available.  
    Rationale: Patients who are not enrolled for 30 days in fee-for-service Medicare following their hospital stay are excluded because there is not adequate follow-up data to assess readmissions.  
     
    8. Admissions with a PCI occurring within 30-days of a prior PCI already included in the cohort.  
    Rationale: We do not want to count the same admission as both an index admission and an outcome. 

    Denominator Exclusions Details

    Exclusions applied to data during the merging of NCDR CathPCI and Medicare datasets:  
    1. Patients younger than 65 years of age are identified through the date of birth and the date of admission in both the Medicare claims data and CathPCI data.  
     
    2. Patient stays with duplicate fields (NCDR CathPCI and CMS datasets) are identified through the linking fields in the matching process.  
     
    3. Unmatched patient stays are identified during the matching process.  
     
    Exclusions applied to the linked dataset:  
    1. Patients not enrolled in Medicare FFS at the start of the episode of care are identified through the indicator carried over from the Medicare claims data.  
     
    2. Claims that are not the first claim in the same claim bundle are identified by an indicator carried over from the Medicare claims when a patient is admitted within one day of the discharge date to the same hospital with the same diagnosis code and in the same PCI procedure group.  
     
    3. Instances when PCI is performed more than 10 days following admission are identified through the admission date and the procedure date carried over from the CathPCI data.  
     
    4. Transfers out to other acute care facilities are identified by indicators carried over from the Medicare claims when a patient with a qualifying admission is discharged from an acute care hospital and admitted to another acute care hospital on the same day or next day.  
     
    5. In-hospital deaths are identified using the discharge disposition vital status indicator carried over from the Medicare claims data.  
     
    6. Discharges AMA are identified using the discharge disposition indicator carried over from the Medicare claims data.  
     
    7. PCI in which 30-day follow-up is not available is identified by patient enrollment status in the CMS’ Enrollment Database (EDB).  
     
    8. Admissions with a PCI occurring within 30 days of a prior PCI already included in the cohort are identified by comparing the discharge date from the index admission with the readmission date for PCI using indicators carried over from the CathPCI data. 
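    A minimal sketch of how several of the exclusions above might be applied to a linked CathPCI/Medicare analytic file is shown below. All column names are assumptions for illustration; the actual linked file layout and indicator definitions are specified elsewhere.

```python
import pandas as pd

def apply_selected_exclusions(linked: pd.DataFrame) -> pd.DataFrame:
    """Illustrative application of a subset of the exclusions described above."""
    df = linked.copy()

    # Duplicate patient stays: identical SSN, admission date, discharge date,
    # and hospital Medicare Provider Number (MPN).
    df = df.drop_duplicates(subset=["ssn", "admit_date", "discharge_date", "mpn"])

    # PCI performed more than 10 days after admission.
    days_to_pci = (df["procedure_date"] - df["admit_date"]).dt.days
    df = df[days_to_pci <= 10]

    # Transfers out: discharged and admitted to another acute care hospital the
    # same day or the next day (a date-derived indicator is assumed here).
    df = df[~df["transfer_out_flag"]]

    # No 30-day Medicare FFS enrollment after discharge.
    df = df[df["ffs_enrolled_30d_post_discharge"]]

    return df
```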

    Type of Score
    Measure Score Interpretation
    Better quality = Lower score
    Calculation of Measure Score

    The measure employs a hierarchical logistic regression model to create a hospital-level 30-day RSRR. In brief, the approach simultaneously models two levels (patient and hospital) to account for the variance in patient outcomes within and between hospitals (Normand & Shahian, 2007). At the patient level, the model adjusts the log-odds of readmission within 30 days of discharge for age, sex, and selected clinical covariates. The second level models the hospital-specific intercepts as arising from a normal distribution. The hospital intercept represents the underlying risk of readmission at the hospital, after accounting for patient risk. The hospital-specific intercepts are given a distribution in order to account for the clustering (non-independence) of patients within the same hospital. If there were no differences among hospitals, then after adjusting for patient risk, the hospital intercepts should be identical across all hospitals.  
     
    The RSRR is calculated as the ratio of the number of “predicted” to the number of “expected” readmissions, multiplied by the national unadjusted readmission rate. For each hospital, the numerator of the ratio (“predicted”) is the number of readmissions within 30 days predicted on the basis of the hospital’s performance with its observed case mix, and the denominator (“expected”) is the number of readmissions expected on the basis of the nation’s performance with that hospital’s case mix. This approach is analogous to a ratio of “observed” to “expected” used in other types of statistical analyses. It conceptually allows for a comparison of a particular hospital’s performance given its case mix to an average hospital’s performance with the same case mix. Thus, a lower ratio indicates lower-than-expected readmission or better quality and a higher ratio indicates higher-than-expected readmission or worse quality.  
     
    The predicted hospital outcome (the numerator) is the sum of the predicted probabilities of readmission for all patients at a particular hospital. The predicted probability of each patient in that hospital is calculated using the hospital-specific intercept and patient risk factors. The expected number of readmissions (the denominator) is the sum of the expected probabilities of readmission for all patients at a hospital. The expected probability of each patient in a hospital is calculated using a common intercept and patient risk factors.  
     
    Reference:  
    Normand S-LT, Shahian DM. 2007. Statistical and Clinical Aspects of Hospital Outcomes Profiling. Stat Sci 22(2): 206-226. 
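    The predicted/expected calculation described above can be sketched as follows, assuming the hierarchical model has already been fit and each patient's risk-factor linear predictor, hospital-specific intercept, and the common intercept are available. Column names are assumptions for illustration.

```python
import numpy as np
import pandas as pd

def hospital_rsrr(patients: pd.DataFrame, national_rate: float) -> pd.Series:
    """RSRR = national unadjusted rate x (predicted / expected) per hospital.

    Assumed columns: 'hospital_id', 'xb' (patient risk-factor linear predictor),
    'hospital_intercept' (hospital-specific intercept from the fitted model),
    and 'common_intercept'."""
    def expit(z):
        return 1.0 / (1.0 + np.exp(-z))

    df = patients.copy()
    # "Predicted": probability using the hospital-specific intercept.
    df["p_predicted"] = expit(df["hospital_intercept"] + df["xb"])
    # "Expected": probability using the common intercept.
    df["p_expected"] = expit(df["common_intercept"] + df["xb"])

    sums = df.groupby("hospital_id")[["p_predicted", "p_expected"]].sum()
    return national_rate * sums["p_predicted"] / sums["p_expected"]
```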

    Measure Stratification Details

     

    Results of this measure will not be stratified. 

    All information required to stratify the measure results
    Off
    All information required to stratify the measure results
    Off
    Testing Data Sources
    Data Sources

    This measure relies on claims data. As of Fall 2023, claims data use is restricted and unavailable to support performance measures. Legislation to change this has been introduced.  

     

    We used the following data sources for initial model development:  
    1) Medicare Part A data  

    IMPORTANT NOTE: ACC is not currently able to use this data source as Medicare claims are not currently available for performance measure reporting. This has limited our ability to update and report this measure.  

     
    Part A data refers to claims paid for Medicare inpatient hospital care, outpatient services, skilled nursing facility care, some home health agency services, and hospice care. For this measure, we used Part A data to identify patient stays with a PCI performed either as an inpatient admission or outpatient service. For model development, we used 2007 Medicare Part A data to match patient stays associated with a PCI with comparable data from the CathPCI Registry. For validation, we used 2006 Medicare Part A data to match patient stays with a PCI performed with the corresponding 2006 data from the CathPCI Registry.  
     
    2) Medicare Enrollment Database  
    This database contains Medicare beneficiary demographic, benefit/coverage, and vital status information. This dataset was used to obtain information on several inclusion/exclusion indicators such as Medicare status on admission as well as vital status. These data have previously been shown to accurately reflect patient vital status (Fleming et al., 1992).  
     
    3) NCDR CathPCI Registry  
    The CathPCI Registry is the largest voluntary cardiovascular data registry in the United States. The registry captures detailed information about patients at least 18 years of age undergoing cardiac catheterization and PCI. Information collected by the registry includes demographics, comorbid conditions, cardiac status, and coronary anatomy. Hospitals that join the CathPCI Registry agree to submit data for 100% of patients undergoing cardiac catheterization and PCI procedures. These data are collected by hospitals and submitted electronically on a quarterly basis to NCDR.  

    Reference:  
    Fleming C, Fisher ES, Chang CH, Bubolz TA, Malenka DJ. Studying outcomes and hospital utilization in the elderly: The advantages of a merged data base for Medicare and Veterans Affairs hospitals. Medical Care. 1992;30(5):377-91.

    Minimum Sample Size

    This measure requires a minimum sample of 25 patients per facility.

  • Evidence of Measure Importance

    Numerous studies have demonstrated that differences in both PCI technique and subsequent hospital care affect patient outcomes following PCI. For example, the choice of procedural anticoagulation has been shown to affect both immediate and midterm outcomes following PCI (Giugliano 2005, Lincoff 2004). Similarly, a number of studies have demonstrated that appropriate device choice (such as intracoronary stents and thrombectomy) can improve patient outcomes. Finally, prior research has suggested that patients treated at hospitals with active PCI quality improvement programs have better outcomes than patients treated at hospitals that do not have these processes in place (Moscucci 2006). 

     

    Research has also shown that readmission rates for many conditions and procedures are influenced by the quality of inpatient and outpatient care, as well as hospital system characteristics, such as bed capacity of the local health care system (Fisher 1994). In addition, specific hospital processes such as discharge planning, medication reconciliation, and coordination of outpatient care have been shown to positively affect readmission rates (Nelson 2000). Post-discharge follow-up and management during the transition to home also contribute to reducing readmissions (Mols 2019, Wu 2019). A recent meta-analysis of 39 studies recommended several interventions that could be considered to reduce 30-day readmissions, including discharge checklists and ensuring compliance with medications during follow-up (Kwok 2020).  

     

    References:  

    Fisher ES, Wennberg JE, Stukel TA, Sharp SM. Hospital readmission rates for cohorts of Medicare beneficiaries in Boston and New Haven. N Engl J Med. 1994;331(15):989-995. 

     

    R.P. Giugliano, L.K. Newby and R.A. Harrington et al., The early glycoprotein IIb-IIIa inhibition in non-ST-segment elevation acute coronary syndrome (EARLY ACS) trial: a randomized placebo-controlled trial evaluating the clinical benefits of early front-loaded eptifibatide in the treatment of patients with non-ST-segment elevation acute coronary syndrome—study design and rationale, Am Heart J 149 (2005), pp. 994–1002. 

     

    Kwok CS, Narain A, Pacha HM, et al. Readmissions to Hospital After Percutaneous Coronary Intervention: A Systematic Review and Meta-Analysis of Factors Associated with Readmissions. Cardiovasc Revasc Med. 2020;21(3):375-391. doi:10.1016/j.carrev.2019.05.016 

     

    Lincoff AM, Kleiman NS, Kereiakes DJ, Feit F, Bittl JA, Jackman JD, Sarembock IJ, Cohen DJ, Spriggs D, Ebrahimi R, et al. (2004) Long-term efficacy of bivalirudin and provisional glycoprotein IIb/IIIa blockade vs heparin and planned glycoprotein IIb/IIIa blockade during percutaneous coronary revascularization: REPLACE-2 randomized trial. J Am Med Assoc 292: 696-703. 

     

    Mols RE, Hald M, Vistisen HS, Lomborg K, Maeng M. Nurse-led Motivational Telephone Follow-up After Same-day Percutaneous Coronary Intervention Reduces Readmission and Contacts to General Practice. J Cardiovasc Nurs. 2019;34(3):222-230. doi:10.1097/JCN.0000000000000566 

     

    Moscucci M, Rogers EK, Montoye C; et al. Association of a continuous quality improvement initiative with practice and outcome variations of contemporary percutaneous coronary interventions. Circulation. 2006;113(6):814-822. 

     

    Nelson EA, Maruish ME, Axler JL. Effects of discharge planning and compliance with outpatient appointments on readmission rates. Psychiatr Serv. 2000;51(7):885-889. 

     

    Wu Q, Zhang D, Zhao Q, et al. Effects of transitional health management on adherence and prognosis in elderly patients with acute myocardial infarction in percutaneous coronary intervention: A cluster randomized controlled trial. PLoS One. 2019;14(5):e0217535. Published 2019 May 31. doi:10.1371/journal.pone.0217535 

    Table 1. Performance Scores by Decile
    Performance Gap
    Overall Minimum Decile_1 Decile_2 Decile_3 Decile_4 Decile_5 Decile_6 Decile_7 Decile_8 Decile_9 Decile_10 Maximum
    Mean Performance Score 6.2%/5.5% 7.2%/7.0% 8.2%/8.1% 9.3%/9.5% 10.6%/11.1% 12.2%/12.8% 14.3%/14.8% 17.6%/18.6% 27.1%/26.1%
    N of Entities 1197 1197 1197 1197 1197 1197 1197 1197 1197
    N of Persons / Encounters / Episodes 27751 27751 27751 27751 27751 27751 27751 27751 27751
    Meaningfulness to Target Population

    This measure was developed with input from a technical expert panel that includes patient and caregiver representation. Generally, patients indicate that outcomes such as readmission rates in the 30 days following a procedure are useful for decision-making purposes and we believe that this measure would be found meaningful by them.  

    • Feasibility Assessment

      Not applicable during the Fall 2023 cycle.

      Feasibility Informed Final Measure

      The data elements required to generate this measure are coded by an individual other than the person obtaining the original information (e.g., DRG and ICD-10 codes on claims) or abstracted from a record by someone other than the person obtaining the original information (e.g., chart abstraction for a quality measure or registry). All data elements are available in defined fields in electronic clinical data (e.g., a clinical registry). This measure uses clinical data from the NCDR CathPCI Registry for risk adjustment, and those data are linked to CMS administrative claims data to identify readmissions.  

       

      As noted, the PCI readmission measure was successfully implemented with voluntary public reporting for hospitals that participate in the NCDR CathPCI Registry. In March 2013, all NCDR CathPCI hospitals received a hospital-specific report (HSR) detailing the results of the measure, including their RSRR, its interval estimate, the reasons why patients were readmitted, and the hospitals to which their patients were readmitted. The measure uses data that are already routinely collected by hospitals participating in the registry. Furthermore, the administrative data used to identify readmissions are routinely collected as part of the billing process. As such, this measure did not add any incremental cost to participating sites. We did not receive any feedback from sites regarding issues of patient confidentiality or our approach to missing data.  
       
      We received feedback from several hospitals who stated that they erroneously received HSRs stating that they had no eligible cases. After inspecting the data, we determined that these represented cases in which several hospitals submitted cases using the same MPN. In our existing methodology, we excluded these cases due to concerns that we would be unable to accurately attribute cases to a specific hospital. In the future, however, we will be able to overcome this hurdle by using the NCDR’s hospital identifiers to correctly attribute cases to hospitals. 

      Proprietary Information
      Not a proprietary measure and no proprietary components
      Fees, Licensing, or Other Requirements

      The ACCF's National Cardiovascular Data Registry (NCDR) program provides evidence-based solutions for cardiologists and other medical professionals committed to excellence in cardiovascular care. NCDR hospital participants receive confidential benchmark reports that include access to measure macro specifications and micro specifications, the eligible patient population, exclusions, and model variables (when applicable). In addition to hospital sites, NCDR Analytic and Reporting Services provides consenting hospitals' aggregated data reports to interested federal and state regulatory agencies, multi-system provider groups, third-party payers, and other organizations that have an identified quality improvement initiative that supports NCDR-participating facilities. Lastly, the ACCF also allows for licensing of the measure specifications outside of the Registry.  

       

      There are no fees associated with the use of this measure. However, the measure is specified in a manner that requires participation in the NCDR CathPCI registry. Theoretically, one could create a parallel pathway for data submission that would not require registry participation, but this process has not been initiated and would be challenging. For example, there would not be the same efforts to ensure the quality and accuracy of submitted data. 

       

      Measures that are aggregated by ACCF and submitted to the CBE are intended for public reporting and therefore there is no charge for a standard export package. However, on a case by case basis, requests for modifications to the standard export package will be available for a separate charge.  

    • Data Used for Testing

      The specifications for this measure have not changed since the prior review.

       

      Several sections of this application could not be updated, including information on reliability and validity. This uniquely valuable measure was developed when access to CMS claims data was feasible. Current restrictions on access to the same CMS claims data prevent ACC from conducting the patient matching needed to assess longer-term follow-up. Imposing requirements on hospitals to acquire follow-up data themselves during the past two-plus years of the pandemic-induced hospital crisis was not possible. Thus, ACC-NCDR was prevented from linking claims data to our registry data.  

       

      The dataset used for testing included Medicare Part A claims, the National Cardiovascular Data Registry (NCDR) CathPCI Registry, and the Medicare Enrollment Database.  

       

      1. NCDR CathPCI Registry Data  

      This is a national quality improvement registry with more than 1200 participating U.S. hospitals.  Participation is largely voluntary though some states and healthcare systems mandate participation. Rigorous quality standards are applied to the data and both quarterly and ad hoc performance reports are generated for participating centers to track and improve their performance.  

       

      2. Medicare Data 

      Medicare Enrollment Database (EDB): This database contains Medicare beneficiary demographic, benefit/coverage, and vital status information. This dataset was used to obtain information on several inclusion/exclusion indicators, such as Medicare status on admission, and provided the ability to retrieve 90-day follow-up by linking the patient Health Insurance Claim (HIC) number to the Part A data. These data have previously been shown to accurately reflect patient vital status (Fleming et al., 1992). 

      The datasets, dates, number of measured entities, and number of admissions used in each type of testing are as follows. For most testing requirements, we used data and analyses from the original submission to the CBE of the PCI readmission measure. These analyses used a cohort of patients undergoing PCI in 2006-2007 for whom NCDR CathPCI Registry data had been successfully linked with corresponding administrative claims data. However, we also conducted additional analyses to meet newer testing requirements, and these analyses were performed using comparable linked data from 2010-2011. Details are provided below.  

       

      Reliability testing and exclusions testing 

      The measure reliability dataset linked the CathPCI and Medicare Part A claims data from 2010-2011. The combined two-year sample included 277,512 PCIs performed on Medicare FFS patients aged 65 years and older at 1,197 hospitals (mean age 75.15 years; 39.95% female). We then randomly split the sample, leaving 138,756 admissions to 1,190 hospitals in one randomly selected sample and 138,756 admissions to 1,193 hospitals in the remaining sample. After excluding hospitals with fewer than 25 cases in each sample, the first sample contained 970 hospitals and the second sample contained 969 hospitals. 

       

      For data element reliability, we utilized data and analyses from the original measure NQF submission as part of initial measure development. For these analyses, we identified PCI procedures in the CathPCI Registry in which the patient was released from the hospital between January and December 2007. This development sample consisted of 128,745 patient stays at 766 hospitals. For measure testing, we identified a cohort of PCIs in which the patient was released from the hospital between January and December 2006. This validation sample consisted of 117,375 patient stays at 618 hospitals. 

       

      Validity testing 

      Results of validity testing use data from the original measure NQF submission. For measure development, we identified PCI procedures in the CathPCI Registry in which the patient was released from the hospital between January and December 2007. This development sample consisted of 128,745 patient stays at 766 hospitals. For measure testing, we identified a cohort of PCIs in which the patient was released from the hospital between January and December 2006. This validation sample consisted of 117,375 patient stays at 618 hospitals. 

       

      Measure development and risk-adjustment dataset 

      In measure development, we identified PCI procedures in the CathPCI Registry in which the patient was released from the hospital between January and December 2007. We merged PCI admissions in the NCDR CathPCI Registry data with PCI admissions in Medicare claims data to derive cohorts for development using a probabilistic matching methodology. There were 128,745 cases discharged from the 766 hospitals in the development sample, which had a crude readmission rate of 11.1%. 

       

      Differences in Data

      Information on the differences with the data used is described above.  

       

      Characteristics of Measured Entities

      For this measure, hospitals are the measured entities. All non-federal, acute inpatient US hospitals (including territories) that participate in the American College of Cardiology (ACC) NCDR’s CathPCI Registry and care for Medicare Fee-for-Service (FFS) beneficiaries who are 65 years of age or older are included. The number of measured entities (hospitals) varies by testing type as described in the question above.  

      Characteristics of Units of the Eligible Population

      The number of admissions varies by testing type as described in the question above. 

  • Method(s) of Reliability Testing

    The specifications for this measure have not changed since the prior review.

     

    Patient or Encounter-Level Reliability 

    In constructing the measure we aim to utilize only those data elements from the claims that have both face validity and reliability. We avoid the use of fields that are thought to be coded inconsistently across hospitals or providers. Specifically, we use fields that are consequential for payment and which are audited. We identify such variables through empiric analyses and our understanding of CMS auditing and billing policies and seek to avoid variables which do not meet this standard. For example, “discharge disposition” is a variable in Medicare claims data that is not thought to be a reliable variable for identifying a transfer between two acute care facilities. Thus, we derive a variable using admission and discharge dates as a surrogate for “discharge disposition” to identify hospital admissions involving transfers. This allows us to identify these admissions using variables in the claims data which have greater reliability than the “discharge disposition” variable. In addition, CMS has in place several hospital auditing programs used to assess overall claims code accuracy, to ensure appropriate billing, and for overpayment recoupment. CMS routinely conducts data analysis to identify potential problem areas and detect fraud, and audits important data fields used in our measures, including diagnosis and procedure codes and other elements that are consequential to payment. 

     

    In addition, as an example of some of the methods that could be used to ensure data quality, we describe the NCDR's existing Data Quality Program (DQP). The two main components of the DQP are complementary and consist of the Data Quality Report (DQR) and the Data Audit Program (DAP). The DQR process assesses the completeness and validity of the electronic data submitted by participating hospitals. Hospitals must achieve >95% completeness of specific data elements identified as 'core fields' to be included in the registry's data warehouse for analysis. The 'core fields' include the variables in our risk adjustment models. The process is iterative, providing hospitals with the opportunity to correct errors and resubmit data for review and acceptance into the data warehouse. The DAP consists of annual on-site chart review and data abstraction: among participating hospitals that pass the DQR, auditors review random charts of 10% of submitted cases. The CathPCI Registry audit focuses on variables used for the existing PCI mortality models.  

     

    Finally, we assess the reliability of the data elements by comparing model variable frequencies and odds ratios in two years of data. 

     

    Accountable Entity-Level Reliability 

    The reliability of a measurement is the degree to which repeated measurements of the same entity agree with each other. For measures of hospital performance, the measured entity is naturally the hospital, and reliability is the extent to which repeated measurements of the same hospital give similar results. In line with this thinking, our approach to assessing reliability is to consider the extent to which assessments of a hospital using different, but randomly selected, subsets of patients produce similar measures of hospital performance. That is, we take a "test-retest" approach in which hospital performance is measured once using a random subset of patients, then measured again using a second random subset exclusive of the first, and the agreement between the two resulting performance measures is compared across hospitals (Rousson et al., 2002). 

     

    For test-retest reliability, we combined index admissions from successive measurement periods (2010 and 2011) into one dataset, randomly sampled half of the patients within each hospital, calculated the measure for each hospital, and repeated the calculation using the second half. Thus, each hospital is measured twice, but each measurement is made using an entirely distinct set of patients. To the extent that the calculated measures for these two subsets agree, we have evidence that the measure is assessing an attribute of the hospital, not of the patients. Specifically, we calculated the risk-standardized readmission rate (RSRR) for each hospital in each sample and quantified the agreement of the two RSRRs using the intra-class correlation coefficient (ICC) as defined by Shrout and Fleiss (1979), assessing the values according to conventional standards (Landis and Koch, 1977). 
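    For illustration, the ICC for the two split-sample RSRRs could be computed as sketched below. The specific ICC form, ICC(2,1) (two-way random effects, absolute agreement, single measurement), is an assumption; the submission cites Shrout and Fleiss (1979) without stating which form was used.

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """Shrout & Fleiss ICC(2,1) for an (n_hospitals x 2) array holding each
    hospital's RSRR from the two split samples."""
    n, k = scores.shape
    grand_mean = scores.mean()
    row_means = scores.mean(axis=1)   # per-hospital means
    col_means = scores.mean(axis=0)   # per-sample means

    ss_total = ((scores - grand_mean) ** 2).sum()
    ss_rows = k * ((row_means - grand_mean) ** 2).sum()    # between hospitals
    ss_cols = n * ((col_means - grand_mean) ** 2).sum()    # between samples
    ss_error = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_error / ((n - 1) * (k - 1))

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```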

     

    Using two independent samples provides an honest estimate of the measure's reliability, compared with using two random but potentially overlapping samples, which would exaggerate the agreement. Moreover, because our final measure is derived using hierarchical logistic regression, and a known property of hierarchical logistic regression models is that small-volume hospitals contribute less 'signal', a split sample using a single measurement period likely introduces extra noise, potentially underestimating the actual test-retest reliability that would be achieved if the measure were reported using additional years of data. Furthermore, the measure is specified for the entire PCI population, but we tested it only in the subset of Medicare FFS patients for whom information about vital status was available. This reduced the cohort available for testing by approximately 40%. 

     

    References: 

    1) Rousson V, Gasser T, Seifert B. Assessing intrarater, interrater and test–retest reliability of continuous measurements. Statistics in Medicine 2002;21:3431-3446. 

    2) Shrout P, Fleiss J. Intraclass correlations: uses in assessing rater reliability. Psychological Bulletin 1979;86:420-428. 

    3) Landis J, Koch G, The measurement of observer agreement for categorical data. Biometrics 1977;33:159-174 

     

    Reliability Testing Results

    Patient or Encounter-Level Reliability 

     

    Overall, risk factor frequencies changed little across years, and there were no notable differences in the odds ratios across years of data (Table 1 in attachment). 

     

    Accountable Entity-Level Reliability 

     

    In the most recent years of data (2010-2011), there were 277,512 admissions in the combined two-year sample, with 138,756 admissions to 1,190 hospitals in the first randomly selected sample (mean RSRR 12.5%) and 138,756 admissions to 1,193 hospitals in the second randomly selected sample (mean RSRR 12.1%). The agreement (ICC) between the two RSRRs for each hospital was 0.3711, which according to the conventional interpretation is "fair" (Landis & Koch, 1977). Note that the intra-class correlation coefficient is based on a split sample of two years of data, so the volume of patients in each sample is equivalent to only one year of data, whereas the measure is likely to be reported with a full two years of data.  

     

    The stability over time of the risk factor frequencies and odds ratios indicate that the underlying data elements are reliable. Additionally, the ICC score demonstrates fair agreement across samples using a “strict” approach to assessment that would likely improve with greater sample size. 

     

    Reference:  

    Landis J, Koch G. The measurement of observer agreement for categorical data, Biometrics 1977;33:159-174. 

    Interpretation of Reliability Results

    The reliability testing demonstrated that the measure data elements are repeatable, and capable of producing the same results a high proportion of the time when assessed in the same population in the same time period.

  • Method(s) of Validity Testing

    The specifications for this measure have not changed since the prior review.

     

    Measure validity is demonstrated through prior validity testing done on our other measures, through use of established measure development guidelines, by systematic assessment of measure face validity by a technical expert panel (TEP) of national experts and stakeholder organizations, and through registry data validation. 

     

    Validity of Registry Data 

    Data element validity testing was done on the specified measure by comparing with variables in the ACC audit program. The NCDR CathPCI Registry has an established DQP that serves to assess and improve the quality of the data submitted to the registry. There are two complementary components to the Data Quality Program: the Data Quality Report (DQR) and the Data Audit Program (DAP). The DQR process assesses the completeness of the electronic data submitted by participating hospitals. Hospitals must achieve >95% completeness of specific data elements identified as "core fields" to be included in the registry's data warehouse for analysis. The "core fields" encompass the variables included in our risk adjustment models. The process is iterative, providing hospitals with the opportunity to correct errors and resubmit data for review and acceptance into the data warehouse. All data for this analysis passed the DQR completeness thresholds.  

     

    The DAP consists of annual on-site chart review and data abstraction. Among participating hospitals that pass the DQR for a minimum of two quarters, at least 5% are randomly selected to participate in the DAP. At individual sites, auditors review charts of 10% of submitted cases. The audits focus on variables that are used in the NCDR risk-adjusted in-hospital mortality model including demographics, comorbidities, cardiac status, coronary anatomy, and PCI status. However, the scope of the audit could be expanded to include additional fields. The DAP includes an appeals process for hospitals to dispute the audit findings. The NCDR DAP was accepted by the National Quality Forum as part of its endorsement of the CathPCI Registry’s in-hospital risk-adjusted mortality measure.  

     

    Additionally, we compared the model performance in the development sample with its performance in a similarly derived sample from patients discharged in 2006 who had undergone PCI. There were 117,375 cases discharged from the 618 hospitals in the 2006 validation dataset. This validation sample had a crude readmission rate of 10.7%. The performance was not substantively different in this validation sample (ROC=0.663), as compared to the development sample (ROC=0.665). The results show the 2006 and 2007 models are similarly calibrated (see table 2 in the attachment). 

     

    We also examined the temporal variation of the standardized estimates and frequencies of the variables in the development and validation models.  

     

    To assess the predictive ability of the model, we grouped patients into deciles of predicted 30-day readmission and compared predicted readmission with observed readmission for each decile in the derivation cohort. 
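    A minimal sketch of this decile comparison, assuming a vector of model-predicted probabilities and a 0/1 outcome vector aligned on the same index, is shown below.

```python
import pandas as pd

def decile_calibration_table(predicted: pd.Series, observed: pd.Series) -> pd.DataFrame:
    """Group patients into deciles of predicted 30-day readmission risk and
    compare mean predicted risk with the observed readmission rate per decile."""
    deciles = pd.qcut(predicted, 10, labels=False, duplicates="drop") + 1
    return (
        pd.DataFrame({"decile": deciles, "predicted": predicted, "observed": observed})
        .groupby("decile")
        .agg(mean_predicted=("predicted", "mean"),
             observed_rate=("observed", "mean"),
             n=("observed", "size"))
    )
```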

     

    To evaluate model performance after the re-specification to Version 4 variables, we compared the odds ratios (OR) and c-statistics in 2008 Version 3 data and 2010 Version 4 data.  

     

    Validity as Assessed by External Groups 

    During original measure development and in alignment with the CMS Measures Management System (MMS), we released a public call for nominations and convened a TEP. The purpose of convening the TEP was to obtain input and feedback during measure development from a group of recognized experts in relevant fields. The TEP represented physician, consumer, hospital, and purchaser perspectives, with members chosen to reflect a diverse set of perspectives and backgrounds. 

     

     

    Validity Testing Results

    The performance of the development and validation samples is similar. The areas under the receiver operating characteristic (ROC) curve are 0.665 and 0.663, respectively, for the two samples. In addition, they are similar with respect to predictive ability. For the development sample, the predicted readmission rate ranges from 4% in the lowest predicted decile to 25% in the highest predicted decile, a range of 21%. For the validation sample, the corresponding range is 4% to 24%, a range of 20%. 

    Additionally, the frequencies and regression coefficients are fairly consistent over the two years of data. Also, there was excellent correlation between predicted and observed readmission.  

     

    We estimated hospital-level RSRRs using the corresponding hierarchical logistic regression models for the linked patient sample with cases performed in 2010-2011. We then examined the linear relationship between the two sets of estimates using regression techniques, weighting by the total number of cases in each hospital. The correlation coefficient of the standardized rates from the administrative and medical record models is 0.999. 

     

    The c-statistic for the 2010, Version 4 model was 0.680. This is a negligible change from the 2008, Version 3 model, which had a c-statistic of 0.676. Odds ratios in both data years are comparable, further indicating that model performance was not significantly altered by re-specification to Version 4 variables. The current model can use the Version 4 registry data. 

    Interpretation of Validity Results

    The audits conducted by the ACC support the overall validity of the data elements included in this measure. The data elements used for risk adjustment were consistently found for all patients and were accurately extracted from the medical record.  

     

    Additionally, the results from the development and validation samples were similar in each of the model tests performed. The ROC results were nearly identical. The correlation between the resulting RSRRs calculated from both models was 0.999, demonstrating close agreement between the two sets of estimates.  

  • Methods used to address risk factors
    Conceptual Model Rationale

    We sought to develop a model that included key variables that were clinically relevant and based on strong association with 30-day readmission.  

     

    To create a model with increased usability while retaining excellent model performance, we tested the performance of the model without those variables considered to be of questionable feasibility. To select candidate variables, a team of clinicians reviewed all variables in the NCDR CathPCI Registry database (a copy of the data collection form and the complete list of variables collected and submitted by hospitals can be found at www.ncdr.com). We did not consider as candidate variables those that we would not want to adjust for in a quality measure, such as potential complications, certain patient demographics (e.g., race, socioeconomic status), and patients' admission path (e.g., admitted from a skilled nursing facility [SNF]). 

     

    Based on careful clinical review and further informed by a review of the literature, a total of 29 variables were determined to be appropriate for consideration as candidate variables (Table 3 in attachment).  

     

    For categorical variables with missing values, the value from the reference group was assigned. The percentage of missing values for all categorical variables was very small (<1%). There were three continuous variables with missing values: body mass index (BMI, 0.1%), glomerular filtration rate (GFR, 3.7%), and left ventricular ejection fraction (LVEF, 28.5%). We treated missing GFR and LVEF values as an independent "unmeasured" category; for BMI, we stratified by gender and imputed missing values to the median of the corresponding group. 
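    The missing-value handling described above might look like the following sketch; the column names ('gfr', 'lvef', 'bmi', 'sex') are assumptions, not the registry's actual field names.

```python
import pandas as pd

def handle_missing_values(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrative handling of missing values as described above."""
    out = df.copy()

    # GFR and LVEF: represent missingness as its own "unmeasured" category via
    # an indicator variable alongside the original value.
    for col in ["gfr", "lvef"]:
        out[f"{col}_unmeasured"] = out[col].isna().astype(int)

    # BMI: impute missing values with the sex-stratified median.
    out["bmi"] = out["bmi"].fillna(out.groupby("sex")["bmi"].transform("median"))

    return out
```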

     

    We used logistic regression with stepwise selection (entry p<0.05; retention p<0.01) for variable selection. We also assessed the direction and magnitude of the regression coefficients. This resulted in a final risk-adjusted readmission model that included 20 variables (Table 4 in the attachment). The variables cover demographics (age and gender), history and risk factors, cardiac status (heart failure, symptoms present on admission), cath lab visit findings (ejection fraction percentage), and the PCI procedure (PCI status, highest-risk lesion, highest pre-procedure TIMI flow). 

    Risk Factor Characteristics Across Measured Entities

    Descriptive statistics on the distribution of the identified risk variables across measured entities were not previously required; as a result, we do not have these data to share.  

    Risk Adjustment Modeling and/or Stratification Results

    Please see Table 5 in the attachment.

    Calibration and Discrimination

    Approach to assessing model performance 

    During measure development, we computed three summary statistics for assessing model performance (Harrell and Shih, 2001) for the development and validation cohort: 

     

    Discrimination Statistics: 

    (1) Area under the receiver operating characteristic (ROC) curve (the c-statistic, or ROC area, is the probability that the model assigns a higher predicted risk to a randomly selected patient who experiences the outcome than to a randomly selected patient who does not; it measures how well the model distinguishes between patients with and without the outcome). 

    (2) Predictive ability (discrimination in predictive ability measures the ability to distinguish high-risk subjects from low-risk subjects; we would therefore hope to see a wide range of predicted rates between the lowest and highest deciles). 

     

    Calibration Statistics: 

    (3) Over-fitting indices (over-fitting refers to the phenomenon in which a model accurately describes the relationship between predictive variables and outcome in the development dataset but fails to provide valid predictions in new patients) 

     

    We compared the model performance in the development sample with its performance in a similarly derived sample from patients discharged in 2006 who had undergone PCI. There were 117,375 cases discharged from the 618 hospitals in the 2006 validation dataset. This validation sample had a crude readmission rate of 10.7%. We also computed statistics (1) and (2) for the current measure cohort, which includes discharges from 2010-2011. 

    For the development cohort: C-statistic = 0.665; predictive ability (lowest decile, highest decile) = 4.05%, 25.08%; calibration (γ0, γ1) = (0.00, 1.00). 

     

    For the validation cohort: C-statistic = 0.663; predictive ability (lowest decile, highest decile) = 3.80%, 23.80%; calibration (γ0, γ1) = (-0.06, 0.99). 

     

    For the current measure cohort (combined data from 2010 and 2011): C-statistic = 0.668; predictive ability (lowest decile, highest decile) = 4.2%, 26.1%; calibration (γ0, γ1) = (-0.004, 1.008). 

     

    The risk decile plot is a graphical depiction of the deciles calculated to measure predictive ability. The risk decile plot showing the distributions for the current measure cohort is presented in Figure 2 in the attachment. 

     

    Discrimination Statistics 

    The C-statistics of 0.665, 0.663, and 0.668 indicate good model discrimination. Readmission, as opposed to other outcomes such as mortality, consistently has a lower c-statistic, even in medical record models. This is likely because readmission is determined less by patient comorbidities and more by health system factors. The model showed a wide range between the lowest and highest deciles, indicating the ability to distinguish high-risk patients from low-risk patients. 

     

    Calibration Statistics 

    Over-fitting (Calibration γ0, γ1)  

    If γ0 in the validation sample is substantially far from zero and γ1 is substantially far from one, there is potential evidence of over-fitting. Values of γ0 close to 0 and γ1 close to 1 indicate good calibration of the model. 
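    One standard way to obtain the over-fitting indices (γ0, γ1) is to regress the observed validation outcomes on the development model's linear predictor; whether this matches the exact computation used for the measure is an assumption. A sketch:

```python
import numpy as np
import statsmodels.api as sm

def calibration_gamma(y_valid: np.ndarray, p_dev: np.ndarray) -> tuple[float, float]:
    """Estimate (gamma0, gamma1) by fitting a logistic model of the observed
    validation outcomes on the log-odds produced by the development model."""
    logit_p = np.log(p_dev / (1.0 - p_dev))   # development-model linear predictor
    X = sm.add_constant(logit_p)              # columns: [1, logit_p]
    fit = sm.Logit(y_valid, X).fit(disp=0)
    gamma0, gamma1 = fit.params               # intercept and slope
    return float(gamma0), float(gamma1)
```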

     

    Risk Decile Plots 

    Higher deciles of the predicted outcomes are associated with higher observed outcomes, which show a good calibration of the model. This plot indicates excellent discrimination of the model and good predictive ability. 

     

    Overall Interpretation  

    Interpreted together, our diagnostic results demonstrate the risk-adjustment model adequately controls for differences in patient characteristics (case mix). 

    Interpretation of Risk Factor Findings

    See fields above.

    Final Approach to Address Risk Factors
    Risk adjustment approach
    On
    Risk adjustment approach
    Off
    Specify number of risk factors

    20

    Conceptual model for risk adjustment
    Off
    Conceptual model for risk adjustment
    On
  • Contributions Towards Advancing Health Equity

    optional question

  • Current Use(s)
    Why the measure is not in use
    Efforts to gain access to CMS claims data have been unsuccessful. Should these data become available to the ACC, we will provide a detailed plan of implementation.
    • Name of the program and sponsor
      N/A
      Purpose of the program
      N/A
      Geographic area and percentage of accountable entities and patients included
      N/A
      Applicable level of analysis and care setting

      N/A

    Actions of Measured Entities to Improve Performance

    In general, registry participants receive feedback through quarterly benchmark reports. These reports contain detailed analyses of the institution's performance compared to national aggregates. A thorough understanding of performance can be gleaned from the executive summary dashboard, which contains visual displays of metric performance as well as patient-level drill-downs. Sites can export their information into their own Excel spreadsheets to conduct their own analyses. Supporting documentation in the form of the coder's dictionary and outcomes reports guide provides additional support for sites. Monthly Registry Site Manager (RSM) calls, sessions at the NCDR annual conference, and access to Clinical Quality Associates are other ways that sites can get updates on data interpretation.  

    Feedback on Measure Performance

     Because ACC is unable to currently report on the measure, we have not received any new feedback on measure performance and implementation.  

     

    Once implemented, feedback will be collected during monthly RSM calls, ad hoc phone calls tracked with Salesforce software, and registry-specific break-out sessions at the NCDR's annual meeting. Registry Steering Committee members may also provide feedback during regularly scheduled calls. 

    Consideration of Measure Feedback

    Because ACC is unable to currently report on the measure, we have not received any new feedback on measure performance and implementation.  

    Progress on Improvement

    Because ACC is unable to currently report on the measure, we have not received any new feedback on measure performance and implementation.  

    Unexpected Findings

    In the first year of measure implementation, we did not find evidence of unintended negative consequences to individuals or populations.  
     
    Ensuring data quality is critical so that the RSRRs can provide fair and accurate estimates of outcomes across hospitals. However, all data sources are potentially prone to misclassification. Accordingly, adequate mechanisms need to be implemented to ensure data quality (such as monitoring data for variances in case mix, chart audits, and possibly adjudicating cases that are vulnerable to systematic misclassification). The NCDR CathPCI registry has successfully implemented methods to ensure the quality of data used for the risk adjustment methodology.  
     
    Studies suggest that public reporting of the outcomes of cardiovascular procedures may have unintended consequences. Moscucci and colleagues compared the characteristics and outcomes of patients undergoing PCI in states with (New York) and without (Michigan) public reporting and found that patients undergoing PCI in New York were at substantially lower risk than PCI patients in Michigan. Determining the underlying causes and appropriateness of these differences is impossible, but there is concern that physicians in states that publicly report PCI outcomes would either refer high-risk cases to states without public reporting or avoid such cases altogether. Implementing a national measure of PCI outcomes would avoid the former problem in that public reporting would be consistent across states. Nevertheless, the measure requires close attention to the possibility that high-risk patients are not receiving PCI when clinically indicated.  
     
    Continued measure implementation will require close attention to data quality. Potential solutions include continued chart audits and attention to variances in case mix.  
     
    Reference  
    Moscucci M, Eagle KA, Share D, et al. Public Reporting and Case Selection for Percutaneous Coronary Interventions: An Analysis From Two Large Multicenter Percutaneous Coronary Intervention Databases. Journal of the American College of Cardiology. 2005;45(11):1759-1765. 

  • Most Recent Endorsement Activity
    Cost and Efficiency Fall 2023
    Initial Endorsement
    Endorsement Status
    E&M Committee Rationale/Justification

    Endorsement was removed due to lack of consensus. The committee raised concerns about the lack of updated data, both for determining whether a performance gap exists and for assessing scientific acceptability. The measure is also not in use, which makes it challenging to know whether performance on the measure is improving over time.

    Removal Date
  • Do you have a secondary measure developer point of contact?
    On
    Measure Developer Secondary Point Of Contact

    Katie Goodwin
    American College of Cardiology
    2400 N St NW
    Washington, DC 20037
    United States

    Measure Developer Secondary Point Of Contact Phone Number
    The measure developer is NOT the same as measure steward
    Off
    Steward Address

    United States

  • Detailed Measure Specifications
    Yes
    Logic Model
    On
    Impact and Gap
    Yes
    Feasibility assessment methodology and results
    Yes
    Measured/accountable entity (reliability and/or validity) methodology and results (if available)
    Address health equity
    Yes
    Measure’s use or intended use
    Yes
    Risk-adjustment or stratification
    Yes, risk-adjusted only
    508 Compliance
    On
    If no, attest that all information will be provided in other fields in the submission.
    Off
    • Submitted by MPickering01 on Mon, 01/08/2024 - 19:51


      Importance

      Importance Rating
      Importance

      Strengths:

      • Literature review shows that patient outcomes following PCI can be affected by clinician decisions, including choice of anticoagulant or device type, and that patients treated at hospitals with active PCI QI programs have better outcomes than those treated in hospitals with no such program. Readmission rates are affected by quality of inpatient and outpatient care.
      • Performance scores range from a minimum of 8.5% to a maximum of 15.4% (median 11.7%), showing variation between hospitals.
      • Patients on the TEP "generally" indicated that outcomes such as readmission rates in 30 days are useful for decision-making purposes.

      Limitations:

      • Literature review suggests a relationship between low quality of PCI-related care and poor outcomes that are likely correlated with readmission, but only one cited study (Mols et al. 2019) appears to connect poor quality in PCI care specifically to readmission rates.
      • Developer "is not currently able to use this data source as Medicare claims are not currently available for performance measure reporting. This has limited our ability to update and report this measure."
      • Most recent performance data are from 2010-2011 (i.e., more than a decade old). The developer should provide more clarity on the data access issue.

      Rationale:

      • Literature cited provides support for the relationship between quality of care related to PCI and poor outcomes, and the relationship between poor quality and readmission, but only one cited study (Mols et al. 2019) appears to connect poor quality in PCI care specifically to readmission rates. Evidence for the meaningfulness to patients is derived from their TEP.
      • Developer reports they are unable to show more recent performance scores due to a CMS restriction. A performance gap may still exist, but the data available are not recent, which challenges the continued business case for the measure. Performance data show a range in rates of readmission in 30 days: median 11.7%, range 8.5-15.4% across hospitals; these data were from 2010-2011. The developer should provide more clarity on the data access issue.

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      Strengths:

      • All data elements are available in defined fields in electronic clinical data; data used for the measure are already routinely being collected by hospitals participating in the NCDR CathPCI Registry. Claims data are used to identify readmissions and registry data are used for risk adjustment factors. There is no fee for use of the measure.

      Limitations:

      • While there is no fee for using the measure, the measure as specified currently requires participation in the National Cardiovascular Data Registry (NCDR) CathPCI registry and a parallel pathway for data submission does not yet exist and would be challenging to implement.

      Rationale:

      • All data elements are available in defined fields in electronic clinical data, and there are no fees associated with reporting the measure.
      • The measure as specified requires participation in the NCDR CathPCI registry, and a parallel pathway for reporting has not been implemented; developers do not report the proportion of hospitals that do not participate in the registry.

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      Strengths:

      • The measure is well-defined and precisely specified.
      • The sample included 277,512 admissions in the combined two-year sample, with 138,756 admissions to 1,190 hospitals in the first randomly selected sample and 138,756 admissions to 1,193 hospitals in the second randomly-selected sample.

      Limitations:

      • The submission shows odds ratios as evidence of patient/encounter level reliability but this is not an accepted method for assessing patient/encounter level reliability.
      • Split-half reliability ICC was 0.3711, below the threshold of 0.6.
      • Data were collected more than a decade ago (2010-2011).

      Rationale:

      • Measure score reliability testing (accountable entity level reliability) was performed. Split-half reliability ICC was 0.3711, below the threshold of 0.6; a sketch of this type of calculation follows below. Report states that, as of Fall 2023, claims data use is currently restricted and unavailable to support performance measures. This would probably limit the ability of the measure developer to gather new data to improve reliability. The developer should provide more clarity on the data access issue.
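
      A minimal sketch of this type of split-half calculation, on simulated data: hospital-level RSRRs are estimated in two randomly split half-samples and compared with a one-way random-effects ICC. Python is assumed for illustration, and the developer's exact ICC variant and estimation procedure may differ.

        import numpy as np

        def icc_oneway(x1, x2):
            """One-way random-effects ICC(1,1) between two sets of hospital RSRRs."""
            x = np.column_stack([x1, x2])            # shape: (n_hospitals, 2 half-samples)
            n, k = x.shape
            hosp_means = x.mean(axis=1)
            grand_mean = x.mean()
            msb = k * np.sum((hosp_means - grand_mean) ** 2) / (n - 1)    # between-hospital mean square
            msw = np.sum((x - hosp_means[:, None]) ** 2) / (n * (k - 1))  # within-hospital mean square
            return (msb - msw) / (msb + (k - 1) * msw)

        # Simulated RSRRs for ~1,200 hospitals in two random half-samples of admissions.
        rng = np.random.default_rng(2)
        signal = rng.normal(0.117, 0.010, size=1_200)       # hospital-level "true" rates
        half1 = signal + rng.normal(0, 0.013, size=1_200)   # noisy estimate from half-sample 1
        half2 = signal + rng.normal(0, 0.013, size=1_200)   # noisy estimate from half-sample 2
        print(f"Split-half ICC = {icc_oneway(half1, half2):.3f}")
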
      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      Strengths:

      • NCDR CathPCI data elements are validated through an audit process and include the risk adjustment factors.
      • The measure is risk-adjusted for patient risk factors, such as demographics (age, gender), history and risk factors, cardiac status (heart failure, symptoms present on admission), cath lab visits (ejection fraction percentage), and PCI procedure (PCI status, highest risk lesion, highest pre-procedure TIMI flow). The risk adjustment model discriminates between high-risk and low-risk patients. Risk factors used in the model were identified through literature and clinical review of the variables available in the registry; 29 variables were tested and 20 were retained in the final model.

      Limitations:

      • One exclusion is related to bundled claims - cases are excluded if "not the first claim in the same claim bundle" to avoid double-counting the index ICD procedure
      • Developers do not report results of empiric validity testing performed at the accountable entity level; all validity testing performed appears to be at the data element level. Developer refers to face validity established via TEP but does not report the results of any vote.
      • C-statistics were rated as "good" by the developer, but it is unclear whether this is the appropriate threshold (other sources use 0.80-0.89 to denote good discrimination); however, developers argue that the wide range between low- and high-decile patients indicates the ability to discriminate between low- and high-risk patients.

      Rationale:

      • The data elements for this measure have been validated (2006-2007); risk factors, demographics, and PCI status are validated through routine audit of registry data. The developer should provide more clarity on the data access issue.
      • The risk adjustment model incorporates 20 patient risk factors: demographics (age, gender), history and risk factors, cardiac status (heart failure, symptoms present on admission), cath lab visits (ejection fraction percentage), and PCI procedure (PCI status, highest risk lesion, highest pre-procedure TIMI flow).
      • Accountable-entity level validity testing appears to be limited to face validity established via the TEP (voting results not reported).

      Equity

      Equity Rating
      Equity

      Strengths:

      • N/A

      Limitations:

      • Developer did not address this optional criterion.

      Rationale:

      • Developer did not address this optional criterion.

      Use and Usability

      Use and Usability Rating
      Use and Usability

      Strengths:

      • Registry participants receive feedback via quarterly benchmark reports; a dashboard visually displays metrics and allows "patient-level" drill downs; monthly calls and sessions at the NCDR national conference are other venues where providers can learn about data and interpretation.
      • Developer outlines a plan for collecting feedback via monthly calls, ad hoc meetings, and NCDR annual meetings.
      • Developer reports no evidence of unexpected findings from "first year of implementation", and describes a potential unintended consequence identified in the literature, i.e., that states that report PCI outcomes may refer high-risk patients to non-reporting states.

      Limitations:

      • Not in use; no plans for use have been made due to inaccessibility of CMS claims data.
      • Developer reports that no new feedback has been received due to their inability to report on the measure (see claims data reporting issue this developer mentions).
      • No performance gap or improvement on the measure can be reported currently.

      Rationale:

      • Developers outline a plan for providers to receive performance information via benchmark reports and a dashboard, a plan for collecting feedback, and a plan for identifying unexpected findings.
      • The measure is currently not in use in any program, and no data on performance gap or performance improvement is reported. There are no unintended consequences reported, though developers identify a possible risk of providers referring high-risk patients elsewhere.

      Summary

      N/A

    • Submitted by Dan Halevy MD … on Fri, 01/12/2024 - 17:08


      Importance

      Importance Rating
      Importance

      Direct and indirect evidence supports the association between PCI treatments and complications, which may lead to readmissions. Whether the gap remains is unknown based on the information provided. Data are not available from the most recent decade. The developer mentioned that a technical expert panel involved patients and caregivers, but more detail would be expected.

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      The measure is based on electronic data from claims and a registry.

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      Reliability testing from the original data set does not appear to support minimum standards for reliability. 

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      The approach to validity testing using the ACC's audit program appears reasonable to confirm the accuracy of the data elements in the registry. 

      Equity

      Equity Rating
      Equity

      May be addressed through an analysis of demographic data included in the registry.

      Use and Usability

      Use and Usability Rating
      Use and Usability

      The measure has not been in use, and the developer reports problems in obtaining more recent data.

      Summary

      N/A

      Submitted by Christopher Dezii on Sun, 01/14/2024 - 14:13


      Importance

      Importance Rating
      Importance

      There is little support (one study) for any nexus between quality of care related to PCI and outcomes, or for any linkage between quality and readmission.

      Performance data are quite dated, with no prospect of updating due to the CMS restriction.

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      Registry-derived data reflect optimum feasibility, though the measure does require participation in the registry, which represents a low hurdle in my opinion.

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      As identified, the split-half reliability ICC of 0.3711 fell below the threshold of 0.6. Claims data use is currently restricted and unavailable to support performance measures, which represents a fatal flaw in the execution of this metric as well as limiting any update of the old data (2010/2011).

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      Validity asserted but not proven/identified. Thresholds identified as "good" appear poor to this observer

      Equity

      Equity Rating
      Equity

      Could not find any information on equity in the application.

      Use and Usability

      Use and Usability Rating
      Use and Usability

      The measure is not in use, no reporting on performance can be provided, and no feedback has been received due to inaccessible data; therefore, the metric is not fit for use and is not usable.

      Summary

      The measure is not in use, and it appears that it cannot be used for its intended purpose; therefore, I do not feel the measure should go forward.

      Submitted by Hal McCard on Wed, 01/17/2024 - 11:10


      Importance

      Importance Rating
      Importance

      Agree with staff assessment

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      Agree with staff assessment

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      Agree with staff assessment

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      Agree with staff assessment

      Equity

      Equity Rating
      Equity

      Not addressed

      Use and Usability

      Use and Usability Rating
      Use and Usability

      Agree with staff comments

      Summary

      I agree with the staff comments and notes regarding the issues with the measure

      Submitted by Margaret Woeppel on Thu, 01/18/2024 - 12:43


      Importance

      Importance Rating
      Importance

      NA

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      NA

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      Has there been current research on what percentage of readmissions are attributable to the PCI? Local anecdotal information shows that up to 40% of readmissions may not be attributable to the original complaint/procedure. 

      What is the compliance rate for hospitals completing the readmission portion of the CathPCI registry? 

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      NA

      Equity

      Equity Rating
      Equity

      NA

      Use and Usability

      Use and Usability Rating
      Use and Usability

      NA

      Summary

      NA

      Submitted by John Martin on Thu, 01/18/2024 - 15:31


      Importance

      Importance Rating
      Importance

      Literature, patient feedback, and expert face validity indicate the importance of this measure.

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      The registry data are not publicly available, and according to the measure developer, they cannot access the necessary Medicare data. If the data were made publicly available, the measure would be replicable. 

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      Unacceptable validity measures, and the discriminatory statistics are poor despite the measure developer's comments. 

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      Unacceptable validity measures, and the discriminatory statistics are poor despite the measure developer's comments. 

      Equity

      Equity Rating
      Equity

      Didn't address.

      Use and Usability

      Use and Usability Rating
      Use and Usability

      Could make data publicly available, but currently it is not. 

      Summary

      The measure developer is relying on very old analyses (over a decade old) and has not updated the submission to meet the current standards. Additionally, the measure statistics do not support a well performing measure. 

      Submitted by Dmitriy Poznyak on Thu, 01/18/2024 - 16:48


      Importance

      Importance Rating
      Importance

      Agree with the staff assessment. 

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      This measure uses a combination of the registry and claims data and it appears to be feasible. 

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      If I am reading the report correctly, the reliability of the measure was computed using data that are nearly 15 years old (2010-2011). This is a substantial limitation. Second, the intraclass correlation between the RSRRs in each sample was 0.37, which is below the acceptable standard. I do not agree with the developers' conclusion that the measure can produce the same results a high proportion of the time when assessed in the same population in the same time period.   

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      The method chosen to support the empiric validity of the measure at the accountable entity level is questionable. The developers compared the performance of the risk-adjustment model in the development and validation samples. While this would allow the developers to assess the validity of the model, this does not support the validity of the measure per se. 

      Equity

      Equity Rating
      Equity

      Not addressed. 

      Use and Usability

      Use and Usability Rating
      Use and Usability

      Agree with the staff assessment. 

      Summary

      This measure is not in use and it is not clear whether it can be used for its intended purpose in the future. As tested, the measure does not meet any of the CBE's criteria, apart from feasibility. 

      Submitted by Seth Morrison on Thu, 01/18/2024 - 17:56


      Importance

      Importance Rating
      Importance

      The concept is of interest to patients and clinicians. Unfortunately, the measure relies on information not generally available and has not been updated or revalidated.

       

      The application reports that this measure was developed with input from a technical expert panel that includes patient and caregiver representation. However, there is no information on the number of patients and caregivers or on their demographics. In a room populated by “experts” and clinicians, some patients will be intimidated and accept their opinions. Efforts to ensure open patient and caregiver input are not documented.

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      With key data not available, this measure is not feasible at this time.

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      Data is not available to assure the scientific acceptability of this submission

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      Data is not available to assure the scientific validity of this submission

      Equity

      Equity Rating
      Equity

      While this domain is not required, the conditions requiring percutaneous coronary intervention (PCI) are much more prevalent among populations of color and patients of lower socioeconomic status. A measure that does not take these factors into account for this treatment is not useful for many of the patients most in need of the information. 

      Use and Usability

      Use and Usability Rating
      Use and Usability

      With limited data that are more than 10 years old, it is not possible to document the use and reliability of this measure.

      Summary

      It is not clear why this application with so much old or limited data was submitted.

      Submitted by Harold Miller on Fri, 01/19/2024 - 07:38


      Importance

      Importance Rating
      Importance

      It is very important to assess whether patients receiving PCIs experience adverse outcomes, and to compare facilities performing PCIs to identify those with better or worse outcomes. However, this measure does not provide a valid or reliable assessment of PCI quality. Moreover, it is applicable only to a subset of the facilities that perform PCIs (hospitals that participate in the CathPCI registry, excluding non-participating hospitals and ambulatory surgery centers) and a small subset of patients (patients over 65 on traditional Medicare).

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      Calculation of the measure requires data from two different sources – the CathPCI Registry and Medicare claims data, and the data have to be linked.  Not all hospitals participate in the CathPCI Registry, and the data from the registry are only accessible to the measure developers.  The claims data from Medicare are apparently not accessible by the measure developers.  Consequently, it is not clear how the measure can actually be computed. 

       

      No information is provided about the proportion of CathPCI registry patients that cannot be matched to Medicare claims data.

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      In contrast to other developers of similar types of measures, the developers of this measure attempted to assess test-retest reliability of the measure over multiple years, rather than only using a single year of data.  Although only a limited amount of information was provided about the results of that assessment, that information indicates that the measure has an unacceptably low reliability, particularly if the measure is to be used for public reporting or payment. In addition, the data used were very old, so it is not clear what the reliability would be if the measure were to be used today.

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      This is not a valid measure of the quality of PCIs due to serious problems with both the numerator and denominator. 

       

      The numerator is both too broad and too narrow:

      • All hospital readmissions are included, regardless of whether they have anything to do with the PCI procedure, the aftercare related to the procedure, or the care associated with the patient’s underlying cardiovascular condition.  Patients who are admitted to the hospital for unrelated problems are treated as a failure by the hospital or interventional cardiologists to provide appropriate cardiovascular care.  No information is presented to show what percentage of readmissions following a PCI are related or unrelated to the procedure or the cardiovascular condition that prompted it.
      • Only hospital readmissions are included.  If a patient comes to the emergency department with a complication of the PCI but is not admitted to the hospital, that is not counted as part of the numerator. If a patient dies after discharge from the hospital, that is also treated as a “success” since there is no readmission. 

      The denominator only includes PCIs performed in hospitals, not PCIs performed in ambulatory surgery centers (ASCs).  Not only does this prevent comparing the outcomes of PCIs performed in the two settings, it creates geographic biases in the measure results.  Because a hospital has greater capabilities to address complications than an ASC, patients whom physicians believe to be at higher risk of such complications will be more likely to have the PCI performed in a hospital than in an ASC.  As a result, the rate of hospital readmissions will likely be higher for the patients who receive a PCI at a hospital.  The availability of ambulatory surgery centers performing PCIs varies significantly across communities, which means the proportion of all PCIs performed in hospitals will also vary significantly.  For example, there are fewer ASCs in states with Certificate of Need laws, and there are fewer ASCs in small communities and rural areas simply because of the smaller number of patients.  As a result, hospitals in states and parts of states where there are more ASCs will have a smaller and higher-risk group of PCI patients than other hospitals do, and so they will likely have higher rates of unplanned hospital visits after the procedure.  It is not clear whether differences in the patients will be fully captured by the patient characteristics included in the risk adjustment model, and the measure does not adjust for the proportion of total PCIs in the community that are performed at the hospital, so hospitals that perform a smaller proportion of the total PCIs in their community could inappropriately appear to be delivering lower-quality care. 

      Equity

      Equity Rating
      Equity

      Patients who have health problems other than heart disease are more likely to have ED visits and hospital admissions than other patients.  In addition, patients with limited access to primary care and/or access to specialty care for other health problems are more likely to have ED visits and hospital admissions for those other problems.  This means that a hospital that treats a higher proportion of patients with these characteristics will have a higher all-cause readmission rate than a hospital that does not, even if the quality of PCI care is the same.

       

      In addition, many patients receiving a PCI at a hospital may come from a distant or rural community. The hospital where the PCI is performed will have limited ability to influence the post-discharge care of these patients, so the readmission rate for these patients may be higher for both problems related to the PCI and health problems that are unrelated to the PCI.  This means that a hospital that treats a higher proportion of patients from rural areas may have a higher readmission rate. 

       

      Since access to care in the patient’s community is not controlled for in the risk adjustment model, a hospital that has more patients with poor access to community care will inappropriately appear to be delivering lower-quality PCI care.  This could discourage hospitals from treating patients who live in rural and low-income communities, exacerbating inequities in access and outcomes.

      Use and Usability

      Use and Usability Rating
      Use and Usability

      This measure is only applicable to a subset of the facilities that perform PCIs (hospitals that participate in the CathPCI registry, excluding non-participating hospitals and ambulatory surgery centers), and it only measures readmissions for a small subset of patients (patients over 65 on traditional Medicare). In addition, the measure does not provide a valid or reliable assessment of PCI quality even for those facilities and patients. As such, it cannot be used by patients to determine where they should receive a PCI, and it should not be used to adjust payment to facilities based on the quality of care they provide. 

      Summary

      Endorsement should be removed from this measure.  There is no business case for using it.  It is not a valid or reliable measure of the quality or efficiency of PCIs due to problems with the numerator, the denominator, and the risk adjustment methodology.  Public reporting of the results could mislead patients about where they should receive PCIs, and use of the measure for public reporting or for modifying hospital payments could worsen disparities in access and outcomes for patients. 

      Submitted by Kim on Fri, 01/19/2024 - 22:42


      Importance

      Importance Rating
      Importance

      This measure lacks specific details regarding patient involvement in the measure development process, references to relevant literature on patient perspectives, and expected effects of the measure on outcomes. 

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      The measure relies on electronic clinical data from the NCDR CathPCI Registry, indicating the use of digital or electronic sources. 

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      Data are outdated (more than 10 years old: 2010-2011).

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      The submission lacks the necessary data to ensure its scientific validity.

      Equity

      Equity Rating
      Equity

      Developer did not address this category.

      Use and Usability

      Use and Usability Rating
      Use and Usability

      This measure is currently not in use and efforts to gain CMS data have been unsuccessful. 

      Summary

      N/A

      Submitted by Ben Schleich on Sat, 01/20/2024 - 13:10


      Importance

      Importance Rating
      Importance

      The measure proposal, as currently reported, has several flaws. 

      First, the data used are heavily outdated, and insufficient research supports the benefit.

      Second, the claims data are meant to be linked to voluntary participation in the registry, which is a main barrier and raises a significant question about how this measure will be utilized.

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      Voluntary registry participation is a main component, as registry data must be linked to the claims data to assess planned versus unplanned readmissions. While this is feasible, it would be beneficial if registry data reporting were mandatory or if another reporting pathway existed.

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      The measure score was below acceptable thresholds and thus should not be accepted.

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      Agree with staff review.

      Equity

      Equity Rating
      Equity

      Not addressed.

      Use and Usability

      Use and Usability Rating
      Use and Usability

      Agree with staff review.

      Summary

      The proposed measure has too many flaws to be considered for implementation or even for a vote.

      Submitted by william golden on Sat, 01/20/2024 - 13:42


      Importance

      Importance Rating
      Importance

      Tracking readmissions and complications is important for health system monitoring.

      Who should be the accountable party?

      It would seem that the performing physician has more control over the follow up and immediate outcome than the institution. The accountable party should be the performing physician or his/her group practice and not the acute care facility. 

      I have difficulty with the conceptual design of this measure.

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      The document claims that the extractable items are available in the electronic medical records, but this was based on data collections from long ago. EMRs are not very good at consistent data collection across platforms or installations (of the same software) within a health system. The assumption that the data fields can consistently collect this information in all settings needs to be validated and will likely uncover concerns.

       

      All-cause readmission is another issue for consideration. Ten years ago (time goes by), when we designed the first alternative payment models for a CMMI implementation grant, we counted readmissions and acute events toward total cost of care, but we limited all-cause events to the first 72 hours post discharge and counted only procedure-related events thereafter. All-cause events were acceptable to our practicing community within that window, but not after it. This is a serious design flaw and injures the face validity of the measure. 

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      agree with staff 

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      agree with staff with added concern about reliability of EMR data extraction across sites 

      Equity

      Equity Rating
      Equity

      agree with staff 

      Use and Usability

      Use and Usability Rating
      Use and Usability

      accountable party should be performing provider or group practice 

      Summary

      Must question why the facility and not the performing provider is the accountable party for this kind of measure.  Need better proof of consistency of data collection from EMR platforms.

      Submitted by DannyvL HealthHats on Sat, 01/20/2024 - 13:53


      Importance

      Importance Rating
      Importance
      • Does it address the impact of improvements in operator expertise, algorithms, and calcium modification when comparing rates over time or setting case mix? 
      • Outdated performance information, unable to adjust case mix
      • Patient and caregiver input is sketchy. How many of each? What comments did they have beyond "readmission is important"?

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance
      • Can't access Medicare data
      • It is not clear what proportion of hospitals can't access the CathPCI Registry

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability
      • Based on very old data, a lot has changed in technology, workflow, and business and work environment.
      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity
      • The denominator doesn't include PCI performed in outpatient clinics, and the developers don't say what proportion that is
      • Facilities with fewer than 25 procedures are excluded. This is important to patients and caregivers

      Equity

      Equity Rating
      Equity
      • Developers didn't address equity

      Use and Usability

      Use and Usability Rating
      Use and Usability
      • Can't picture explaining to my followers and subscribers how this measure would help them select practices, hospitals, or clinics. Insufficient, outdated data 
      • A long-used measure, yet no information was provided by the developer on how anyone has used the information for cost, quality, or access.

      Summary

      • No evaluation factor is met.
      • Doesn't address the impact that improvements in operator expertise, development of algorithms, and calcium modification procedures have when comparing rates over time or setting case mix. 
      • Outdated performance information, unable to adjust case mix
      • Patient and caregiver input insufficient.
      • Based on very old data; a lot has changed in technology, workflow, and the business and work environment.

      Submitted by Beth Godsey on Sat, 01/20/2024 - 18:04


      Importance

      Importance Rating
      Importance

      Agree with the staff review comments that limited literature was provided justifying the relationship between low quality of care and readmissions, and that the data used are dated (2010-2011). Additionally, the importance of this measure is difficult to assess due to the 30-day window. A 30-day readmission window introduces factors into the outcome variable that cannot be solely attributed to provider care, such as access to healthy food, a safe and supportive home environment, the ability to exercise, and a safe, supportive community. In turn, this limits the measure's ability to drive improvement in provider patient care.  

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      How would hospitals not participating in the National Cardiovascular Data Registry (NCDR) CathPCI registry be evaluated? Also, the specification stated that a probabilistic matching methodology was used to match data between the NCDR and CMS claims, which is fine for evaluating aggregate trends but lacks the exactness needed to drive individual hospital performance improvement. How would complete data be matched in the future if NCDR data and CMS claims are to be used for provider performance improvement? This needs to be thought through.  

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      The measure inclusion/exclusion criteria were well defined.

      The data was dated.  

      Split-half reliability ICC was 0.3711, below the threshold of 0.6

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      Data elements from NCDR CathPCI data elements were validated and considered a reliable clinical source.

       

      Developers do not report results of empiric validity testing performed at the accountable entity level; all validity testing performed appears to be at the data element level. Developer refers to face validity established via TEP but does not report the results of any vote.

       

      As mentioned in the Importance section, a 30-day readmission window introduces factors into the outcome variable that cannot be solely attributed to provider care, such as access to healthy food, a safe and supportive home environment, the ability to exercise, and a safe, supportive community. In turn, this limits the measure's ability to drive provider patient care. These unaccounted-for factors contribute to the low-performing C-statistics reported for the development, validation, and combined model performance (0.665, 0.663, and 0.668); C-statistics above 0.7 are considered acceptable. In turn, 30-day readmission measures are limited in their ability to reliably identify opportunities that healthcare providers can use to improve. Additionally, no clinical or statistical justification was provided as to why the 30-day window was used.  

      Equity

      Equity Rating
      Equity

      The developer does not address this criterion, putting the measure's accuracy and completeness with respect to equitable care at risk.

      Use and Usability

      Use and Usability Rating
      Use and Usability

      A 30-day readmission window introduces factors into the outcome variable that cannot be solely attributed to provider care, such as access to healthy food, a safe and supportive home environment, the ability to exercise, and a safe, supportive community. In turn, this limits the measure's ability to drive provider improvement in patient care.

       Benchmark reporting is not mentioned, but if this measure were to follow CMS's typical 30-day readmission annual reporting cycle, it is difficult for healthcare providers to leverage delayed data for performance improvement.  Quarterly or monthly performance would improve use, acceptance and usability.  

      Summary

      A 30-day readmission window introduces factors into the outcome variable that cannot be solely attributed to provider care, such as access to healthy food, a safe and supportive home environment, the ability to exercise, and a safe, supportive community. In turn, this limits the measure's ability to drive improvement in provider patient care.

       

      Benchmark reporting is not mentioned, but if this measure were to follow CMS's typical 30-day readmission annual reporting cycle, it is difficult for healthcare providers to leverage delayed data for performance improvement.  Quarterly or monthly performance would improve use, acceptance and usability.  

       

      Statistical performance needs improvement.    

      Submitted by Sandeep Das on Sun, 01/21/2024 - 12:50


      Importance

      Importance Rating
      Importance

      Although in principle improving the quality of care provided to patients undergoing PCI is a worthy goal, there is a paucity of data supporting a causal link between PCI "quality" and all-cause readmission. Furthermore, there are no data showing that active interventions to improve "care quality" will reduce all-cause readmission. 

       

       

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      Requires participation in the private/proprietary CathPCI registry. Developers note that addressing this is impractical. Requires linkage of those data with CMS data, which hasn't happened, and there is no plan to make that happen. Although in theory this is "addressable," there does not seem to be any current progress or plan along those lines, so I would call this "not met."

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      Agree with staff comments, as well as those of other reviewers. Use of very old data with no ability/plan to update. Suboptimal ICC

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      Numerous concerns, laid out by others as well: the use of all-cause readmissions, and exclusions that limit the population in ways that can have a distortionary effect on the results.

      Equity

      Equity Rating
      Equity

      Although not addressed by the application, there are significant equity implications, especially if this measure is used for public reporting or payment. SDOH are not explicitly included, but vulnerable populations may have higher-than-predicted all-cause readmission rates due to these factors, thereby making the centers that care for these patients look as though they provide lower-quality care.

      Use and Usability

      Use and Usability Rating
      Use and Usability

      Relies on connecting two data sources that have not been connected for a decade, with no path to connect them in the future. The metric itself is of no clear utility in guiding health system or MD decisions.

      Summary

      Given the noted concerns about validity and the lack of a clear use case, I do not see a reason to continue this metric going forward

      Submitted by Pamela Roberts on Sun, 01/21/2024 - 14:28


      Importance

      Importance Rating
      Importance

      The information provided showed that patient outcomes following percutaneous coronary intervention (PCI) can be affected by a variety of clinical decisions. It was also noted that readmission rates are affected by the quality of inpatient and outpatient care. There was a range of performance that showed variation between facilities. Readmission rates are useful for understanding quality of care.  

      The literature provided includes only one study that linked poor quality of care to readmission rates. It would be important to know whether other studies also show this.

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      Data elements are in defined EHR fields that are routinely collected by hospitals that participate in the registry.

      Claims data are used to capture readmissions.

      There is an issue if a facility does not participate in the NCDR registry. There is no fee to use the measure, but a facility could face challenges if it does not participate.

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      The measure is well defined and has a large sample. There are limitations in that it shows odds ratios, which are not typically used for reliability, and the data are aged (from 2010-2011). The data need to be updated, as what is occurring today may be very different from 2010-2011.

       

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      Validity of data was not addressed at the accountable entity level and only addressed at the data element level.

      Equity

      Equity Rating
      Equity

      This was not addressed but could be assessed with more up-to-date data from the registry and the use of SDOH measures.

      Use and Usability

      Use and Usability Rating
      Use and Usability

      Measure is not in use and the developer reports issues with obtaining more recent data.

      The performance and improvement on the measure are not clear, as they were not reported.

      Summary

      There are significant issues with this measure, including that the data are from 2010-2011 and that there are reliability and validity issues. I would not recommend moving this measure forward.

      Submitted by Amy Chin on Sun, 01/21/2024 - 17:01


      Importance

      Importance Rating
      Importance

      Agree with the staff comments. Most importantly, maintenance measure should provide evidence of a performance gap or measurement gap with performance scores on the measure as specified. The developer is unable to access a portion of data that is the basis for the measure other than data that is over 10 years old. More current data is needed to understand whether a performance gap exists. It is unclear whether data will become available to allow the developer to address this.

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      Data comes from Medicare claims and the NCDR CathPCI Registry. Medicare claims data are collected as a routine part of billing. Hospitals voluntarily submit data to the NCDR CathPCI Registry collected via chart abstraction. One limitation is the requirement to submit data to the registry which may be a barrier to use in certain applications.

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      Agree with staff assessment. The developer presents results from old data (2010-2011). The reliability testing falls below the acceptable threshold. Developer reported split-half reliability ICC of 0.3711. This does not meet the threshold of at least 0.6.

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      Developers established face validity through a TEP when the measure was first developed, but no discussion of the approach to assessing face validity with the TEP was described. The NCDR CathPCI data elements are validated through an audit process and are the source of the risk adjustment factors. Comparisons of the original results for the risk adjustment factors were made to the same risk factors in a model using more recent data (2010-2011) and showed little change in the risk factors between the two. Newer data are needed to understand whether the risk adjustment factors continue to demonstrate acceptable model performance.

       

      Equity

      Equity Rating
      Equity

      No information submitted

      Use and Usability

      Use and Usability Rating
      Use and Usability

      The measure is not currently in use. Registry participants receive regular benchmarking and feedback reports with comparisons to the national benchmark. Support is provided to participants, including calls, conferences, and support from clinical quality associates. As the measure is not in use, there is no information on measure feedback, considerations from measure feedback, progress on improvement, or unexpected findings. The developer does note that during the first year of measure implementation, there was no evidence of unintended consequences of measurement, such as avoiding high-risk cases or patients needing PCI. 

      Summary

      The measure developer has not been able to provide updated information on this measure because Medicare data, which are critical to calculating the measure, are unavailable to them. The lack of current information has prevented the developer from providing information on significant areas needed to assess the measure. In areas where data were provided, the data are from 2010-2011, which is quite old and may not be pertinent, as there have been many developments in Medicare policy and the broader institutional healthcare landscape since then. 

      Submitted by Pranavi Sreeramoju on Mon, 01/22/2024 - 10:02


      Importance

      Importance Rating
      Importance

      The importance of this measure is not clear. PCI is frequently an outpatient or a short stay procedure. The data supporting this measure need to be updated. The observed and predicted readmission rates are very close to each other – so it is not clear what gap the measure is looking to close.

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      The measure is not an eCQM and the effort for reporting this measure is not worth the return on investment. 

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      The data are over ten years old and have not been updated since. 

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      The data are over ten years old and have not been updated since. 

      Equity

      Equity Rating
      Equity

      There is no indication that social determinants of health were measured during the measure development process. 

      Use and Usability

      Use and Usability Rating
      Use and Usability

      The intended use of these data is not specified in the submission.

      Summary

      There would be no heartache if this measure were retired. Patient care would not be compromised and there would be one less measure to work with. 

      Submitted by Rosa Plasencia on Mon, 01/22/2024 - 11:55


      Importance

      Importance Rating
      Importance
      • Most recent performance data are from 2010-2011 (i.e., more than a decade old). Why is more recent data not cited?

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      no additional comments

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      no additional comments

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      no additional comments

      Equity

      Equity Rating
      Equity

      Developer did not address this optional criterion, although they note it addresses Equity.

      Use and Usability

      Use and Usability Rating
      Use and Usability

      no additional comments

      Summary

      no additional comments

      Submitted by Tera on Mon, 01/22/2024 - 16:05


      Importance

      Importance Rating
      Importance

      Developer "is not currently able to use this data source as Medicare claims are not currently available for performance measure reporting. This has limited our ability to update and report this measure."

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      The measure as specified requires participation in the NCDR CathPCI registry, and a parallel pathway for reporting has not been implemented

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      claims data use is currently restricted and unavailable to support performance measures

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      agree with staff comments

      Equity

      Equity Rating
      Equity

      Not addressed

      Use and Usability

      Use and Usability Rating
      Use and Usability

      Plans in place but not implemented 

      Summary

      None

      Submitted by Megan Guinn on Mon, 01/22/2024 - 16:23


      Importance

      Importance Rating
      Importance

      The data referenced are outdated, and there are insufficient data to prove causation.

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      Utilizes electronic health record clinical data, but also requires participation in a particular registry. 

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      Outdated data set

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      limited approach to validity testing/accountability

      Equity

      Equity Rating
      Equity

      not addressed

      Use and Usability

      Use and Usability Rating
      Use and Usability

      The data set is outdated, and no current data indicate a clinical concern or issue that needs to be addressed.

      Summary

      Agree with staff comments; additional data are needed to validate the need for this measure.