
Hospital Risk-Standardized Complication Rate Following Implantation of Implantable Cardioverter-Defibrillator (ICD)

CBE ID
0694
Endorsement Status
E&M Committee Rationale/Justification

Endorsement was removed due to no consensus. The committee raised concern with the lack of updated data to determine whether a gap exists and for scientific acceptability. The measure is also not in use, which makes it challenging to know if the measure is improving over time.

1.1 New or Maintenance
Previous Endorsement Cycle
Is Under Review
No
1.3 Measure Description

This measure provides hospital specific risk-standardized rates of procedural complications following the implantation of an Implantable Cardioverter-Defibrillator (ICD) in patients at least 65 years of age. The measure uses clinical data available in the National Cardiovascular Data Registry (NCDR) Electrophysiology Device Implant Registry (EPDI - formerly the ICD Registry) for risk adjustment linked with administrative claims data using indirect patient identifiers to identify procedural complications.

        • 1.5 Measure Type
          1.6 Composite Measure
          Yes
          1.7 Electronic Clinical Quality Measure (eCQM)
          1.8 Level Of Analysis
          1.10 Measure Rationale

          Not applicable

          1.20 Testing Data Sources
          1.25 Data Sources

          This measure relies on claims data. As of Fall 2023, claims data use is restricted and unavailable to support performance measures. Legislation to change this has been introduced.  

           

          The datasets used to create the measures are described below.  

            

          (1) NCDR EP Device Implant Registry (EPDI - formerly ICD Registry) data  

          The NCDR EP Device Implant Registry (EPDI - formerly ICD Registry) is a cardiovascular data registry which captures detailed information about patients at least 18 years of age undergoing ICD implantation. This includes demographics, comorbid conditions, cardiac status, and laboratory results. As of May 2015, the registry had collected data from 1,786 hospitals in the United States totaling over 1,330,000 implants (NCDR data outcome reports).   

            

          The registry, launched on June 30, 2005, was developed through a partnership of the Heart Rhythm Society (HRS) and the American College of Cardiology (ACC) in response to CMS’ expanded ICD coverage decision for primary prevention ICD therapy. Data included in the registry are collected by hospitals and submitted electronically on a quarterly basis to NCDR. The patient records submitted to the registry focus on acute episodes of care, from admission to discharge. The NCDR does not currently link patient records longitudinally across episodes of care.   

            

          The data collection form and the complete list of variables collected and submitted by hospitals can be found at www.ncdr.com. For more information on these data, please see the attached methodology report.  

            

          Of note, hospitals are only required to submit data on all primary prevention ICDs implanted in Medicare patients, and, of the 159 data elements collected by the NCDR EP Device Implant Registry (EPDI - formerly ICD Registry), only 54 are forwarded to CMS by the American College of Cardiology (ACC) to determine payment eligibility. Nevertheless, the majority of participating hospitals have opted to participate fully in the quality improvement aspect of the registry and submit all data elements on all patients undergoing ICD implantation.   

            

          (2) Medicare Data  

           

          IMPORTANT NOTE: ACC is not currently able to use this data source because Medicare claims are not available for performance measure reporting. This has limited our ability to update and report this measure.  

           

          The model was developed in a population of Medicare FFS beneficiaries but can be expanded to all ICD patients at least 65 years of age. We used the administrative claims data to identify complications.  

            

          (a) Part A inpatient and outpatient data: Part A data refers to claims paid for Medicare inpatient hospital care, outpatient hospital services, skilled nursing facility care, some home health agency services, and hospice care. For this measure, we used Part A data to identify ICDs implanted for admitted and non-admitted patients (i.e., hospital patients with observation status). For model development, we used 2007 Medicare Part A data to match patient stays associated with an ICD with comparable data from the NCDR EP Device Implant Registry (EPDI - formerly ICD Registry).   

            

          (b) Medicare Enrollment Database (EDB): This database contains Medicare beneficiary demographic, benefit/coverage, and vital status information. This dataset was used to obtain information on several inclusion/exclusion indicators, such as Medicare status on admission, and provided the ability to retrieve 90 days follow-up, linking patient Health Insurance Claim (HIC) number to the Part A data. These data have previously been shown to accurately reflect patient vital status (Fleming Fisher et al. 1992).  

        • 1.14 Numerator

          The outcome for this measure is one or more complications within 30 or 90 days (depending on the complication) following initial ICD implantation. The measure treats complications as a dichotomous (yes/no) variable; we are interested in whether or not a complication has occurred and not how many complications occurred in each hospital.

          1.14a Numerator Details

           

          The complications in this measure are defined below.

           

          Complications are identified using International Classification of Diseases, Clinical Modification diagnosis and procedure codes or Healthcare Common Procedure Coding System/Current Procedural Terminology (HCPCS/CPT) procedure codes as well as the Medicare Enrollment Database (vital status) as indicated below. This approach was developed by a CMS Technical Expert Panel of clinicians and methodologists who were charged with identifying a comprehensive claims-based approach to identifying serious procedural complications:  

            

          Complications identified within 30 days of device implant: 

            

          (1) Pneumothorax or hemothorax plus a chest tube  

          (2) Hematoma plus a blood transfusion or evacuation  

          (3) Cardiac tamponade or pericardiocentesis  

          (4) Death (Source: Medicare enrollment database)  

            

          Complications identified within 90 days of device implant: 

            

          (5) Mechanical complications requiring a system revision  

          (6) Device related infection  

          (7) Additional ICD implantation  
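
          A minimal sketch of the 30/90-day windowing logic above, in Python. The event-type names are illustrative placeholders, not the measure's actual ICD-CM/HCPCS/CPT code sets:

```python
from datetime import date

# Placeholder event types standing in for the measure's actual code sets
# (illustrative only; a real implementation maps ICD-CM/HCPCS/CPT codes).
EVENTS_30DAY = {"pneumothorax_with_chest_tube", "hematoma_with_transfusion",
                "tamponade_or_pericardiocentesis", "death"}
EVENTS_90DAY = {"mechanical_complication_revision", "device_infection",
                "additional_icd_implantation"}

def has_complication(implant_date, events):
    """Dichotomous outcome: True if any qualifying event falls in its window.

    events: iterable of (event_type, event_date) pairs.
    """
    for kind, event_date in events:
        days = (event_date - implant_date).days
        if kind in EVENTS_30DAY and 0 <= days <= 30:
            return True
        if kind in EVENTS_90DAY and 0 <= days <= 90:
            return True
    return False
```

          Because the measure is dichotomous, the function returns as soon as one qualifying complication is found rather than counting events.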

           

        • 1.15 Denominator

          The target population for this measure includes inpatient and outpatient hospital stays with ICD implants for patients at least 65 years of age who have matching information in the NCDR Electrophysiology Device Implant Registry (EPDI, formerly the ICD Registry). The time window can be specified from one to three years.  

          1.15a Denominator Details

          The measure cohort is defined below.

           

          -Implantation of cardiac resynchronization pacemaker without mention of defibrillation, total system (crt-p)  

          -Implantation of cardiac resynchronization defibrillator, total system (crt-d)  

          -Implantation or replacement of transvenous lead (electrode) into left ventricular coronary venous system  

          -Implantation or replacement of cardiac resynchronization pacemaker pulse generator only (crt-p)  

          -Implantation or replacement of cardiac resynchronization defibrillator pulse generator device only (crt-d)  

          -Implantation or replacement of automatic cardioverter/defibrillator, total system (aicd) 

          -Insertion, single chamber transvenous electrode ICD  

          -Insertion, dual chamber transvenous electrode ICD  

          -Repair, single chamber transvenous electrode ICD  

          -Repair, dual chamber transvenous electrode ICD  

          -Pocket revision ICD  

          -Initial pulse generator insertion only with existing dual leads  

          -Initial pulse generator insertion only with existing multiple leads  

          -Insertion of single or dual chamber ICD pulse generator  

          -Removal of single or dual chamber ICD pulse generator  

          -Insertion or repositioning of electrode lead(s) for single or dual chamber pacing ICD and insertion of pulse generator  

          -Removal pulse generator with replacement pulse generator only single lead system (transvenous)  

          -Removal pulse generator with replacement pulse generator only dual lead system (transvenous)  

          -Removal pulse generator with replacement pulse generator only multiple lead system (transvenous)  

        • 1.15b Denominator Exclusions

          (1) Previous ICD placement. Hospital stays in which the patient had an ICD implanted prior to the index hospital stay are excluded.  

          Rationale: Ideally, the measure would include patients with a prior ICD, as this is a population known to be at high risk of adverse outcomes. However, for these patients it is difficult to distinguish in the administrative data whether adverse events such as infection were present on admission or complications of the second ICD placement. In order to avoid misclassification, we exclude these patients from the measure.  

           

          (2) Previous pacemaker placement. Hospital stays in which the patient had a pacemaker placed prior to the index hospital stay are excluded.  

          Rationale: Some complications (infection or mechanical complication) may be related to a pacemaker that was removed prior to placement of an ICD. Ideally, the measure would include patients with a prior pacemaker, as this is a population known to be at higher risk of adverse outcomes. However, for these patients it is difficult to distinguish in the administrative data whether adverse events such as infection were present on admission or complications of the ICD placement. In order to avoid misclassification, we exclude these patients from the measure.  

           

          (3) Not Medicare Fee-for-service (FFS) patient on admission. Patient admissions in which the patient is not enrolled in Medicare FFS at the time of the ICD procedure.  

          Rationale: Outcome data are being derived only for Medicare FFS patients.  

           

          (4) Lack 90-day follow-up in Medicare FFS post-discharge. Patients who cannot be tracked for 90 days following discharge are excluded.  

          Rationale: There would not be adequate follow-up data to assess complications.  

           

          (5) Not the first claim in the same claim bundle. There are cases when several claims in the same hospital representing a single episode of care exist in the data together. These claims are bundled together and any claim other than the first is excluded.  

          Rationale: Inclusion of additional claims could lead to double counting of an index ICD procedure.  

          1.15c Denominator Exclusions Details

          Denominator exclusions are identified based on variables contained in the Standard Analytic File (SAF) or Enrollment Database (EDB). Of note, a hospital stay may satisfy multiple exclusion criteria.  

          (1) Previous ICD placement is a flag in the NCDR EP Device Implant Registry (EPDI - formerly the ICD Registry) that indicates whether or not a patient has an ICD present on admission.  

           

          (2) Previous pacemaker is a flag in the NCDR EP Device Implant Registry (EPDI - formerly the ICD Registry) that indicates whether or not a patient has a pacemaker present on admission.  

           

          (3) Not Medicare FFS patient on admission is determined by patient enrollment in both Part A and Part B in FFS using the Centers for Medicare & Medicaid Services (CMS) EDB.  

           

          (4) Lack 90-day follow-up in Medicare FFS post-discharge is determined by patient enrollment status in both Part A and Part B and in FFS using CMS’ EDB; the enrollment indicators must be appropriately marked for any month which falls within 90 days of hospital discharge or enrollment end date (this does not apply for patients who die within 90 days of the index hospital stay).  

           

          (5) Not the first claim in the same claim bundle is derived by examining inpatient claims located in the SAF; specifically, the fields for admission and discharge date and provider ID.  
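
          Taken together, the five exclusions reduce to a sequential filter over candidate stays. A minimal sketch, assuming hypothetical field names for the registry flags and the EDB/SAF-derived indicators described above:

```python
# Field names are hypothetical stand-ins for the registry flags (1, 2),
# EDB enrollment indicators (3, 4), and SAF claim-bundling logic (5).
def in_denominator(stay):
    """Return True if the hospital stay survives all five exclusions."""
    if stay["previous_icd"]:                     # (1) ICD present on admission
        return False
    if stay["previous_pacemaker"]:               # (2) pacemaker present on admission
        return False
    if not stay["medicare_ffs_on_admission"]:    # (3) not Part A + Part B FFS
        return False
    # (4) 90-day follow-up not required for patients who die within 90 days
    if not (stay["followup_90day"] or stay["died_within_90days"]):
        return False
    if not stay["first_claim_in_bundle"]:        # (5) avoid double counting
        return False
    return True
```

          A stay may satisfy several exclusion criteria at once; this sketch short-circuits on the first failing check, but a reporting implementation would typically record every applicable exclusion reason.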

        • OLD 1.12 MAT output not attached
          Attached
          1.13a Data dictionary not attached
          Yes
          1.16 Type of Score
          1.17 Measure Score Interpretation
          Better quality = Lower score
          1.18 Calculation of Measure Score

           

          The measure employs a hierarchical logistic regression model to create a hospital-level 30- or 90-day RSCR. In brief, the approach simultaneously models data at the patient and hospital levels to account for the variance in patient outcomes within and between hospitals (Normand & Shahian, 2007). At the patient level, it models the log-odds of hospital complications within 30 or 90 days of discharge using age, selected clinical covariates, and a hospital-specific intercept. At the hospital level, the approach models the hospital-specific intercepts as arising from a normal distribution. The hospital intercept represents the underlying risk of complications at the hospital, after accounting for patient risk. If there were no differences among hospitals, then after adjusting for patient risk, the hospital intercepts should be identical across all hospitals.  

            

          The RSCR is calculated as the ratio of the number of “predicted” to the number of “expected” complications, multiplied by the national unadjusted complication rate. For each hospital, the numerator of the ratio (“predicted”) is the number of complications within 30 or 90 days predicted on the basis of the hospital’s performance with its observed case mix, and the denominator (“expected”) is the number of complications expected on the basis of the nation’s performance with that hospital’s case mix. This approach is analogous to a ratio of “observed” to “expected” used in other types of statistical analyses. It conceptually allows for a comparison of a particular hospital’s performance given its case mix to an average hospital’s performance with the same case mix. Thus, a lower ratio indicates lower-than-expected complications or better quality and a higher ratio indicates higher-than-expected complications or worse quality.  

           

          The “predicted” number of complications (the numerator) is calculated by applying the estimated risk-factor coefficients to each patient’s risk factors, adding the estimated hospital-specific intercept, and summing the resulting predicted probabilities. The “expected” number (the denominator) is calculated the same way, but using the average (national) intercept in place of the hospital-specific intercept.  
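
          As a sketch, the ratio described above can be computed as follows, assuming the model coefficients and intercepts have already been estimated (the actual measure fits these with a hierarchical model; all names here are illustrative):

```python
import math

def logistic(x):
    """Inverse of the log-odds (logit) link used by the model."""
    return 1.0 / (1.0 + math.exp(-x))

def rscr(patients, hospital_intercept, average_intercept, betas, national_rate):
    """Risk-standardized complication rate for one hospital.

    patients: list of covariate vectors (the hospital's observed case mix).
    'predicted' uses the hospital-specific intercept; 'expected' uses the
    average (national) intercept applied to the same case mix.
    """
    predicted = sum(
        logistic(hospital_intercept + sum(b * x for b, x in zip(betas, xs)))
        for xs in patients)
    expected = sum(
        logistic(average_intercept + sum(b * x for b, x in zip(betas, xs)))
        for xs in patients)
    return (predicted / expected) * national_rate
```

          When the hospital-specific intercept equals the average intercept, the ratio is 1 and the RSCR equals the national unadjusted rate; a hospital intercept above the average pushes the RSCR above it.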

           

          Normand, Sharon-Lise T.; Shahian, David M. Statistical and Clinical Aspects of Hospital Outcomes Profiling. Statist. Sci. 22 (2007), no. 2, 206--226. doi:10.1214/088342307000000096. https://projecteuclid.org/euclid.ss/1190905519  

          1.19 Measure Stratification Details

          This measure is not stratified. 

          1.26 Minimum Sample Size

          This measure requires a minimum sample of 25 patients per facility.

        • Most Recent Endorsement Activity
          Management of Acute Events, Chronic Disease, Surgery, and Behavioral Health Fall 2023
          Initial Endorsement
          Removal Date
        • Measure Developer Secondary Point Of Contact

          Katie Goodwin
          American College of Cardiology
          2400 N St NW
          Washington, DC 20037
          United States

          • 2.1 Attach Logic Model
            2.2 Evidence of Measure Importance

            Complications following insertion of implantable cardioverter-defibrillators (ICDs) are an important patient outcome (Al-Khatib 2005, 2008; Curtis 2009; Peterson 2013) that reflects the quality of care delivered to patients. Complications following ICD insertion represent an increased risk of all-cause mortality. A retrospective NCDR ICD Registry study by Kipp et al. showed that complications during the index hospitalization occurred in 5.18% of patients; within 90 days of implantation, complications occurred in 7.34% of patients. The occurrence of complications within 90 days of ICD implantation was associated with increased risk of all-cause mortality, and of all-cause mortality or hospitalization, at 1 and 3 years. Patient, procedure, and hospital characteristics associated with mortality at 3 years after implantation were similar regardless of whether an acute procedural complication occurred (Kipp et al., 2018).

             

            The risk of adverse outcomes following ICD implantation varies markedly by the experience and training of the implanting physician, the device implanted, and the characteristics of the facility in which the procedure is performed (Curtis, 2009). Additionally, patient characteristics are associated with early mortality following ICD implantation: early mortality is associated with older age, advanced NYHA class, and atrial fibrillation. These factors may change as patient populations selected for ICD implantation continue to evolve and as heart failure medical therapy advances. Even in a large cohort, no single clinical feature powerfully predicted early mortality after ICD implantation; only 12.7% of patients with all three of the risk factors described above died within the first year (Garcia, 2020).  

             

            Infection is one of the most serious complications of ICD implantation and is closely associated with mortality and morbidity. Among patient-related factors, end-stage renal disease was consistently associated with the highest risk of infection; among procedure-related factors, the presence of a hematoma was associated with a nine-fold increased risk of infection. Pre-procedural preventive measures include patient selection, lead management (decreasing the number of leads), delaying the procedure to produce more favorable patient factors (e.g., improving glycemic control), management of anticoagulation and antiplatelet drugs, and sterile technique (EHRA international consensus document on how to prevent, diagnose, and treat cardiac implantable electronic device infections). The structures and processes of care that prevent these costly and undesirable complications are difficult to measure in a way that is reliable, valid, and meaningful to providers and patients. It is important that ICDs are provided to those patients for whom they are deemed appropriate based on guideline-based assessments and evaluations, and that hospitals ensure provision of the highest quality of care. Reporting the rate of procedure-related complications provides relevant information on whether these characteristics were achieved. 

             

            Measuring complications has improved treatment. For example, the safety and efficacy of venous access techniques for cardiac implantable electronic device (CIED) implantation has been a recent area of study. The risk of pneumothorax and lead failure was found to be lower when using cephalic vein cutdown rather than subclavian vein puncture. Additionally, axillary vein puncture and cephalic vein cutdown were both reported to be effective approaches for CIED lead implantation and offer the potential to avoid complications usually observed with traditional subclavian vein puncture (Atti et al., 2020). 

             

            ICDs are expensive and are utilized in patients with high-cost conditions such as coronary artery disease or heart failure. Reynolds et al. (2006) found that just over 10% of all patients with an ICD placed had a complication deemed attributable to the procedure. The cost to treat these unexpected complications was more than $7,000 per patient and frequently extended the patient’s hospital stay. 

             

            Al-Khatib SM, Greiner MA, Peterson ED, Hernandez AF, Schulman KA, Curtis LH. Patient and Implanting Physician Factors Associated With Mortality and Complications After Implantable Cardioverter-Defibrillator Implantation, 2002-2005. Circ Arrhythmia Electrophysiol. 2008;1:240-249. doi: 10.1161/CIRCEP.108.777888 

             

            Al-Khatib, SM, Lucas LF, Jollis JG, Malenka DJ, Wennberg DE. The relation between patients' outcomes and the volume of cardioverter-defibrillator implantation procedures performed by physicians treating Medicare beneficiaries.[see comment][erratum appears in J Am Coll Cardiol. 2005;46:1964]. Journal of the American College of Cardiology, 2005;46: p 1536-40.   

             

            Atti, V., Turagam, M., Garg, J., Koerber, S., Angirekula, A., Gopinathannair, R., Natale, A. and Lakkireddy, D., 2020. Subclavian and Axillary Vein Access Versus Cephalic Vein Cutdown for Cardiac Implantable Electronic Device Implantation. JACC: Clinical Electrophysiology, 6(6), pp.661-671. 

             

            Curtis JP, Luebbert JJ, Wang Y; et al. Association of physician certification and outcomes among patients receiving an implantable cardioverter-defibrillator. JAMA. 2009;301(16):1661-1670. 

             

            Blomström-Lundqvist, C., Traykov, V., Erba, P., Burri, H., Nielsen, J., Bongiorni, M., Poole, J., Boriani, G., Costa, R., Deharo, J., Epstein, L., Saghy, L., Snygg-Martin, U., Starck, C., Tascini, C., Strathmore, N., Kalarus, Z., Boveda, S., Dagres, N., Rinaldi, C., Biffi, M., Gellér, L., Sokal, A., Birgersdotter-Green, U., Lever, N., Tajstra, M., Kutarski, A., Rodríguez, D., Hasse, B., Zinkernagel, A. and Mangoni, E., 2019. European Heart Rhythm Association (EHRA) international consensus document on how to prevent, diagnose, and treat cardiac implantable electronic device infections—endorsed by the Heart Rhythm Society (HRS), the Asia Pacific Heart Rhythm Society (APHRS), the Latin American Heart Rhythm Society (LAHRS), International Society for Cardiovascular Infectious Diseases (ISCVID) and the European Society of Clinical Microbiology and Infectious Diseases (ESCMID) in collaboration with the European Association for Cardio-Thoracic Surgery (EACTS). European Journal of Cardio-Thoracic Surgery, 57(1), pp.e1-e31. 

             

            Garcia, R., Boveda, S., Defaye, P., Sadoul, N., Narayanan, K., Perier, M., Klug, D., Fauchier, L., Leclercq, C., Babuty, D., Bordachar, P., Gras, D., Deharo, J., Piot, O., Providencia, R., Marijon, E. and Algalarrondo, V., 2020. Early mortality after implantable cardioverter defibrillator: Incidence and associated factors. International Journal of Cardiology, 301, pp.114-118. 

             

            Kipp R, Hsu JC, Freeman J, Curtis J, Bao H, Hoffmayer KS. Long-term morbidity and mortality after implantable cardioverter-defibrillator implantation with procedural complication: A report from the National Cardiovascular Data Registry. Heart Rhythm. 2018;15(6):847-854. doi:10.1016/j.hrthm.2017.09.043 

             

            Reynolds MR, et al. Complications among Medicare beneficiaries, receiving implantable cardioverter-defibrillators.  J Am Coll Cardiol. 2006; 47:2493-2497. 

             

            Peterson PE, et al. Association of single- vs dual-chamber ICDs with mortality, readmissions and complications among patients receiving an ICD for primary prevention. JAMA. 2013;309:2025-2034. 

             

            Additional relevant articles: 

            Pokorney SD, et al. Primary prevention implantable cardioverter-defibrillators in older racial and ethnic minority patients. Circ Arrhythm Electrophysiol. 2015 Feb;8(1):145-51 

             

            Dodson, JA, et al. 2014. Developing a Risk Model for in-Hospital Adverse Events following ICD Implantation: A Report from the NCDR® Registry. Journal of the American College of Cardiology, 63(8), 788–796. doi:10.1016/j.jacc.2013.09.079 

          • 2.6 Meaningfulness to Target Population

            This measure was developed with input from a technical expert panel that includes patient and caregiver representation. Generally, patients indicate that outcomes such as complications following a procedure are useful for decision-making purposes and we believe that this measure would be found meaningful by them. 

          • 2.4 Performance Gap

            See Table 1 below.

            Table 1. Performance Scores by Decile
            Performance Gap
            Overall mean performance score: 5.7 (minimum–maximum: 0–17.8)
            N of entities: 1,792
            N of persons / encounters / episodes: 67,080
            • 3.1 Feasibility Assessment

              Not applicable during the Fall 2023 cycle.

              3.3 Feasibility Informed Final Measure

              The data elements required to generate this measure are coded by an individual other than the person obtaining the original information (e.g., DRG, ICD-10 codes on claims). All data elements are available in defined fields in electronic clinical data (e.g., clinical registry). This measure uses clinical data from the NCDR Electrophysiology Device Implant Registry (EPDI, formerly the ICD Registry) for risk adjustment, and those data are linked to CMS administrative claims data to identify ICD-related complications.  

               

              During initial measure testing when considering that the administrative database may be subject to coding errors and variation in coding practices within and across care settings, the ICD measure development team at YNHHSC/CORE chose to conduct a chart validation study. The goal was to determine whether ICD-9-CM diagnosis and procedure codes reported on Medicare claims and used in the measure specifications accurately identify patients experiencing ICD complications within 30 or 90 days of ICD implantation as reported in the medical charts. This approach required obtaining medical records of patients who had an ICD implanted from participating hospitals, abstracting data related to ICD complications (including number of complications, timing, severity and treatment), conducting a head-to-head comparison of data between Medicare claims and medical records to assess the degree of agreement, and finally, where appropriate, adjusting the list of codes and/or the cohort definition in the ICD measure specifications to improve the agreement.  

               

              We calculated the sample size requirement based on the desired degree of agreement between medical records and claims data, which can be categorized as fair, moderate, substantial, or almost perfect depending on the magnitude of the kappa coefficient. Our initial calculation was based on achieving a “substantial” degree of agreement and accounted for a within-hospital correlation coefficient of 0.03. This would have required approximately 860 medical records. To compensate for missing charts, we added 10–20 percent to the required sample size, which increased the required number to nearly 1,000 charts and would have doubled our budgetary allotment. We therefore decided to keep the sample size at 500 medical records from 9 candidate hospitals. This sample size would allow a qualitative assessment of the ICD-9-CM codes used in the claims model while meeting NQF review committee standards, considering budgetary and time constraints.  

               

              Thus, our approach was to recruit 9 hospitals and request copies of medical charts for approximately 60 Medicare FFS patients who had an ICD implanted between 2005 and 2007, 30 with and 30 without complications at each hospital. This number also accounted for 10 to 20 percent of medical records that may be missing. Given the low ICD complication rate, we selected sites that had a minimum of 25 cases with complications over the three-year period.  

              Although we planned to review approximately 540 charts from 9 hospitals, the final sample size was 411. One of the 9 hospitals withdrew from the project and did not provide its charts, though it had previously signed both a data use agreement (DUA) and a business associate agreement (BAA). Because it withdrew 9 months into the study, there was insufficient time remaining in the contract to recruit a replacement hospital. The remaining 8 hospitals provided 411 of the 480 requested charts, with 69 (14.4%) missing.  

               

              Summary of Study and Findings:  

              • 9 Hospitals agreed to participate in the study; 8 completed the study and one withdrew  
              • Of the 480 charts requested, 411 were obtained from 8 hospitals (14.4% missing)  
              • Charts were abstracted by professional chart abstractors at an independent company, Information Collection Enterprises, LLC (ICE)  
              • We excluded 95 cases because they had a previous ICD (consistent with current measure specifications). Thus, the study was based on 316 patients  
              • We identified 149 patients with complications (1 or more complication) in the chart, while administrative codes identified 166 patients with complications; 144 patients with complications were identified by both charts and codes; 22 by codes only; and 5 by charts only.  
              • These findings resulted in an overall agreement between chart and claims (based on all paired ratings) of 91.5% [(144 Yes/Yes + 145 No/No) / 316 total = 0.915], with a kappa coefficient of 0.83 (0.7865–0.8907), which is in the “almost perfect” range. A depiction of the overall agreement can be found in section 2b2.2 of the testing form (figure 1 and table 2).  
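
              The reported agreement statistics can be reproduced from the 2×2 counts above (144 identified by both, 22 by codes only, 5 by charts only, and, by subtraction, 145 by neither):

```python
# 2x2 chart-vs-claims table from the validation study.
both, codes_only, charts_only, neither = 144, 22, 5, 145
n = both + codes_only + charts_only + neither          # 316 patients

observed = (both + neither) / n                        # overall agreement
p_chart = (both + charts_only) / n                     # 149/316 positive per chart
p_codes = (both + codes_only) / n                      # 166/316 positive per claims
# Chance agreement under independent marginals, then Cohen's kappa.
expected = p_chart * p_codes + (1 - p_chart) * (1 - p_codes)
kappa = (observed - expected) / (1 - expected)
# observed ~= 0.915 and kappa ~= 0.83, matching the reported values
```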

               

              We examined all cases of disagreement between the charts and the claims for each complication. Complications reported in the charts but not identified in the claims were due to either missing codes from our measure specifications or a failure to report the complication in the claims (e.g. evacuation of a hematoma). Examination of cases where the complication was reported in the claims but not in the charts, revealed that the complication was not related to the ICD but to another procedure, device, or medical condition. Based on these results, we made the following changes to the cohort and definitions of complications:  

              • Added the following administrative claim codes to the measure specifications to capture more mechanical complications with revision: 996.72; 39.94; 37.77  
              • Excluded patients with a previous pacemaker from the cohort, considering the lack of availability of present-on-admission codes at this time  


            • 3.4a Fees, Licensing, or Other Requirements

              The ACCF’s National Cardiovascular Data Registry (NCDR) program provides evidence-based solutions for cardiologists and other medical professionals committed to excellence in cardiovascular care. NCDR hospital participants receive confidential benchmark reports that include access to measure macro and micro specifications, the eligible patient population, exclusions, and model variables (when applicable). In addition to hospital sites, NCDR Analytic and Reporting Services provides consenting hospitals’ aggregated data reports to interested federal and state regulatory agencies, multi-system provider groups, third-party payers, and other organizations that have an identified quality improvement initiative supporting NCDR-participating facilities. Lastly, the ACCF also allows for licensing of the measure specifications outside of the registry.  

               

It should be noted that centers must already participate in this specific registry for reimbursement purposes, so almost all hospitals that implant ICDs in Medicare populations currently participate; hence there is no additional cost.  

               

Measures that are aggregated by ACCF and submitted to the CBE are intended for public reporting, and therefore there is no charge for a standard export package. However, on a case-by-case basis, requests for modifications to the standard export package can be accommodated for a separate charge.  

              3.4 Proprietary Information
              Not a proprietary measure and no proprietary components
              • 4.1.3 Characteristics of Measured Entities

                For this measure, hospitals are the measured entities. All non-federal, acute inpatient US hospitals (including territories) that participate in the American College of Cardiology (ACC) NCDR’s ICD Registry and care for Medicare Fee-for-Service (FFS) beneficiaries who are 65 years of age or older are included. The number of measured entities (hospitals) varies by testing type as described in the question above. 

                4.1.1 Data Used for Testing

                The specifications for this measure have not changed since the prior review.

                 

Several sections of this application for this measure could not be updated, including information on reliability and validity. This uniquely valuable measure was developed when access to CMS claims data was feasible. Current data access restrictions on the same CMS claims data prevent ACC from conducting the patient matching needed to assess long-term follow-up. Imposing requirements on hospitals to acquire follow-up data themselves during the past two-plus years of the pandemic-induced hospital crisis was not possible. Thus, ACC-NCDR was prevented from linking claims data to our registry data.  

                 

                The existing data set used was a clinical registry, the National Cardiovascular Data Registry (NCDR) ICD Registry, which has since been renamed to the EP Device Implant (EPDI) Registry. Some states and healthcare systems mandate participation. Rigorous quality standards are applied to the data and both quarterly and ad hoc performance reports are generated for participating centers to track and improve their performance.  

                 

                The measure links clinical data from NCDR to Medicare claims data to ascertain complications.   

                 

The measure reliability dataset linked the ICD Registry and Medicare Part A claims data from 2010Q2-2011Q4. The combined two-year sample included 43,711 admissions to 1,279 hospitals, with 21,856 admissions to 1,254 hospitals in one randomly selected sample and 21,855 admissions to 1,246 hospitals in the remaining sample for patients aged 65 years and older. After excluding hospitals with fewer than 25 cases in each sample, the first sample contained 297 hospitals and the second sample contained 298 hospitals. In addition to being used for reliability testing, the linked dataset was used for measure exclusions testing. 

                 

                These analyses used a cohort of patients undergoing ICD placement for whom NCDR ICD Registry data were linked with corresponding administrative claims data. However, we also conducted additional analyses to meet newer testing requirements, and these analyses were performed using comparable linked data from 2010-2011. Details are provided below.  

                 

                Reliability testing and exclusions testing  

The measure reliability dataset linked the ICD Registry and Medicare Part A claims data for claims between 2010Q2 and 2011Q4. The sample included ICD placements performed in 1,279 hospitals in a cohort of 43,711 Medicare FFS patients aged 65 years and older. We then randomly split the sample, leaving 21,856 admissions to 1,254 hospitals in one randomly selected sample and 21,855 admissions to 1,246 hospitals in the remaining sample. After excluding hospitals with fewer than 25 cases in each sample, the first sample contained 297 hospitals and the second sample contained 298 hospitals. 

                 

                Validity testing  

                 

A summary of validity testing undertaken was provided in the Feasibility section of this form. A chart validation study was completed to determine whether ICD-9-CM diagnosis and procedure codes reported on Medicare claims and used in the measure specifications accurately identified patients experiencing ICD complications within 30 and 90 days of ICD implantation, as reported in the medical charts. The study found an overall agreement between chart and claims (based on paired ratings) of 91.5%.  

                 

                Measure development and risk-adjustment dataset  

In measure development, we identified ICD procedures in the NCDR ICD Registry in which the patient was discharged from the hospital between April 2010 and December 2011. We merged ICD admissions in the NCDR ICD Registry data with ICD admissions in Medicare claims data to derive cohorts for development, using probabilistic matching methodology. There were 21,855 cases discharged from the 1,226 hospitals in the validation sample, which had a crude complication rate of 6.71%. 
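The registry-to-claims linkage described above is not spelled out in this form. As a minimal sketch of the general idea, the following uses hypothetical indirect-identifier fields (hospital, procedure date, date of birth, sex) and a simple exact-match rule in place of the full probabilistic algorithm:

```python
from datetime import date

def link_key(rec):
    """Indirect-identifier key; field names here are hypothetical."""
    return (rec["hospital_id"], rec["procedure_date"], rec["dob"], rec["sex"])

def match_registry_to_claims(registry, claims):
    """Pair each registry record with the claims record sharing the same
    indirect identifiers, keeping only unambiguous one-to-one links
    (a simplification of probabilistic matching)."""
    index = {}
    for c in claims:
        index.setdefault(link_key(c), []).append(c)
    return [(r, index[link_key(r)][0])
            for r in registry
            if len(index.get(link_key(r), [])) == 1]
```

A true probabilistic match would instead score partial agreement across identifiers and accept links above a threshold; the exact-match version shown here only illustrates the data flow.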

                 

                4.1.4 Characteristics of Units of the Eligible Population

                Please refer to table 1 in the attachment.

                 
                 

                 

                4.1.2 Differences in Data

                Information on the differences with the data used is described above. 

              • 4.2.2 Method(s) of Reliability Testing

                Patient or Encounter-Level Reliability 

                 

                In constructing the measure we aim to utilize only those data elements from the claims that have both face validity and reliability. We avoid the use of fields that are thought to be coded inconsistently across hospitals or providers. Specifically, we use fields that are consequential for payment and which are audited. We identify such variables through empiric analyses and our understanding of CMS auditing and billing policies and seek to avoid variables which do not meet this standard. For example, “discharge disposition” is a variable in Medicare claims data that is not thought to be a reliable variable for identifying a transfer between two acute care facilities. Thus, we derive a variable using admission and discharge dates as a surrogate for “discharge disposition” to identify hospital admissions involving transfers. This allows us to identify these admissions using variables in the claims data which have greater reliability than the “discharge disposition” variable. In addition, CMS has in place several hospital auditing programs used to assess overall claims code accuracy, to ensure appropriate billing, and for overpayment recoupment. CMS routinely conducts data analysis to identify potential problem areas and detect fraud, and audits important data fields used in our measures, including diagnosis and procedure codes and other elements that are consequential to payment. 
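The derived transfer variable described above can be sketched as follows. The (hospital_id, admit_date, discharge_date) structure and the one-day allowable gap are illustrative assumptions, not the measure's actual specification:

```python
from datetime import date, timedelta

def is_transfer(first_stay, second_stay, max_gap_days=1):
    """Flag a hospital-to-hospital transfer from claim dates alone, rather
    than from the less reliable "discharge disposition" field. Each stay is
    a (hospital_id, admit_date, discharge_date) tuple; the one-day gap rule
    is an illustrative assumption, not the CMS rule."""
    h1, _, discharge1 = first_stay
    h2, admit2, _ = second_stay
    gap = admit2 - discharge1
    return h1 != h2 and timedelta(0) <= gap <= timedelta(days=max_gap_days)
```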

                 

In addition, as an example of some of the methods that could be used to ensure data quality, we describe the NCDR’s existing Data Quality Program (DQP). The two main components of the DQP are complementary: the Data Quality Report (DQR) and the Data Audit Program (DAP). The DQR process assesses the completeness and validity of the electronic data submitted by participating hospitals. Hospitals must achieve >95% completeness of specific data elements identified as ‘core fields’ to be included in the registry’s data warehouse for analysis. The ‘core fields’ include the variables in our risk adjustment models. The process is iterative, providing hospitals with the opportunity to correct errors and resubmit data for review and acceptance into the data warehouse. The DAP consists of annual on-site chart review and data abstraction. Among participating hospitals that pass the DQR, auditors review random charts of 10% of submitted cases.  
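The DQR completeness threshold can be illustrated with a short sketch; the record structure and field names are hypothetical:

```python
def field_completeness(records, field):
    """Fraction of records with a non-missing value for `field`."""
    filled = sum(1 for r in records if r.get(field) not in (None, ""))
    return filled / len(records)

def passes_dqr(records, core_fields, threshold=0.95):
    """True when every core field exceeds the completeness threshold,
    mirroring the >95% rule described above."""
    return all(field_completeness(records, f) > threshold for f in core_fields)
```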

                 

                Finally, we assess the reliability of the data elements by comparing model variable frequencies and odds ratios in two years of data. 

                 

                Accountable Entity-Level Reliability 

                 

                The reliability of a measurement is the degree to which repeated measurements of the same entity agree with each other. For measures of hospital performance, the measured entity is naturally the hospital, and reliability is the extent to which repeated measurements of the same hospital give similar results. In line with this thinking, our approach to assess reliability is to consider the extent to which assessments of a hospital using different, but randomly selected subsets of patients, produce similar measures of hospital performance. That is, we take a "test-retest" approach in which hospital performance is measured once using a random subset of patients, then measured again using a second random subset exclusive of the first, and finally comparing the agreement between the two resulting performance measures across hospitals (Rousson et al., 2002). 

                 

For test-retest reliability of the measure score, from the study cohort we randomly sampled half of the patients within each hospital, calculated the measure for each hospital using the first half, and then repeated the calculation using the second half. Thus, each hospital is measured twice, but each measurement is made using an entirely distinct set of patients. To the extent that the calculated measures of these two subsets agree, we have evidence that the measure is assessing an attribute of the hospital, not of the patients. Specifically, we calculated the risk-standardized complication rate (RSCR) for each hospital in each sample and quantified the agreement of the two RSCRs using the intra-class correlation coefficient (ICC) as defined by Shrout and Fleiss (1979), assessing the values according to conventional standards (Landis and Koch, 1977). 
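As a sketch of the split-half agreement calculation, the following implements the one-way random-effects ICC(1,1); Shrout and Fleiss define several ICC forms and the specific form used for the RSCRs is not restated here, so treat this as illustrative:

```python
from statistics import mean

def icc_one_way(pairs):
    """One-way random-effects ICC(1,1) of Shrout & Fleiss for two
    measurements per entity; `pairs` is a list of (rscr_a, rscr_b)
    values, one pair per hospital."""
    n, k = len(pairs), 2
    grand = mean(x for p in pairs for x in p)
    row_means = [mean(p) for p in pairs]
    # between-hospital and within-hospital mean squares
    msb = k * sum((rm - grand) ** 2 for rm in row_means) / (n - 1)
    msw = sum((x - rm) ** 2 for p, rm in zip(pairs, row_means) for x in p) / n
    return (msb - msw) / (msb + (k - 1) * msw)
```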

                 

Using two independent samples provides an honest estimate of the measure’s reliability; two random but potentially overlapping samples would exaggerate the agreement. Moreover, because our final measure is derived using hierarchical logistic regression, and a known property of hierarchical logistic regression models is that small-volume hospitals contribute less 'signal', a split sample using a single measurement period likely introduces extra noise, potentially underestimating the actual test-retest reliability that would be achieved if the measure were reported using additional years of data. Furthermore, the measure is specified for the entire ICD population, but we tested it only in the subset of Medicare FFS patients for whom information about vital status was available.  

                 

                References: 

                1) Rousson V, Gasser T, Seifert B. Assessing intrarater, interrater and test–retest reliability of continuous measurements. Statistics in Medicine 2002;21:3431-3446. 

                2) Shrout P, Fleiss J. Intraclass correlations: uses in assessing rater reliability. Psychological Bulletin 1979;86:420-428. 

3) Landis J, Koch G. The measurement of observer agreement for categorical data. Biometrics 1977;33:159-174. 

                4.2.3 Reliability Testing Results

                Patient or encounter-level reliability results 

                Overall, risk factor frequencies changed little across years, and there were no notable differences in the odds ratios across years of data (please refer to table 2 in the attachment).  

                 

The two split samples were calculated over the same timeframe to avoid the potential for changes in hospital performance over time. After splitting the cohort into two random samples, we compared measure scores calculated at hospitals with at least 25 cases in both random samples. The distribution of hospital performance was similar in the two samples (figure below), and there was a fair correlation between hospital performance assessed in the two samples (r = 0.1494).

                 

                Accountable entity-level reliability results 

                 

In the most recent years of data (2010Q2-2011Q4), there were 43,711 admissions in the combined two-year sample, with 21,856 admissions to 1,254 hospitals in the first randomly selected sample (mean RSCR 7.01%) and 21,855 admissions to 1,246 hospitals in the second randomly selected sample (mean RSCR 6.58%). The agreement between the two RSCRs for each hospital was 0.1494, which according to the conventional interpretation is “slight” (Landis & Koch, 1977). The intra-class correlation coefficient is based on a split sample of two years of data, resulting in a volume of patients in each sample equivalent to only one year of data, whereas the measure is likely to be reported with a full two years of data. 

                 

                Reference.  

Landis J, Koch G. The measurement of observer agreement for categorical data. Biometrics 1977;33:159-174. 

                4.2.4 Interpretation of Reliability Results

The stability over time of the risk factor frequencies and odds ratios indicates that the underlying data elements are reliable. Additionally, the ICC score demonstrates agreement across samples that is only “slight” by conventional standards, though it was obtained using a “strict” approach to assessment and would likely improve with a greater sample size. 

              • 4.3.3 Method(s) of Validity Testing

                Measure validity is demonstrated through prior validity testing done on our other measures, through use of established measure development guidelines, by systematic assessment of measure face validity by a technical expert panel (TEP) of national experts and stakeholder organizations, and through registry data validation. 

                 

                Validity of Registry Data 

Data element validity testing was done on the specified measure by comparing with variables in the ACC audit program. The NCDR ICD Registry has an established DQP that serves to assess and improve the quality of the data submitted to the registry. There are two complementary components to the Data Quality Program: the Data Quality Report (DQR) and the Data Audit Program (DAP). The DQR process assesses the completeness of the electronic data submitted by participating hospitals. Hospitals must achieve >95% completeness of specific data elements identified as “core fields” to be included in the registry’s data warehouse for analysis. The “core fields” encompass the variables included in our risk adjustment models. The process is iterative, providing hospitals with the opportunity to correct errors and resubmit data for review and acceptance into the data warehouse. All data for this analysis passed the DQR completeness thresholds.  

                 

                The DAP consists of annual on-site chart review and data abstraction. Among participating hospitals that pass the DQR for a minimum of two quarters, at least 5% are randomly selected to participate in the DAP. At individual sites, auditors review charts of 10% of submitted cases. The audits focus on variables that are used in the NCDR risk-adjusted in-hospital mortality model including demographics, comorbidities, cardiac status, coronary anatomy, and ICD status. However, the scope of the audit could be expanded to include additional fields. The DAP includes an appeals process for hospitals to dispute the audit findings.  

                 

                We also examined the temporal variation of the standardized estimates and frequencies of the variables in the development and validation models.  

                 

To assess the predictive ability of the model, we grouped patients into deciles of predicted 30- or 90-day complication risk and compared predicted with observed complications for each decile in the derivation cohort (figure 3 in the attachment). 

                 

As noted in more detail in the Feasibility section of this form, a chart validation study was completed to determine whether ICD-9-CM diagnosis and procedure codes reported on Medicare claims and used in the measure specifications accurately identified patients experiencing ICD complications within 30 and 90 days of ICD implantation, as reported in the medical charts. The study reported an overall agreement between chart and claims (based on paired ratings) of 91.5%. Table 3 and figure 2 depict the agreement and disagreement for patients with complications identified by charts and claims.  
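The overall paired agreement reported above can be computed as below; Cohen's kappa is included as a common chance-corrected companion statistic, not a result from the study:

```python
def paired_agreement(chart, claims):
    """Overall agreement between chart and claims complication flags
    (equal-length lists of 0/1, one entry per patient)."""
    return sum(c == m for c, m in zip(chart, claims)) / len(chart)

def cohens_kappa(chart, claims):
    """Chance-corrected agreement, shown for illustration only."""
    n = len(chart)
    po = paired_agreement(chart, claims)
    p1, p2 = sum(chart) / n, sum(claims) / n
    pe = p1 * p2 + (1 - p1) * (1 - p2)  # expected agreement by chance
    return (po - pe) / (1 - pe)
```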

                 

                Validity as Assessed by External Groups 

During original measure development, in alignment with the CMS Measures Management System (MMS), we released a public call for nominations and convened a TEP. The purpose of convening the TEP was to obtain input and feedback during measure development from a group of recognized experts in relevant fields. The TEP represented physician, consumer, hospital, and purchaser perspectives, and was chosen to reflect a diverse set of perspectives and backgrounds.  

                 

                4.3.4 Validity Testing Results

                See table 4 in the attachment. The performance of the derivation and validation samples is similar. The areas under the receiver operating characteristic (ROC) curve are 0.640 and 0.642, respectively, for the two samples. In addition, they are similar with respect to predictive ability. For the derivation sample, the predicted complication rate ranges from 3% in the lowest predicted decile to 14% in the highest predicted decile, a range of 11%. For the validation sample, the corresponding range is 3% to 14%, also a range of 11%. 

                4.3.5 Interpretation of Validity Results

                The audits conducted by the ACC support the overall validity of the data elements included in this measure. The data elements used for risk adjustment were consistently found for all patients and were accurately extracted from the medical record.  
                 

                Additionally, the frequencies and regression coefficients are fairly consistent over the two years of data. Also, there was excellent correlation between predicted and observed complications.  

              • 4.4.1 Methods used to address risk factors
                4.4.2 Conceptual Model Rationale

                See table 5 in the attachment. We developed a parsimonious model that included key variables previously shown to be associated with complications following ICD implantation. Importantly, the variables included in the risk model were fully harmonized with the NCDR’s existing risk model used to provide hospitals with risk adjusted in-hospital adverse events. In the development of that model, a team of clinicians had reviewed all variables in the NCDR ICD Registry database (a copy of the data collection form and the complete list of variables collected and submitted by hospitals can be found at www.ncdr.com and also in this application).  

                 

Based on clinical review informed by the literature, a total of 15 variables were determined to be appropriate for consideration as candidate variables. We used logistic regression with stepwise selection (entry p<0.05; retention p<0.01) for variable selection. We also assessed the direction and magnitude of the regression coefficients. This resulted in a final risk-adjusted complication model that included 9 variables (table 5). To harmonize the models, we elected to apply this approach to risk adjustment to the 30/90-day complications risk model. Several variables were not significantly associated with risk of complications at 30/90 days, but we elected to retain them in the model for consistency. We compared hospitals’ RSCRs calculated using this model with the output from a risk model that had been developed specifically for 30/90-day complications and found them to be almost identical (correlation coefficient 0.996).  

                 

For categorical variables with missing values, the reference-group value was assigned. The percentage of missing values for all categorical variables was very small (<1%), and they were imputed to specific categories based on our previous experience. There were three continuous variables with missing values: hemoglobin (HGB, 1.9%), BUN (1.3%), and sodium (1.1%); these missing values were imputed as the median of the non-missing values of the corresponding variable. 
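The imputation rules just described can be sketched as follows, with illustrative field names:

```python
from statistics import median

def impute(records, continuous_fields, categorical_reference):
    """Fill missing values as described above: continuous fields get the
    median of the non-missing values; categorical fields get the
    reference-group value. Field names are illustrative."""
    for field in continuous_fields:
        med = median(r[field] for r in records if r[field] is not None)
        for r in records:
            if r[field] is None:
                r[field] = med
    for field, ref in categorical_reference.items():
        for r in records:
            if r[field] is None:
                r[field] = ref
    return records
```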

                 

                4.4.3 Risk Factor Characteristics Across Measured Entities

                Information on the descriptive statistics on the distribution across the measured entities of the risk variables identified was not previously required and as a result, we do not have these data to share.  

                4.4.4 Risk Adjustment Modeling and/or Stratification Results

                See table 6 and 7 in the attachment. 

                4.4.5 Calibration and Discrimination

                Approach to assessing model performance 

                During measure development, we computed three summary statistics for assessing model performance (Harrell and Shih, 2001) for the development and validation cohort: 

                Discrimination Statistics: 

(1) Area under the receiver operating characteristic (ROC) curve: the c-statistic is the probability that the model assigns a higher predicted risk to a randomly selected patient with the outcome than to a randomly selected patient without it; it measures how accurately the model distinguishes between patients with and without the outcome. 

(2) Predictive ability: discrimination in predictive ability measures the ability to distinguish high-risk subjects from low-risk subjects; therefore, we would hope to see a wide range between the lowest and highest deciles of predicted risk. 

                 

                Calibration Statistics: 

(3) Over-fitting indices: over-fitting refers to the phenomenon in which a model accurately describes the relationship between predictor variables and outcome in the development dataset but fails to provide valid predictions in new patients. 

We compared the model performance in the development sample with its performance in another sample of half of the patients randomly selected from the whole 2010Q2-2011Q4 study cohort. There were 21,856 cases discharged from the 1,222 hospitals in the 2010-2011 validation dataset. This validation sample had a crude complication rate of 6.86%. We also computed statistics (1) and (2) for the current measure cohort, which includes discharges from 2010Q2-2011Q4. 

                 

                For the derivation cohort the results are summarized below: 

                C-statistic=0.640 

                Predictive ability (lowest decile %, highest decile %): 4.05%, 25.08% 

                 

                For the validation cohort the results are summarized below: 

                C-statistic=0.642 

                Predictive ability (lowest decile %, highest decile %): 3.80%, 23.80% 

                 

Calibration results for the validation cohort are summarized below: 

Calibration (y0, y1): (0.03, 1.02) 

                 

                The risk decile plot is a graphical depiction of the deciles calculated to measure predictive ability. Below, we present the risk decile plot showing the distributions for the current measure cohort.  See figure 3 in the attachment.
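The decile grouping behind the risk decile plot can be sketched as:

```python
def decile_table(predicted, observed, n_bins=10):
    """Group patients into deciles of predicted risk and return
    (mean predicted risk, observed event rate) for each decile,
    the quantities compared in a risk decile plot."""
    order = sorted(range(len(predicted)), key=lambda i: predicted[i])
    size = len(order) // n_bins
    rows = []
    for d in range(n_bins):
        # last bin absorbs any remainder from uneven division
        idx = order[d * size:] if d == n_bins - 1 else order[d * size:(d + 1) * size]
        rows.append((sum(predicted[i] for i in idx) / len(idx),
                     sum(observed[i] for i in idx) / len(idx)))
    return rows
```

Plotting observed rate against mean predicted risk per decile yields the calibration picture described in the text; close tracking along the diagonal indicates good calibration.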

                 

                Discrimination Statistics 

                 

The C-statistics of 0.640 and 0.642 indicate fair model discrimination in the derivation and validation cohorts. Complications, as opposed to other outcomes such as mortality, consistently have a lower c-statistic, even in medical-record models. This is likely because complications are determined less by patient comorbidities and more by health system factors. The model showed a wide range between the lowest and highest deciles, indicating the ability to distinguish high-risk patients from low-risk patients.  
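A minimal sketch of the c-statistic as the probability of correctly ranking a random event/non-event pair (equivalent to the area under the ROC curve):

```python
def c_statistic(predicted, observed):
    """Probability that a randomly chosen patient with a complication is
    assigned a higher predicted risk than one without; ties count half.
    Equivalent to the area under the ROC curve."""
    pos = [p for p, y in zip(predicted, observed) if y == 1]
    neg = [p for p, y in zip(predicted, observed) if y == 0]
    concordant = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return concordant / (len(pos) * len(neg))
```

This pairwise-comparison form is O(n²) and intended only to make the definition concrete; production code would use a rank-based computation.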

                 

                Calibration Statistics  

                Over-fitting (calibration y0, y1) 

If the y0 in the validation sample is substantially far from zero or the y1 is substantially far from one, there is potential evidence of over-fitting. A value of y0 close to 0 and a value of y1 close to 1 indicate good calibration of the model.  

                 

                Risk Decile Plots 

Higher deciles of the predicted outcomes are associated with higher observed outcomes, which shows good calibration of the model. The plot also indicates good discrimination and predictive ability. 

                 

                Overall Interpretation  

                Interpreted together, our diagnostic results demonstrate the risk-adjustment model adequately controls for differences in patient characteristics (case mix).  

                 

                __________________________________

                4.4.6 Interpretation of Risk Factor Findings

                See fields above.

                4.4.7 Final Approach to Address Risk Factors
Risk adjustment approach
On
Specify number of risk factors

9

Conceptual model for risk adjustment
On
                • 5.1 Contributions Towards Advancing Health Equity

                  optional question

                  • 6.1.3 Current Use(s)
                    6.1.3b Why the measure is not in use
Efforts to gain access to CMS claims data have been unsuccessful. Should these data become available to the ACC, we will provide a detailed plan of implementation.
                    6.1.4 Program Details
                    NA, https://doesnotexist.org, NA, NA, NA
                  • 6.2.1 Actions of Measured Entities to Improve Performance

In general, registry participants receive feedback through quarterly benchmark reports. These reports contain detailed analyses of the institution’s performance, compared against national aggregates. A thorough understanding of performance can be gleaned from the executive summary dashboard, which contains visual displays of metric performance as well as patient-level drill-downs. Sites can export their information into their own Excel spreadsheets to conduct their own analysis. Supporting documentation in the form of the coder’s dictionary and outcome reports guide provides additional support for the sites. Monthly Registry Site Manager (RSM) calls, sessions at the NCDR annual conference, and access to Clinical Quality Associates are other ways that sites can get updates on data interpretation.  

                     

                    6.2.2 Feedback on Measure Performance

                    Because ACC is unable to currently report on the measure, we have not received any new feedback on measure performance and implementation.  

                     

Once implemented, feedback will be collected during monthly RSM calls, ad hoc phone calls tracked with Salesforce software, and registry-specific break-out sessions at the NCDR’s annual meeting. Registry Steering Committee members may also provide feedback during regularly scheduled calls.  

                    6.2.3 Consideration of Measure Feedback

                    Because ACC is unable to currently report on the measure, we have not received any new feedback on measure performance and implementation.  

                    6.2.4 Progress on Improvement

                      Because ACC is unable to currently report on the measure, we are not able to track improvement on the measure at this time.  

                    6.2.5 Unexpected Findings

                    As noted earlier, publicly reporting hospital risk-standardized ICD complication rates requires that the data submitted by hospitals be complete, consistent, and accurate. A protocol that assures accurate data for public reporting should be established prior to implementation. Steps to ensure data quality could include monitoring data for variances in case mix, chart audits, and possibly adjudicating cases that are vulnerable to systematic misclassification.  

                     

                    As an example of some of the methods that could be used to ensure data quality, we describe the NCDR’s existing Data Quality Program (DQP). The two main components of the DQP are complementary and consist of the Data Quality Report (DQR) and the Data Audit Program (DAP). The DQR process assesses the completeness and validity of the electronic data submitted by participating hospitals. Hospitals must achieve >95% completeness of specific data elements identified as ‘core fields’ to be included in the registry’s data warehouse for analysis. The ‘core fields’ capture many of the variables included in our risk adjustment models. The process is iterative, providing hospitals with the opportunity to correct errors and resubmit data for review and acceptance into the data warehouse. The DAP consists of annual on-site chart review and data abstraction. Among participating hospitals that pass the DQR for a minimum of two quarters, at least 5% are randomly selected to participate in the DAP. At individual sites, on-site auditors review charts of 10% of submitted cases. The NCDR audit focuses on variables used to determine whether patients meet accepted criteria for ICD implantation. However, the scope of the audit could be expanded to include additional fields. The DAP includes an appeals process that allows hospitals to reconcile audit findings.  

                    • Submitted by MPickering01 on Mon, 01/08/2024 - 20:00

                      Permalink

                      Importance

                      Importance Rating
                      Importance

                      Strengths:

• Literature review shows that complications following ICD placement increase the risk of all-cause mortality, and approximately 7% of patients have complications within 90 days. Other cited rates range from 4% to 30% depending on how complications are defined, and rates have been shown to change over time. The risk of adverse outcomes varied based on factors related to clinician training and experience, choice of device, and facility characteristics. Adverse outcomes include mortality, higher cost, and increased length of stay.
• Performance data show median complication rates across deciles ranging from 0% to 17.8%.
                      • Patients on the TEP "generally" indicated that outcomes such as complications are useful for decision-making purposes.

                      Limitations:

                      • Submission refers to an "evidence supplement" that does not appear to have been provided unless it is the testing supplement.
• Performance data are not provided by decile; the developer reports that "As of Fall 2023 claims data use is currently restricted and unavailable to support performance measures" as the reason newer data are not presented, though the developer indicates legislation has been introduced to change this restriction; performance data have not been updated since 2007.

                      Rationale:

                      • Literature cited supports the value of this measure for reducing the impacts of complications (e.g., mortality, cost) and shows that clinician and facility characteristics can influence complication rates following ICD placement. Evidence for the meaningfulness to patients is derived from the developer-convened TEP.
• Developer reports it is unable to show more recent performance scores due to a CMS restriction. A performance gap may still exist, but the data available are not recent, which challenges the continued business case for the measure. Performance data show a range in complication rates: preliminary analysis of claims data performed by the developer showed a mean complication rate of 5.7% among ICD admissions and a range of 0-17.8% across deciles of hospitals grouped by their all-cause complication rate; these data were from 2007. The developer should provide more clarity on the data access issue.

                      Feasibility Acceptance

                      Feasibility Rating
                      Feasibility Acceptance

                      Strengths:

                      • All data elements are available in defined fields in electronic clinical data; claims data are used to identify complications and registry data are used for risk adjustment factors.
• Participation in the National Cardiovascular Data Registry (NCDR) is required to report the measure, but the developer notes that centers are already required to participate in the NCDR for reimbursement, and most hospitals implanting ICDs in the Medicare population already participate, so there is no extra cost.

                      Limitations:

• Developer notes that hospitals are only required to submit data on some patients, and only 54 of 159 data elements are required; they indicate that "the majority of participating hospitals" have opted to submit all data elements on all patients, but they do not provide the proportion of hospitals for which the measure cannot be reported (data element validation studies are "among participating hospitals").

                      Rationale:

                      • All data elements are available in defined fields in electronic clinical data and there are no additional costs associated with reporting the measure.
• Accountable entities are already required to participate in the National Cardiovascular Data Registry (NCDR) for reimbursement, and most hospitals implanting ICDs in the Medicare population already do participate. However, the developer does not report the proportion of hospitals that do not participate in the registry.

                      Scientific Acceptability

                      Scientific Acceptability Reliability Rating
                      Scientific Acceptability Reliability

                      Strengths:

                      • The measure is well-defined and precisely specified.
                      • The sample included 43,711 admissions in the combined two-year sample, 21,856 admissions to 1,254 hospitals in the first randomly selected sample and 21,855 admissions to 1,246 hospitals in the second randomly-selected sample.
                      • This measure requires a minimum sample of 25 patients per facility.

                      Limitations:

• The submission shows odds ratios as evidence of patient/encounter-level reliability, but this is not a common method for assessing patient/encounter-level reliability.
                      • Split-half reliability ICC was 0.1494, below the threshold of 0.6.
                      • Data were collected more than a decade ago (2010Q2-2011Q4).

                      Rationale:

• Measure score reliability testing (accountable entity-level reliability) was performed. However, the split-half reliability ICC was 0.1494, below the threshold of 0.6.
                      • The developer states that, as of Fall 2023, claims data use is currently restricted and unavailable to support performance measures. This would probably limit the ability of the measure developer to gather new data to improve reliability. The developer should provide more clarity on the data access issue.
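The split-half approach referenced above can be sketched in a few lines (a hypothetical illustration, not the developer's actual code): each hospital's admissions are randomly divided into two halves, a complication rate is computed per half, and a one-way intraclass correlation is taken across hospitals. The `split_half_icc` helper below is an assumed name for illustration.

```python
import numpy as np

def split_half_icc(rates_a, rates_b):
    """One-way random-effects ICC(1,1) for two measurements per hospital.

    rates_a, rates_b: complication rates for the same hospitals computed
    from two random halves of their cases (illustrative helper; not the
    developer's code).
    """
    x = np.column_stack([rates_a, rates_b])  # n hospitals x k=2 halves
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    # Between-hospital and within-hospital mean squares (one-way ANOVA)
    msb = k * np.sum((row_means - grand) ** 2) / (n - 1)
    msw = np.sum((x - row_means[:, None]) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

An ICC near 1 would mean the two halves rank hospitals consistently; the reported 0.1494 indicates that most of the variation in hospital rates reflects noise rather than stable differences between hospitals.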
                      Scientific Acceptability Validity Rating
                      Scientific Acceptability Validity

                      Strengths:

                      • A data element validation was performed initially to validate complications identified through claims in comparison to complications found in the chart ("gold standard"). The developer found an overall agreement rate of 91.5%.
                      • NCDR data elements are validated through an audit process and include demographics, comorbidities, cardiac status, coronary anatomy, and ICD status.
• The measure is risk-adjusted for patient risk factors of sex, reason for admission, NYHA class (HF stage), prior CABG, abnormal conduction, ICD type, sodium level, hemoglobin level, and BUN level. C-statistics were 0.640 (derivation cohort) and 0.642 (validation cohort). Risk factors used in the model were identified through the literature and were harmonized with NCDR's own risk model for risk-adjusted in-hospital adverse events.

                      Limitations:

                      • Developer notes that validity information could not be updated due to CMS restrictions; however they report they "also conducted additional analyses to meet newer testing requirements" using comparable data from 2010-2011 (still more than a decade old). The developer should provide more clarity on the data access issue.
                      • Patients at known high-risk for complications are excluded (previous ICD or pacemaker placement) since it is difficult to tell in administrative data if complications were present on admission.
                      • Another exclusion is related to bundled claims - cases are excluded if "not the first claim in the same claim bundle" to avoid double-counting the index ICD procedure, but it is not explained why that would happen.
                      • Developers do not report results of empiric validity testing performed at the accountable entity-level; all validity testing performed appears to be at the data element-level. Developer refers to face validity established via TEP but does not report the results of any face validity assessment.
• C-statistics were rated as "fair" by the developer, but it is unclear if this is the appropriate threshold (other sources use 0.70-0.79 to denote fair discrimination). However, the developer claims that the wide range between low- and high-decile patients indicates the ability to discriminate between low- and high-risk patients.
                      • Submission refers to a supplemental methods document that does not appear to have been provided (a testing attachment is provided - maybe what they are referring to?).

                      Rationale:

                      • The data elements for this measure have been validated: chart abstraction was used to validate identification of complications in claims; risk factors, demographics, and ICD status are validated through routine audit of registry data. Additional validity testing uses data from 2010-2011. The developer should provide more clarity on the data access issue.
• The risk adjustment model includes 9 patient risk factors, and these harmonize with NCDR's own risk adjustment model. C-statistics were 0.640 (derivation cohort) and 0.642 (validation cohort), demonstrating fair model discrimination between high- and low-risk patients.
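For reference, the C-statistic discussed above is a concordance probability: the chance that a randomly chosen patient who had a complication was assigned a higher predicted risk by the model than a randomly chosen patient who did not. A minimal sketch of the computation (illustrative only; the developer's model and code are not part of the submission):

```python
def c_statistic(predicted_risks, outcomes):
    """Concordance statistic over all (complication, no-complication)
    patient pairs. outcomes: 1 = complication, 0 = none.
    Ties in predicted risk count as half-concordant."""
    pos = [r for r, y in zip(predicted_risks, outcomes) if y == 1]
    neg = [r for r, y in zip(predicted_risks, outcomes) if y == 0]
    pairs = len(pos) * len(neg)
    concordant = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return concordant / pairs
```

A value of 0.5 is chance ordering; values around 0.64, as reported, mean the model orders patient risk better than chance but leaves substantial variation unexplained.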

                      Equity

                      Equity Rating
                      Equity

                      Strengths:

                      • N/A

                      Limitations:

                      • Developer did not address this optional criterion.

                      Rationale:

                      • Developer did not address this optional criterion.

                      Use and Usability

                      Use and Usability Rating
                      Use and Usability

                      Strengths:

                      • Registry participants receive feedback via quarterly benchmark reports; a dashboard visually displays metrics and allows "patient-level" drill downs; monthly calls and sessions at the NCDR national conference are other venues where providers can learn about data and interpretation.
                      • Developer outlines a plan for collecting feedback via monthly calls, ad hoc meetings, and NCDR annual meetings.

                      Limitations:

                      • Developer notes the measure is not in use; no implementation plans have been made due to inaccessibility of CMS claims data.
• Developer reports that no feedback has been received due to their inability to report on the measure (see the claims data reporting issue this developer mentions).
• No performance gap or improvement on the measure can currently be reported.
                      • In lieu of reporting on unexpected findings/unintended consequences, developer describes steps for ensuring validity, accuracy, and completeness of data elements.

                      Rationale:

                      • Developers outline a plan for providers to receive performance information via benchmark reports and a dashboard, a plan for collecting feedback, and a plan for identifying unexpected findings.
                      • The measure is currently not in use in any program, and no data on performance gap or performance improvement is reported. Unintended consequences portion does not appear to have been addressed.

                      Summary

                      N/A

                    • Submitted by Antoinette on Fri, 01/12/2024 - 13:19

                      Importance

                      Importance Rating
                      Importance

Agree with staff assessment - Evidence cited is outdated, with no updated information on the performance gap. No strong support from patients that this is an important measure; support was reported only as a perception that the measure would be important.

                      Feasibility Acceptance

                      Feasibility Rating
                      Feasibility Acceptance

All data elements are available in defined fields, there are no additional costs, and hospitals generally participate in submitting data. Data are needed to quantify the majority and whether there are differences between hospitals that do submit and those that do not.

                      Scientific Acceptability

                      Scientific Acceptability Reliability Rating
                      Scientific Acceptability Reliability

                      Agree with staff assessment.  Data used to calculate reliability is outdated.  Fair ICC scores.

                      Scientific Acceptability Validity Rating
                      Scientific Acceptability Validity

Agree with staff assessment. Good overall agreement rate with chart review, but estimates are calculated using outdated data, and the developers state they cannot access newer data.

                      Equity

                      Equity Rating
                      Equity

                      Agree with staff assessment.

                      Use and Usability

                      Use and Usability Rating
                      Use and Usability

                      Agree with staff assessment.  Measure is not in use and no feedback on utility collected.

                      Summary

                      Lack of recent data and evidence as well as non-use makes this measure not viable.

                      Submitted by Amber on Fri, 01/12/2024 - 14:00

                      Importance

                      Importance Rating
                      Importance

                      Agree with staff assessment. Reviewing 'old' data is not meaningful to hospitals.

                      Feasibility Acceptance

                      Feasibility Rating
                      Feasibility Acceptance

                      All data elements are defined.

                      Scientific Acceptability

                      Scientific Acceptability Reliability Rating
                      Scientific Acceptability Reliability

                      Agree with staff assessment.

                      Scientific Acceptability Validity Rating
                      Scientific Acceptability Validity

                      Agree with staff assessment.

                      Equity

                      Equity Rating
                      Equity

                      Not addressed.

                      Use and Usability

                      Use and Usability Rating
                      Use and Usability

                      Agree with staff assessment.

                      Summary

Overall, this measure could help reduce patient harm in hospitals; however, data collection and standardized reporting would be difficult.

                      Submitted by rbartel on Sun, 01/14/2024 - 11:22

                      Importance

                      Importance Rating
                      Importance

Agree with staff assessment. Old data are not acceptable. The explanations appear to be excuses for a lack of documentation or for not doing the work.

                      Feasibility Acceptance

                      Feasibility Rating
                      Feasibility Acceptance

Agree with staff assessment. The data are available and this is an important issue. All the elements are available, but they didn't use them. I was torn between met and not met but addressable, but decided it was feasible to use the data needed; they chose not to do the work.

                      Scientific Acceptability

                      Scientific Acceptability Reliability Rating
                      Scientific Acceptability Reliability

Agree with staff assessment. Again, use of outdated data. Lots of things have changed since 2010-2011.

                      Scientific Acceptability Validity Rating
                      Scientific Acceptability Validity

                      Agree with staff assessment. Out of date data. Chart review might be good but today we have many other ways to collect data. 

                      Equity

                      Equity Rating
                      Equity

                      Not required but it seems the least of this measures problems. If and when they bring it back, I hope they look at the data with all the lens available to them.

                      Use and Usability

                      Use and Usability Rating
                      Use and Usability

Agree with staff assessment. My concern is that there are real negative patient outcomes that are not being addressed in the best ways possible. Patients could be having negative lived experiences that no one is trying to change because we decided that we can't find the data.

                      Summary

                      This makes me sad because lots of work goes into creating a measure but letting it sit on a shelf without using it is just another reason why patients and even staff don’t trust the healthcare system.

                      Submitted by Jason H Wasfy on Tue, 01/16/2024 - 09:25

                      Importance

                      Importance Rating
                      Importance

                      I actually think it's pretty well established that this is a meaningful quality of care measure, although I agree with staff about their suggestions to address

                      Feasibility Acceptance

                      Feasibility Rating
                      Feasibility Acceptance

                      Agree with staff, also agree would be helpful to see missing data

                      Scientific Acceptability

                      Scientific Acceptability Reliability Rating
                      Scientific Acceptability Reliability

                      Agree that more recent data would be helpful

                      Scientific Acceptability Validity Rating
                      Scientific Acceptability Validity

Agree that more recent data would be helpful; however, it is not clear to me that a low C-statistic is a problem (it might just suggest that quality problems, rather than the covariates in the model, explain the differences - which would make this a good model, not a bad one). Hard to know.

                      Equity

                      Equity Rating
                      Equity

                      agree - optional element not done

                      Use and Usability

                      Use and Usability Rating
                      Use and Usability

                      agree with staff

                      Summary

Of note, I have a "conflict" of sorts, since I am an ACC volunteer and chair the metrics committee (although I did not develop this metric).

                      Submitted by Vik Shah on Tue, 01/16/2024 - 13:42

                      Importance

                      Importance Rating
                      Importance

                      Agree with staff recommendations.

                      Feasibility Acceptance

                      Feasibility Rating
                      Feasibility Acceptance

                      Agree with staff recommendations.

                      Scientific Acceptability

                      Scientific Acceptability Reliability Rating
                      Scientific Acceptability Reliability

                      Agree with staff recommendations.

                      Scientific Acceptability Validity Rating
                      Scientific Acceptability Validity

                      Agree with staff recommendations.

                      Equity

                      Equity Rating
                      Equity

                      Agree with staff recommendations.

                      Use and Usability

                      Use and Usability Rating
                      Use and Usability

                      Agree with staff recommendations.

                      Summary

                      n/a

                      Submitted by Kyle A Hultz on Tue, 01/16/2024 - 17:52

                      Importance

                      Importance Rating
                      Importance

There appears to be an inability to obtain a full set of previously expected data. If this cannot be reported, the measure will not meet its expected utility. I would not be able to support the continued use of this metric.

                      Feasibility Acceptance

                      Feasibility Rating
                      Feasibility Acceptance

The endpoints are measurable, but reporting seems to be an ongoing issue associated with institutions that are enrolled in the National Cardiovascular Data Registry. If a significant proportion of reporting institutions are not reporting data to the registry, it is unclear how this measure can be used consistently across all systems.

                      Scientific Acceptability

                      Scientific Acceptability Reliability Rating
                      Scientific Acceptability Reliability

The data for this measure are over 10 years old, and claims data have been restricted.

                      Scientific Acceptability Validity Rating
                      Scientific Acceptability Validity

See above; similar to reliability, access to data has restricted validity testing.

                      Equity

                      Equity Rating
                      Equity

                      Not addressed.

                      Use and Usability

                      Use and Usability Rating
                      Use and Usability

                      The measure is not in use, and the authors have not been able to resolve issues related to data access. Unclear path forward without more information.

                      Summary

                      The authors identified a major barrier for using this measure with the inability to access data. The data which has been reported is >10 years old. Cannot support the use of this measure without updated and complete data.

                      Submitted by Marjorie Everson on Tue, 01/16/2024 - 19:11

                      Importance

                      Importance Rating
                      Importance

                      agree with staff assessment

                      Feasibility Acceptance

                      Feasibility Rating
                      Feasibility Acceptance

                      agree with staff assessment

                      Scientific Acceptability

                      Scientific Acceptability Reliability Rating
                      Scientific Acceptability Reliability

                      agree with staff assessment

                      Scientific Acceptability Validity Rating
                      Scientific Acceptability Validity

                      agree with staff assessment

                      Equity

                      Equity Rating
                      Equity

                      not addressed

                      Use and Usability

                      Use and Usability Rating
                      Use and Usability

                      agree with staff assessment

                      Summary

Measure unable to be updated. It is most unfortunate; this could possibly be an important measure.

                      Submitted by Dr. Joshua Ardise on Wed, 01/17/2024 - 17:38

                      Importance

                      Importance Rating
                      Importance

                      Missing data as the reviewers noted.

                      Feasibility Acceptance

                      Feasibility Rating
                      Feasibility Acceptance

                      I agree with the Staff's assessment.

                      Scientific Acceptability

                      Scientific Acceptability Reliability Rating
                      Scientific Acceptability Reliability

                      I agree with the Staff's assessment.

                      Scientific Acceptability Validity Rating
                      Scientific Acceptability Validity

                      I agree with the Staff's assessment.

                      Equity

                      Equity Rating
                      Equity

                      I agree with the Staff's assessment.

                      Use and Usability

                      Use and Usability Rating
                      Use and Usability

                      I agree with the Staff's assessment.

                      Summary

                      N/A

                      Submitted by Michael on Thu, 01/18/2024 - 00:36

                      Importance

                      Importance Rating
                      Importance

                      Agree with staff concerning need for current and robust data.

                      Feasibility Acceptance

                      Feasibility Rating
                      Feasibility Acceptance

                      Agree with staff.

                      Scientific Acceptability

                      Scientific Acceptability Reliability Rating
                      Scientific Acceptability Reliability

                      Agree with staff - data is outdated.

                      Scientific Acceptability Validity Rating
                      Scientific Acceptability Validity

                      Agree with staff.

                      Equity

                      Equity Rating
                      Equity

                      Not included.

                      Use and Usability

                      Use and Usability Rating
                      Use and Usability

Data and coding need to be addressed; there are no implementation plans.

                      Summary

                      Agree with staff that this measure is not going to be useful without additional revision and data access.

                      Submitted by David Clayman on Fri, 01/19/2024 - 10:44

                      Importance

                      Importance Rating
                      Importance

                      Will need to address performance gap analysis with more recent data. Data from 2007 is insufficient. 

                      Feasibility Acceptance

                      Feasibility Rating
                      Feasibility Acceptance

                      Agree with staff assessment.

                      Scientific Acceptability

                      Scientific Acceptability Reliability Rating
                      Scientific Acceptability Reliability

Agree with staff assessment. The split-half reliability ICC of 0.1494 is below the threshold.

                      Scientific Acceptability Validity Rating
                      Scientific Acceptability Validity

                      Agree with staff assessment.

                      Equity

                      Equity Rating
                      Equity

                      Agree with staff assessment.

                      Use and Usability

                      Use and Usability Rating
                      Use and Usability

                      Agree with staff assessment.

                      Summary

                      The lack of new data and reporting is a concern. 

                      Submitted by Eleni Theodoropoulos on Fri, 01/19/2024 - 13:32

                      Importance

                      Importance Rating
                      Importance

                      Agree with staff preliminary assessment.  Missing recent performance results and data access issues limit the ability to report on this measure.  

                      Feasibility Acceptance

                      Feasibility Rating
                      Feasibility Acceptance

Agree with staff preliminary assessment.  The data are electronically available within registry and claims data, especially for the Medicare population.  It would be helpful to know the proportion of facilities that do not report, if possible.


                      Scientific Acceptability

                      Scientific Acceptability Reliability Rating
                      Scientific Acceptability Reliability

                      Agree with staff preliminary assessment.  The data used is not current and access to more recent data is not available for use.  While reliability testing was performed, the results were far below the threshold.

                      Scientific Acceptability Validity Rating
                      Scientific Acceptability Validity

                      Agree with staff preliminary assessment.  

                      Equity

                      Equity Rating
                      Equity

                      This is not addressed by the measure developer.

                      Use and Usability

                      Use and Usability Rating
                      Use and Usability

                      Agree with staff preliminary assessment.  The measure is not currently in use.

                      Summary

                      N/A

                      Submitted by Bonnie Zima on Fri, 01/19/2024 - 20:45

                      Importance

                      Importance Rating
                      Importance

The literature review supporting the importance of this measure does not appear comprehensive. There was little mention of how methodologic differences could explain variable findings. References include data from more than 20 years ago. The most recent citation appears to be from 2017? The additional list of papers that were not cited did not strengthen this application.

The rationale appears to be that complications following ICD insertion are associated with increased risk of all-cause mortality; rates of complications range from about 5-7% (a later study is cited that found slightly over 10%); and one study (2009) suggests that there are modifiable factors (e.g., training) related to adverse outcomes. Later in the section, venous access techniques are also mentioned. One of the most common complications is infection, an example of a potentially preventable complication. Cost data appear to be from one study published in 2006?

                      Feasibility Acceptance

                      Feasibility Rating
                      Feasibility Acceptance

                      The required data sources are registry data linked to claims data. As of Fall 2023 claims data use is currently restricted and unavailable to support performance measures.

The registry was launched on June 30, 2005. The most recent data on the number of hospitals participating are from May 2015. At that time, the registry had collected data from 1,786 hospitals in the United States totaling over 1,330,000 implants (NCDR data outcome reports). The characteristics of the hospitals are not provided.

It appears that there is a data collection form. Hospitals are expected to collect their own data and submit the completed form on a quarterly basis. The ACC research team conducts the hierarchical data analysis: “The RSCR is calculated as the ratio of the number of “predicted” to the number of “expected” complications, multiplied by the national unadjusted complication rate. For each hospital, the numerator of the ratio (“predicted”) is the number of complications within 30 or 90 days predicted on the basis of the hospital’s performance with its observed case mix, and the denominator (“expected”) is the number of complications expected on the basis of the nation’s performance with that hospital’s case mix.”
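The quoted ratio can be restated as a one-line calculation. The numbers below are made up for illustration (the 5.7% national rate echoes the mean complication rate mentioned elsewhere in this review; it is not taken from registry output):

```python
def rscr(predicted, expected, national_rate):
    """Risk-standardized complication rate as quoted above:
    (predicted / expected) * national unadjusted complication rate."""
    return (predicted / expected) * national_rate

# Made-up numbers: a hospital predicted to have 12 complications given
# its own performance and case mix, versus 10 expected under national
# performance with the same case mix; 5.7% national unadjusted rate.
hospital_rscr = rscr(predicted=12, expected=10, national_rate=0.057)
```

A ratio of predicted to expected above 1 yields an RSCR above the national rate, flagging worse-than-expected performance for that hospital's case mix.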

It appears that the ACC research team reports aggregated data back to the hospital. There is no information on how data quality is assured when hospitals complete this form.

                      No new data to support feasibility is reported.  However, initial feasibility testing is described. 

                      Scientific Acceptability

                      Scientific Acceptability Reliability Rating
                      Scientific Acceptability Reliability

The specifications include different complications at 30 and 90 days, and the time window also varies between one and three years. A denominator exclusion is lack of 90-day follow-up in Medicare FFS post-discharge, potentially underestimating mortality if the patient does not access timely care.

                      Information on reliability was not updated. This uniquely valuable measure was developed when access to CMS claims data was feasible. Nevertheless, the specifications for this measure have not changed since the prior review.

                      Scientific Acceptability Validity Rating
                      Scientific Acceptability Validity

                      Information on validity was not updated.

                      Equity

                      Equity Rating
                      Equity

                      Not assessed.

                      Use and Usability

                      Use and Usability Rating
                      Use and Usability

                      The measure is currently not in use in any program, and no data on performance gap or performance improvement is reported.

                      Summary

                      This is a quality measure developed by a medical professional society, the American College of Cardiology (ACC). This review is for maintenance of a measure that was endorsed on 02/19/2016. The major weakness is that the measure relies on two data sources, the ACC registry and CMS claims data, and CMS claims data are currently restricted and no longer available. Thus, updated data to support feasibility and scientific acceptability were not available. This raises the question of what the bar should be for maintenance measures when the only data on scientific acceptability come from initial testing. Overall, this measure provides a glimpse into how quality measure development has improved over time. The ACC appears to have been one of the early leaders in developing quality measures that support hospitals in collecting data and then using aggregated data, reported back to them, to measure hospital performance. 

                      Submitted by Marisa Valdes on Sat, 01/20/2024 - 20:29


                      Importance

                      Importance Rating
                      Importance

                      Ideally the measure developer would be able to provide more recent evidence so that we can understand current state.

                      Feasibility Acceptance

                      Feasibility Rating
                      Feasibility Acceptance

                      Agree w/staff assessment

                      Scientific Acceptability

                      Scientific Acceptability Reliability Rating
                      Scientific Acceptability Reliability

                      Agree w/ staff assessment.

                      Scientific Acceptability Validity Rating
                      Scientific Acceptability Validity

                      Agree w/ staff assessment.

                      Equity

                      Equity Rating
                      Equity

                      Not addressed.

                      Use and Usability

                      Use and Usability Rating
                      Use and Usability

                      Given that the measure is currently not in use, the usability rating is uncertain.

                      Summary

                      See above.

                      Submitted by Tarik Yuce on Sun, 01/21/2024 - 20:20


                      Importance

                      Importance Rating
                      Importance

                      Agree with staff assessment.

                      Feasibility Acceptance

                      Feasibility Rating
                      Feasibility Acceptance

                      Agree with staff assessment.

                      Scientific Acceptability

                      Scientific Acceptability Reliability Rating
                      Scientific Acceptability Reliability

                      Low reliability and old data limit this measure.

                      Scientific Acceptability Validity Rating
                      Scientific Acceptability Validity

                      Old data again limit the validity of this measure.

                      Equity

                      Equity Rating
                      Equity

                      Did not submit.

                      Use and Usability

                      Use and Usability Rating
                      Use and Usability

                      CMS data access limits the developer's ability to move this measure forward.

                      Summary

                      This measure does not seem appropriate for approval. The use of data from 2011 limits our ability to interpret the importance, validity, and reliability of this measure. Data access concerns compound this issue. 

                      Submitted by Ashley Tait-Dinger on Mon, 01/22/2024 - 16:14


                      Importance

                      Importance Rating
                      Importance

                      Agree with the staff assessment.

                      Feasibility Acceptance

                      Feasibility Rating
                      Feasibility Acceptance

                      Agree with the staff assessment.

                      Scientific Acceptability

                      Scientific Acceptability Reliability Rating
                      Scientific Acceptability Reliability

                      Agree with the staff assessment.

                      Scientific Acceptability Validity Rating
                      Scientific Acceptability Validity

                      Agree with the staff assessment.

                      Equity

                      Equity Rating
                      Equity

                      Agree with the staff assessment.

                      Use and Usability

                      Use and Usability Rating
                      Use and Usability

                      Agree with the staff assessment.

                      Summary

                      Needs more development and testing. 

                      Submitted by Aileen Schast on Mon, 01/22/2024 - 17:16


                      Importance

                      Importance Rating
                      Importance

                      Agree with staff assessment

                      Feasibility Acceptance

                      Feasibility Rating
                      Feasibility Acceptance

                      Elements are clearly defined.

                      Scientific Acceptability

                      Scientific Acceptability Reliability Rating
                      Scientific Acceptability Reliability

                      Agree with Staff Assessment

                      Scientific Acceptability Validity Rating
                      Scientific Acceptability Validity

                      Agree with Staff Assessment

                      Equity

                      Equity Rating
                      Equity

                      I do not believe this was addressed at all

                      Use and Usability

                      Use and Usability Rating
                      Use and Usability

                      Agree with staff assessment

                      Summary

                      Measure is old and not up to current standards

                      Submitted by Anna Doubeni on Mon, 01/22/2024 - 18:38


                      Importance

                      Importance Rating
                      Importance

                      I agree with staff assessment.

                      Feasibility Acceptance

                      Feasibility Rating
                      Feasibility Acceptance

                      I agree with staff assessment.

                      Scientific Acceptability

                      Scientific Acceptability Reliability Rating
                      Scientific Acceptability Reliability

                      I agree with staff assessment.

                      Scientific Acceptability Validity Rating
                      Scientific Acceptability Validity

                      I agree with staff assessment.

                      Equity

                      Equity Rating
                      Equity

                      N/A optional and not addressed

                      Use and Usability

                      Use and Usability Rating
                      Use and Usability

                      I agree with staff assessment.  This does not seem ready for use at this time.

                      Summary

                      I agree with staff assessment regarding this measure.

                       

                      Submitted by Ayers813 on Mon, 01/22/2024 - 23:24


                      Importance

                      Importance Rating
                      Importance

                      Agree with staff assessment

                      Feasibility Acceptance

                      Feasibility Rating
                      Feasibility Acceptance

                      Agree with staff assessment

                      Scientific Acceptability

                      Scientific Acceptability Reliability Rating
                      Scientific Acceptability Reliability

                      agree with staff assessment

                      Scientific Acceptability Validity Rating
                      Scientific Acceptability Validity

                      agree with staff assessment

                      Equity

                      Equity Rating
                      Equity

                      Not addressed

                      Use and Usability

                      Use and Usability Rating
                      Use and Usability

                      Agree with staff assessment

                      Summary

                      NA

                      Submitted by Jamie Wilcox on Mon, 01/22/2024 - 23:52


                      Importance

                      Importance Rating
                      Importance

                      Agree with staff assessments. 

                      Feasibility Acceptance

                      Feasibility Rating
                      Feasibility Acceptance

                      Agree with staff assessment. 

                      Scientific Acceptability

                      Scientific Acceptability Reliability Rating
                      Scientific Acceptability Reliability

                      Agree with staff assessments. 

                      Scientific Acceptability Validity Rating
                      Scientific Acceptability Validity

                      Agree with staff assessments. 

                      Equity

                      Equity Rating
                      Equity

                      n/a, developers chose not to consider health equity implications. 

                      Use and Usability

                      Use and Usability Rating
                      Use and Usability

                      Agree with staff assessments. 

                      Summary

                      n/a