
Oncology: Medical and Radiation – Pain Intensity Quantified

CBE ID
0384e
Endorsement Status
E&M Committee Rationale/Justification
  • Explore, with the developer’s TEP, adding mention of other specific measurement tools that can be used to support the measure.
  • Include additional guidance for caregivers, namely for patients with cognitive impairment. For instance, add guidance noting that alternative methods of assessment, such as observations, behavioral cues, or care plans, may be employed.
1.0 New or Maintenance
Previous Endorsement Cycle
Is Under Review
No
Next Maintenance Cycle
Fall 2028
1.6 Measure Description

This measure looks at the percentage of patient visits, regardless of patient age, with a diagnosis of cancer currently receiving chemotherapy or radiation therapy in which pain intensity is quantified.

 

This eCQM is an episode-based measure. An episode is defined as each eligible encounter for patients with a diagnosis of cancer who are also currently receiving chemotherapy or radiation therapy during the measurement period. 

 

The time period for data collection is intended to be 12 consecutive months.

 

There are two population criteria for this measure: 

1) All patient visits for patients with a diagnosis of cancer currently receiving chemotherapy

OR

 

2) All patient visits for patients with a diagnosis of cancer currently receiving radiation therapy.

 

This measure comprises two populations but is intended to result in one reporting rate. This is a proportion measure, and better quality is associated with a higher score.

    Measure Specs
      General Information
      1.7 Measure Type
      1.7 Composite Measure
      No
      1.3 Electronic Clinical Quality Measure (eCQM)
      1.9 Care Setting
      1.10 Measure Rationale

      This measure, CBE 0384e, is paired with CBE 0383, Percentage of visits for patients, regardless of age, with a diagnosis of cancer currently receiving chemotherapy or radiation therapy who report having pain with a documented plan of care to address pain. This measure evaluates whether pain intensity is quantified at each visit among cancer patients undergoing chemotherapy or radiation, and CBE 0383 evaluates whether each patient visit includes a documented plan of care among cancer patients who reported having pain.

      1.20 Types of Data Sources
      1.25 Data Source Details

      N/A

      1.14 Numerator

      Patient visits in which pain intensity is quantified

      Pain intensity should be quantified using a standard instrument, such as a 0-10 numerical rating scale, visual analog scale, a categorical scale, or pictorial scale. Examples include the Faces Pain Rating Scale and the Brief Pain Inventory (BPI).

      1.14a Numerator Details

      Time period for data collection: At each visit within the measurement period

       

      Guidance: Pain intensity should be quantified using a standard instrument, such as a 0-10 numerical rating scale, visual analog scale, a categorical scale, or pictorial scale. Examples include the Faces Pain Rating Scale and the Brief Pain Inventory (BPI).

       

      For more details, MAT export is attached to this submission. 

      1.15 Denominator

      All patient visits, regardless of patient age, with a diagnosis of cancer currently receiving chemotherapy or radiation therapy

       

      For patients receiving radiation therapy, pain intensity should be quantified at each radiation treatment management encounter where the patient and physician have a face-to-face or telehealth interaction. Due to the nature of some applicable coding related to radiation therapy (e.g., delivered in multiple fractions), the billing date for certain codes may or may not be the same as the face-to-face or telehealth encounter date. In this instance, for the reporting purposes of this measure, the billing date should be used to pull the appropriate patients into the initial population. It is expected, though, that the numerator criteria would be performed at the time of the actual face-to-face or telehealth encounter during the series of treatments. A lookback (retrospective) period of 7 days, including the billing date, may be used to identify the actual face-to-face or telehealth encounter, which is required to assess the numerator. Therefore, pain intensity should be quantified during the face-to-face or telehealth encounter occurring on the actual billing date or within the 6 days prior to the billing date.

       

      For patients receiving chemotherapy, pain intensity should be quantified at each face-to-face or telehealth encounter with the physician while the patient is currently receiving chemotherapy. For purposes of identifying eligible encounters, patients "currently receiving chemotherapy" refers to patients administered chemotherapy on the same day as the encounter or during the 30 days before the date of the encounter AND during the 30 days after the date of the encounter. 

       

      1.15a Denominator Details

      Time period for data collection: 12 consecutive months

       

      Guidance: This eCQM is an episode-based measure. An episode is defined as each eligible encounter for patients with a diagnosis of cancer who are also currently receiving chemotherapy or radiation therapy during the measurement period. 

       

      For patients receiving radiation therapy, pain intensity should be quantified at each radiation treatment management encounter where the patient and physician have a face-to-face or telehealth interaction. Due to the nature of some applicable coding related to radiation therapy (e.g., delivered in multiple fractions), the billing date for certain codes may or may not be the same as the face-to-face or telehealth encounter date. In this instance, for the reporting purposes of this measure, the billing date should be used to pull the appropriate patients into the initial population. It is expected, though, that the numerator criteria would be performed at the time of the actual face-to-face or telehealth encounter during the series of treatments. A lookback (retrospective) period of 7 days, including the billing date, may be used to identify the actual face-to-face or telehealth encounter, which is required to assess the numerator. Therefore, pain intensity should be quantified during the face-to-face or telehealth encounter occurring on the actual billing date or within the 6 days prior to the billing date.

       

      For patients receiving chemotherapy, pain intensity should be quantified at each face-to-face or telehealth encounter with the physician while the patient is currently receiving chemotherapy. For purposes of identifying eligible encounters, patients "currently receiving chemotherapy" refers to patients administered chemotherapy on the same day as the encounter or during the 30 days before the date of the encounter AND during the 30 days after the date of the encounter.
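
      To illustrate the timing rules above, the following is a minimal Python sketch of the two encounter-eligibility windows (the 7-day radiation billing-date lookback and the 30-day chemotherapy window). It is illustrative only; the attached MAT export and the eCQM logic are authoritative. The function names, and the reading of the chemotherapy "before AND after" requirement as needing at least one administration in each window, are assumptions of this sketch.

      from datetime import date, timedelta

      # Illustrative sketch only; the MAT export / eCQM logic is authoritative.
      RADIATION_LOOKBACK_DAYS = 7   # billing date plus the 6 days prior
      CHEMO_WINDOW_DAYS = 30

      def encounter_in_radiation_lookback(encounter_date: date, billing_date: date) -> bool:
          """True if the face-to-face or telehealth encounter falls on the billing date
          or within the 6 days prior to it (a 7-day lookback including the billing date)."""
          earliest = billing_date - timedelta(days=RADIATION_LOOKBACK_DAYS - 1)
          return earliest <= encounter_date <= billing_date

      def currently_receiving_chemotherapy(encounter_date: date, chemo_dates: list[date]) -> bool:
          """One reading of the measure text: chemotherapy administered on the encounter day
          or in the 30 days before it, AND chemotherapy administered in the 30 days after it."""
          before_or_same_day = any(
              encounter_date - timedelta(days=CHEMO_WINDOW_DAYS) <= d <= encounter_date
              for d in chemo_dates
          )
          after = any(
              encounter_date < d <= encounter_date + timedelta(days=CHEMO_WINDOW_DAYS)
              for d in chemo_dates
          )
          return before_or_same_day and after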

       

      For more details, MAT export is attached to this submission. 

       

      1.15b Denominator Exclusions

      None

      1.15c Denominator Exclusions Details

      None

      1.12a Attach MADiE Output
      1.13a Attach Data Dictionary
      1.16 Type of Score
      1.17 Measure Score Interpretation
      Better quality = Higher score
      1.18 Calculation of Measure Score

      eCQM flow diagram is attached to this submission.

       

      This measure comprises two populations but is intended to result in one reporting rate. The reporting rate is the aggregate of Population 1 and Population 2, resulting in a single performance rate. For the purposes of this measure, the single performance rate can be calculated as follows:

      Performance Rate = (Numerator 1 + Numerator 2) / (Denominator 1 + Denominator 2)
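
      As a minimal illustration of the aggregation described above (not part of the measure specification), the single reporting rate could be computed as follows; the function and variable names are assumptions of this sketch.

      def performance_rate(numerator_1: int, denominator_1: int,
                           numerator_2: int, denominator_2: int) -> float:
          """Aggregate Population 1 (chemotherapy) and Population 2 (radiation therapy)
          into a single proportion. Higher is better."""
          numerator = numerator_1 + numerator_2
          denominator = denominator_1 + denominator_2
          if denominator == 0:
              raise ValueError("No qualifying patient visits in either population.")
          return numerator / denominator

      # Example: 450 of 500 chemotherapy visits and 180 of 200 radiation therapy visits
      # had pain intensity quantified -> (450 + 180) / (500 + 200) = 0.90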

       

      Calculation algorithm for Population 1: Patient visits for patients with a diagnosis of cancer currently receiving chemotherapy

      1. Find the patient visits that meet the initial population (i.e., the general group of patient visits that a set of performance measures is designed to address).

      2. From the patient visits within the initial population criteria, find the visits that qualify for the denominator (i.e., the specific group of patient visits for inclusion in a specific performance measure based on defined criteria). Note: in some cases, the initial population and denominator are identical.

      3. From the patient visits within the denominator, find the visits that meet the numerator criteria (i.e., the group of patient visits in the denominator for whom a process or outcome of care occurs). Validate that the number of patient visits in the numerator is less than or equal to the number of patient visits in the denominator.

       

      If the visit does not meet the numerator, this case represents a quality failure.

       

      Calculation algorithm for Population 2: Patient visits for patients with a diagnosis of cancer currently receiving radiation therapy

      1. Find the patient visits that meet the initial population (i.e., the general group of patient visits that a set of performance measures is designed to address).

      2. From the patient visits within the initial population criteria, find the visits that qualify for the denominator (i.e., the specific group of patient visits for inclusion in a specific performance measure based on defined criteria). Note: in some cases, the initial population and denominator are identical.

      3. From the patient visits within the denominator, find the visits that meet the numerator criteria (i.e., the group of patient visits in the denominator for whom a process or outcome of care occurs). Validate that the number of patient visits in the numerator is less than or equal to the number of patient visits in the denominator.

       

      If the visit does not meet the numerator, this case represents a quality failure.

      1.18a Attach measure score calculation diagram
      1.19 Measure Stratification Details

      We encourage the results of this measure to be stratified by race, ethnicity, administrative sex, and payer, and have included these variables as recommended data elements to be collected.

      1.26 Minimum Sample Size

      It is recommended to adhere to the standard CMS guideline, which stipulates a minimum of 20 denominator counts to calculate the measure. In addition, it is advisable to incorporate data from patients with diverse attributes for optimal results.

      Most Recent Endorsement Activity
      Advanced Illness and Post-Acute Care Fall 2023
      Initial Endorsement
      Last Updated
      Steward Organization
      American Society of Clinical Oncology
      Steward Address

      United States

      Measure Developer POC

      Caitlin Drumheller
      American Society of Clinical Oncology
      2318 Mill Road
      Suite 800
      Alexandria, VA 22314
      United States

        Evidence
        2.1 Attach Logic Model
        2.2 Evidence of Measure Importance

        Cancer is the second leading cause of death in the US, (1) with more than 1.9 million new cases estimated for 2023. (2) Pain is one of the most common and debilitating symptoms reported among cancer patients; in fact, ICD-11 contains a new classification for chronic cancer-related pain, defining it as chronic pain caused by the primary cancer itself or its metastases, or by its treatment. A systematic review found that 55 percent of patients undergoing anticancer treatment reported pain, (3) and chemotherapy and radiation specifically are associated with several distinct pain syndromes. (4) Each year, over a million cancer patients in the US receive chemotherapy or radiation. (5) Severe pain increases the risk of anxiety and depression, (4) and a recent study showed that cancer patients who reported pain had worse employment and financial outcomes; the greater the pain, the worse the outcomes. (6) Cancer patients have also reported that pain interferes with their mood, work, relationships with other people, sleep, and overall enjoyment of life. (7)

         

        Assessing pain and developing a plan of care (i.e., pain management) are critical for symptom control and the cancer patient’s overall quality of life; they are an essential part of the oncologic management of a cancer patient (see below for specific clinical guideline recommendations). (8) However, many oncology patients report insufficient pain control. (9) A retrospective chart review analysis found 84 percent adherence to the documentation of pain intensity and 43 percent adherence to pain reassessment within an hour of medication administration. (10) An observational study found that over half of its cancer patients had a negative pain management index score, indicating that the prescribed pain treatments were not commensurate with the pain intensity reported by the patient. (11) Disparities exist as well; for example, a recent study evaluated opioid prescription fills and potency among cancer patients near the end of life between 2007 and 2019. The study found that while all patients had a steady decline in opioid access, Black and Hispanic patients were less likely to receive opioids than White patients (Black, -4.3 percentage points, 95% CI; Hispanic, -3.6 percentage points, 95% CI) and received lower daily doses (Black, -10.5 MMED, 95% CI; Hispanic, -9.1 MMED, 95% CI). (12)

         

        Although there have been some improvements, as evidenced by data obtained from the CMS Quality Payment Program, subpar pain management among cancer patients persists. The intent of the paired measures, "Percentage of patient visits, regardless of patient age, with a diagnosis of cancer currently receiving chemotherapy or radiation therapy in which pain intensity is quantified" and "Percentage of visits for patients, regardless of age, with a diagnosis of cancer currently receiving chemotherapy or radiation therapy who report having pain with a documented plan of care to address pain," is to improve pain management, thereby improving the function and quality of life of the cancer patient.

         

        Specific clinical practice guideline recommendations that support this measure are: (8) 

        1. Screen all patients for pain at each contact.
        2. Routinely quantify and document pain intensity and quality as characterized by the patient (whenever possible). Include patient reporting of breakthrough pain, treatments used and their impact on pain, satisfaction with pain relief, pain interference, provider assessment of impact on function, and any special issues for the patient relevant to pain treatment and access to care.
        3. Perform comprehensive pain assessment if new or worsening pain is present and regularly for persisting pain.
        4. Perform pain reassessment at specified intervals to ensure that analgesic therapy is providing maximum benefit with minimal adverse effects, and that the treatment plan is followed.
        5. Pain intensity rating scales can be used as part of universal screening and comprehensive pain assessment.

        All recommendations are Category 2A - Based upon lower-level evidence, there is uniform NCCN consensus that the intervention is appropriate.

         

        References:

        1. Centers for Disease Control and Prevention. (2023, January 18). Leading Causes of Death. National Center for Health Statistics. https://www.cdc.gov/nchs/fastats/leading-causes-of-death.htm
        2. National Cancer Institute. (2018). Cancer of Any Site - Cancer Stat Facts. Surveillance, Epidemiology, and End Results Program. https://seer.cancer.gov/statfacts/html/all.html 
        3. Van den Beuken-van Everdingen, M. H., Hochstenbach, L. M., Joosten, E. A., Tjan-Heijnen, V. C., & Janssen, D. J. (2016). Update on Prevalence of Pain in Patients With Cancer: Systematic Review and Meta-Analysis. Journal of Pain and Symptom Management, 51(6), 1070–1090.e9. https://doi.org/10.1016/j.jpainsymman.2015.12.340
        4. National Cancer Institute. (2019, March 6). Cancer Pain (PDQ®)–Patient Version. https://www.cancer.gov/about-cancer/treatment/side-effects/pain/pain-pdq 
        5. Centers for Disease Control and Prevention. (2022, November 2). Information for Health Care Providers on Infections During Chemotherapy. https://www.cdc.gov/cancer/preventinfections/index.htm 
        6. Halpern, M. T., de Moor, J. S., & Yabroff, K. R. (2022). Impact of Pain on Employment and Financial Outcomes Among Cancer Survivors. Journal of Clinical Oncology: Official Journal of the American Society of Clinical Oncology, 40(1), 24–31. https://doi.org/10.1200/JCO.20.03746
        7. Moryl, N., Dave, V., Glare, P., Bokhari, A., Malhotra, V. T., Gulati, A., Hung, J., Puttanniah, V., Griffo, Y., Tickoo, R., Wiesenthal, A., Horn, S. D., & Inturrisi, C. E. (2018). Patient-Reported Outcomes and Opioid Use by Outpatient Cancer Patients. The Journal of Pain, 19(3), 278–290. https://doi.org/10.1016/j.jpain.2017.11.001
        8. National Comprehensive Cancer Network® (NCCN). (July 31, 2023). NCCN Clinical Practice Guidelines in Oncology. Adult Cancer Pain Version 2.2023. http://www.nccn.org
        9. Dela Pena, J. C., Marshall, V. D., & Smith, M. A. (2022). Impact of NCCN Guideline Adherence in Adult Cancer Pain on Length of Stay. Journal of Pain & Palliative Care Pharmacotherapy, 36(2), 95-102. https://doi.org/10.1080/15360288.2022.2066746
        10. El Rahi, C., Murillo, J. R., & Zaghloul, H. (2017, September). Pain Assessment Practices in Patients with Cancer Admitted to the Oncology Floor. J Hematol Oncol Pharm, 7(3), 109-113. https://jhoponline.com/issue-archive/2017-issues/jhop-september-2017-vol-7-no-3/17246-pain-assessment-practices-in-patients-with-cancer-admitted-to-the-oncology-floor
        11. Thronæs, M., Balstad, T. R., Brunelli, C., Løhre, E. T., Klepstad, P., Vagnildhaug, O. M., Kaasa, S., Knudsen, A. K., & Solheim, T. S. (2020). Pain management index (PMI)-does it reflect cancer patients' wish for focus on pain? Supportive Care in Cancer: Official Journal of the Multinational Association of Supportive Care in Cancer, 28(4), 1675–1684. https://doi.org/10.1007/s00520-019-04981-
        12. Enzinger, A. C., Ghosh, K., Keating, N. L., Cutler, D. M., Clark, C. R., Florez, N., Landrum, M. B., & Wright, A. A. (2023). Racial and Ethnic Disparities in Opioid Access and Urine Drug Screening Among Older Patients With Poor-Prognosis Cancer Near the End of Life. Journal of Clinical Oncology: Official Journal of the American Society of Clinical Oncology, 41(14), 2511–2522. https://doi.org/10.1200/JCO.22.01413
        2.6 Meaningfulness to Target Population

        A 2022 study evaluated patient and caregiver perspectives on cancer-related quality measures to inform priorities for health system implementation. Measure concepts related to pain management plans and improvement in pain were among the top five concepts nominated. The study notes that the patient and caregiver panel placed considerable emphasis on the importance of routine pain screening, management, and follow-up. (1)

         

        References:

         

        1. O'Hanlon, C. E., Giannitrapani, K. F., Lindvall, C., Gamboa, R. C., Canning, M., Asch, S. M., Garrido, M. M., ImPACS Patient and Caregiver Panel, Walling, A. M., & Lorenz, K. A. (2022). Patient and Caregiver Prioritization of Palliative and End-of-Life Cancer Care Quality Measures. Journal of General Internal Medicine, 37(6), 1429–1435. https://doi.org/10.1007/s11606-021-07041-8
        Table 1. Performance Scores by Decile
        Performance Gap
        Mean performance score, N of entities, and N of persons/encounters/episodes, overall and by decile: see logic model attachment.
          Equity
          3.1 Contributions Toward Advancing Health Equity

          See measure importance.

            Feasibility
            4.1 Feasibility Assessment

            Not applicable during the Fall 2023 cycle.

            4.2 Attach Feasibility Scorecard
            4.3 Feasibility Informed Final Measure

            Feedback from EHRs, cancer registries, and oncology practices provides compelling evidence that this measure is easy to implement and presents minimal feasibility challenges. The necessary data elements required for the denominator (active cancer diagnosis, office visit, chemotherapy administration and/or radiation treatment) can be found within structured fields and are recorded using commonly accepted coding standards. The same applies to the numerator data element, which requires documentation of the pain assessment result.

             

            The measure's data capture can be seamlessly integrated into existing physician workflows and data collection tools without requiring any significant modifications. Numerous healthcare practices have already set up their workflows to gather this information, highlighting its easy adoption. This is evident from the considerable number of practices that report this measure to the Centers for Medicare and Medicaid Services (CMS) via the Merit-based Incentive Payment System (MIPS) program.


            This measure has been widely adopted and proven to be effective. It has been implemented without any issues or feasibility concerns. Therefore, no adjustments to the measure specifications are needed.

             

            4.4 Proprietary Information
            Proprietary measure or components with fees
            4.4a Fees, Licensing, or Other Requirements

            As the world’s leading professional organization for physicians and others engaged in clinical cancer research and cancer patient care, American Society of Clinical Oncology, Inc. (“Society”) and its affiliates1 publish and present a wide range of oncologist-approved cancer information, educational and practice tools, and other content. The ASCO trademarks, including without limitation ASCO®, American Society of Clinical Oncology®, JCO®, Journal of Clinical Oncology®, Cancer.Net™, QOPI®, QOPI Certification Program™, CancerLinQ®, CancerLinQ Discovery®, and Conquer Cancer®, are among the most highly respected trademarks in the fields of cancer research, oncology education, patient information, and quality care. This outstanding reputation is due in large part to the contributions of ASCO members and volunteers. Any goodwill or commercial benefit from the use of ASCO content and trademarks will therefore accrue to the Society and its respective affiliates and further their tax-exempt charitable missions. Any use of ASCO content and trademarks that may depreciate their reputation and value will be prohibited.

             

            ASCO does not charge a licensing fee to not-for-profit hospitals, healthcare systems, or practices to use the measure for quality improvement, research, or reporting to federal programs. ASCO encourages all of these not-for-profit users to obtain a license to use the measure so ASCO can:

            • Keep users informed about measure updates and/or changes
            • Learn from measure users about any implementation challenges to inform future measure updates and/or changes
            • Track measure utilization (outside of federal reporting programs) and performance rates

             

            ASCO has adopted the Council of Medical Specialty Societies’ Code for Interactions with Companies (https://cmss.org/wp-content/uploads/2016/02/CMSS-Code-for-Interactions-with-Companies-Approved-Revised-Version-4.13.15-with-Annotations.pdf), which provides guidance on interactions with for-profit entities that develop, produce, market, or distribute drugs, devices, services, or therapies used to diagnose, treat, monitor, manage, and alleviate health conditions. The Society’s Board of Directors has set the Licensing Standards of the American Society of Clinical Oncology (https://old-prod.asco.org/sites/new-www.asco.org/files/content-files/about-asco/pdf/ASCO-Licensing-Standards-Society-and-affiliates.pdf) to guide all licensing arrangements.

             

            In addition, ASCO has adopted the Council of Medical Specialty Societies’ Policy on Antitrust Compliance (https://cmss.org/wp-content/uploads/2015/09/Antitrust-policy.pdf), which provides guidance on compliance with all laws applicable to its programs and activities, specifically including federal and state antitrust laws, including guidance not to discuss, communicate, or make announcements about fixing prices, allocating customers or markets, or unreasonably restraining trade.

             

            Contact Us:

            • If you have questions about the ASCO Licensing Standards or would like to pursue a licensing opportunity, please contact ASCO’s Division of Licensing, Rights & Permissions at [email protected].
            • Individual authors and others seeking one‐time or limited permissions should contact [email protected]. ASCO members seeking to use an ASCO trademark in connection with a grant, award, or quality initiative should contact the administrator of that particular program.

            1 Unless otherwise specified, the term “ASCO” in these Licensing Standards refers collectively to American Society of Clinical Oncology, Inc., the ASCO Association, Conquer Cancer Foundation of the American Society of Clinical Oncology, CancerLinQ LLC, QOPI Certification Program, LLC, and all other affiliates of the American Society of Clinical Oncology, Inc.

              Testing Data
              5.1.1 Data Used for Testing

              Six datasets provided by CMS' MIPS program and publicly reported were used to test the measure's reliability:

              1. A data set of 580 individual clinicians who reported on the measure in the calendar year 2019 with 556,388 qualifying patient encounters.
              2. A data set of 256 practices that reported on the measure in the calendar year 2019 with 1,147,716 qualifying patient encounters.
              3. A data set of 479 individual clinicians who reported on the measure in the calendar year 2020 with 435,364 qualifying patient encounters.
              4. A data set of 345 practices that reported on the measure in the calendar year 2020 with 1,326,716 qualifying patient encounters.
              5. A data set of 510 individual clinicians who reported on the measure in the calendar year 2021 with 419,712 qualifying patient encounters.
              6. A data set of 353 practices that reported on the measure in the calendar year 2021 with 1,371,688 qualifying patient encounters.

               

              The data source used to test the measure’s validity is 2022 patient data from the McKesson Practice Insights QCDR. McKesson’s Practice Insights QCDR is an oncology-specific reporting and analytics platform that supports a variety of practice value-based care initiatives. The web-based reporting system is fully integrated with the oncology-specific iKnowMed Generation 2 technology, leveraging the clinical data contained within the EHR system and enabling the automated calculation of quality measures and analytics to support improved patient care. Through Practice Insights QCDR, which provides continuous data monitoring and feedback, practices can go beyond simply participating in quality programs toward the goal of optimized patient care and reduced costs. Practice Insights not only supports successful participation in the MIPS program, but it also serves as a powerful reporting platform for practices pursuing other value-based care initiatives and alternative payment models (APMs), including the Enhancing Oncology Model (EOM).

               

              For the purpose of conducting validity testing, 10 community-based oncology practices were randomly selected from the full list of Practice Insights QCDR participants, representing 3% of all 2022 MIPS program participants. From these, a randomized sample of 50 patients per practice, for a total of 500 patients, was selected for full medical record chart audits.

              5.1.2 Differences in Data

              To conduct data element testing with greater granularity, we acquired an additional data set from the McKesson Practice Insights QCDR as the CMS-provided MIPS individual clinician and practice performance data sets were not detailed enough. The CMS-provided data sets were utilized for accountable entity-level testing, while the Practice Insights QCDR-provided data set was used to carry out encounter/patient-level testing.

              5.1.3 Characteristics of Measured Entities

              The clinicians and practices included in the reliability analysis represented all 49 states of the continental United States and ranged from very small sole proprietorships to large academic institutions, according to the information they provided to CMS. For validity analysis, McKesson’s Practice Insights QCDR randomly selected 10 community-based practices across the United States.

              5.1.4 Characteristics of Units of the Eligible Population

              CMS did not capture or provide any patient-level socio-demographic variables; therefore, no patient demographic data are available. McKesson's Practice Insights QCDR masked patients' demographic data to protect privacy during medical chart audits and did not provide patient demographics.

              5.2.2 Method(s) of Reliability Testing

              An assessment of the measure's reliability was performed using signal-to-noise analysis, a method that determines the precision of the actual construct relative to random variation. The signal-to-noise ratio is calculated as the ratio of between-unit variance to total variance. This analysis provides valuable insight into the measure's reliability and its ability to produce consistent results.
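
              The following is a hedged sketch of how such a signal-to-noise reliability calculation might look for a proportion measure, using a simple method-of-moments estimate of the between-entity (signal) variance; the developer's exact estimation approach (for example, a beta-binomial model) may differ, and the function name is an assumption of this sketch.

              import statistics

              def signal_to_noise_reliability(numerators: list[int], denominators: list[int]) -> list[float]:
                  """Per-entity reliability = between-entity (signal) variance divided by
                  the signal variance plus that entity's within-entity (noise) variance."""
                  rates = [n / d for n, d in zip(numerators, denominators)]
                  # Within-entity (noise) variance of each observed rate, binomial approximation
                  within = [max(p * (1 - p), 1e-12) / d for p, d in zip(rates, denominators)]
                  # Between-entity (signal) variance: observed variance of rates minus average noise
                  between = max(statistics.pvariance(rates) - statistics.mean(within), 0.0)
                  return [between / (between + w) for w in within]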

              5.2.3 Reliability Testing Results

              Across an average of 523 individual clinicians and 318 practices over the three calendar years, the reliability of the measure scores ranged from 0.826 to 1.00. The average reliability score was a near-perfect 0.996.

               

              Overall, 100% of clinicians and practices had measure scores with reliabilities of 0.70 or higher, a commonly accepted reliability threshold (Adams 2010). The reliability values were consistently close to the ideal, indicating that the clinician performance rates were highly reliable, and any measurement error was minimal.

               

              Adams, J. L., Mehrotra, A., Thomas, J. W., & McGlynn, E. A. (2010). Physician cost profiling—reliability and risk of misclassification. New England Journal of Medicine, 362(11), 1014-1021.

              5.2.4 Interpretation of Reliability Results

              Based on the available data, it is evident that individual clinicians and practices, even those with a minimal sample size, display reliability coefficients that exceed 0.80. This result indicates that the measure is highly reliable, both at individual clinician and practice levels. Therefore, the performance scores provide a true reflection of the quality of care.

              Table 2. Accountable Entity Level Reliability Testing Results by Denominator, Target Population Size
              Accountable Entity-Level Reliability Testing Results
              Reliability, mean performance score, and N of entities, overall and by decile: see logic model attachment.
              5.3.3 Method(s) of Validity Testing

              For the purpose of checking the validity of the data elements in this measure, a random sample of 500 patients from 10 different test sites was selected. Both a measure abstractor and an automated algorithm were used to score patients on each data element of the measure. The agreement between the two scoring methods was evaluated using the Kappa statistic. Denominator and numerator data elements were assessed for all 500 patients. Since this measure does not have any denominator exclusion or exception data element, these data elements were not tested.
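
              As a hedged illustration of the agreement statistic described above, Cohen's kappa for a binary data element (for example, documented versus not documented) can be computed from the abstractor and algorithm determinations as follows; the function name and data representation are assumptions of this sketch, not the developer's statistical software.

              def cohens_kappa(abstractor: list[bool], algorithm: list[bool]) -> float:
                  """Cohen's kappa: chance-corrected agreement between two binary raters."""
                  n = len(abstractor)
                  both_yes = sum(a and b for a, b in zip(abstractor, algorithm))
                  both_no = sum((not a) and (not b) for a, b in zip(abstractor, algorithm))
                  observed = (both_yes + both_no) / n                   # observed agreement
                  p_yes_1 = sum(abstractor) / n                         # marginal "yes" rates
                  p_yes_2 = sum(algorithm) / n
                  expected = p_yes_1 * p_yes_2 + (1 - p_yes_1) * (1 - p_yes_2)  # chance agreement
                  if expected == 1.0:
                      return 1.0  # both raters constant and identical; agreement is trivially perfect
                  return (observed - expected) / (1 - expected)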

              5.3.4 Validity Testing Results

              Measure Data Element    Measure Component                 Kappa Estimate    Standard Error    95% CI Lower    95% CI Upper
              Denominator             Cancer Diagnosis That's Active    1.0000            0.0000            1.0000          1.0000
              Denominator             Office Visit                      1.0000            0.0000            1.0000          1.0000
              Denominator             Chemotherapy Administration       0.9509            0.0218            0.9081          0.9937
              Denominator             Radiation Treatment Management    0.9081            0.0914            0.7289          1.0000
              Numerator               Pain Assessment Documented        1.0000            0.0000            1.0000          1.0000
               

              5.3.5 Interpretation of Validity Results

              The calculated Kappa coefficient was 0.96 (with a 95% confidence interval of 0.91 to 1.00) for the denominator data element and 1.00 (with a 95% confidence interval of 1.00 to 1.00) for the numerator data element.

               

              The Kappa coefficients were interpreted using the benchmarks for Cohen's Kappa established by Landis and Koch in 1977, which are widely recognized in the field of psychometrics:

              • 0.8 to 1.0 – almost perfect agreement;
              • 0.6 to 0.8 – substantial agreement;
              • 0.4 to 0.6 – moderate agreement;
              • 0.2 to 0.4 – fair agreement;
              • Zero to 0.2 – slight agreement; and
              • Zero or lower – poor agreement.

               

              The evaluation benchmarks suggest that the measure accurately distinguishes between good and poor quality, with nearly perfect validity for both the measure's denominator and numerator.

               

              Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 159-174.

              5.4.1 Methods Used to Address Risk Factors
              5.4.1b Rationale For No Adjustment or Stratification

              N/A

                Use
                6.1.4 Program Details
                Name of the program and sponsor
                Merit-based Incentive Payment System (MIPS) reporting program, Centers for Medicare and Medicaid Services (CMS).
                Purpose of the program
                MIPS encourages improvement in clinical practice and supports advances in technology that allow for easy exchange of information.
                Geographic area and percentage of accountable entities and patients included
                MIPS eligible providers may earn performance-based payment adjustments for the services provided to Medicare patients in the USA.
                Applicable level of analysis and care setting

                Level of measurement and setting: Clinician/Group Level; Registry Data Source; Outpatient Services/Ambulatory Care Setting

                 

                Purpose: MIPS takes a comprehensive approach to payment by basing consideration of quality on a set of evidence-based measures that were primarily developed by clinicians, thus encouraging improvement in clinical practice and supporting advances in technology that allow for easy exchange of information. 

                 

                Geographic area and number and percentage of accountable entities and patients included: MIPS eligible providers may earn performance-based payment adjustments for the services provided to Medicare patients in the USA. Eligible providers include: Physicians (including doctors of medicine, osteopathy, dental surgery, dental medicine, podiatric medicine, and optometry), Osteopathic practitioners, Chiropractors, Physician assistants, Nurse practitioners, Clinical nurse specialists, Certified registered nurse anesthetists, Physical therapists, Occupational therapists, Clinical psychologists, Qualified speech-language pathologists, Qualified audiologists, Registered dietitians or nutrition professionals.

                Name of the program and sponsor
                Practice Insights by McKesson in Collaboration with The US Oncology Network – QCDR
                Purpose of the program
                Practice Insights is a performance analytics tool that helps analyze data generated throughout the patient journey.
                Geographic area and percentage of accountable entities and patients included
                Represents over 10,000 oncology providers nationwide.
                Applicable level of analysis and care setting

                Level of measurement and setting: Oncology practices. 

                 

                Purpose: Practice Insights by McKesson in Collaboration with The US Oncology Network – QCDR. Practice Insights is a performance analytics tool that helps analyze data generated throughout the patient journey to gain proactive, actionable insights into quality initiatives, value-based care programs, performance metrics, productivity measures and peer/industry benchmarks. Practice Insights seamlessly pulls data from multiple sources to create a holistic roadmap that supports the clinical, financial and operational needs of oncology practices.

                 

                Geographic area and number and percentage of accountable entities and patients included: The US Oncology Network (“The Network”) represents over 10,000 oncology physicians, nurses, clinicians, and cancer care specialists nationwide and is one of the nation’s largest and most innovative networks of community-based oncology physicians, treating more than 1.2 million cancer patients annually in more than 450 locations across 25 states. The Network unites over 1,400 like-minded physicians around a common vision of expanding patient access to the highest quality, state-of-the-art care close to home and at lower costs for patients and the health care system.

                Name of the program and sponsor
                ASCO Certified: Patient-Centered Cancer Care Standards
                Purpose of the program
                The new program certifies oncology group practices and health systems that meet a single set of comprehensive, evidence-based oncology medical home standards from ASCO and the Community Oncology Alliance.
                Geographic area and percentage of accountable entities and patients included
                ASCO Certified was informed by a pilot of 12 practice groups and health systems across 95 service sites and 500 oncologists. The cohort comprised a variety of settings, including community, hospital, academic and rural.
                Applicable level of analysis and care setting

                Oncology group practices and health systems. 

                6.2.1 Actions of Measured Entities to Improve Performance

                Providers are evaluated on whether pain intensity is quantified among cancer patients undergoing chemotherapy or radiation; this is an every-visit measure. ASCO has not received feedback that the measure negatively impacts the provider’s workflow. Per the NQF Cancer CDP Fall 2018 Report, the panel agreed that data for this measure are routinely collected, and the measure is feasible.

                6.2.2 Feedback on Measure Performance

                ASCO’s measure development team allows for feedback and measure inquiries from implementers and reporters via email ([email protected]). In addition, we receive questions and feedback through the ONC JIRA system. To date, questions related to coding guidance and the intent of the measure have come through. Otherwise, ASCO has not received feedback on these measures through those avenues.

                6.2.3 Consideration of Measure Feedback

                N/A

                6.2.4 Progress on Improvement

                In evaluating the QPP data, the average performance rate at the individual clinician level hovers around 89 percent, signaling some improvement. However, performance at the practice level remains quite low, indicating that a gap remains. 

                6.2.5 Unexpected Findings

                At this time, we are not aware of any unintended consequences related to this measure. We take unintended consequences very seriously and therefore continuously monitor to identify actions that can be taken to mitigate them.

                  Public Comments
                  First Name
                  Amanda
                  Last Name
                  Overholt

                  Submitted by Amanda on Mon, 01/08/2024 - 15:37


                  Importance

                  Importance Rating
                  Importance

                  Strengths:

                  • The developer cites evidence regarding the incidence of over 1.9 million cancer cases in 2023 and the prevalence of pain among cancer patients during treatment. There is a logic model linking the process whereby providers query cancer patients undergoing chemotherapy or radiation about their pain intensity to optimized pain management therapies, which leads to improved function by way of symptom control and pain management, thereby improving the quality of life of the cancer patient.
                  • The developer cites evidence of insufficient pain control for oncology patients, and disparities exist in pain control management. National Comprehensive Cancer Network's clinical practice guideline recommendations support this measure by recommending: 
                    • Screening all patients for pain at each contact.
                    • Routinely quantifying and documenting pain intensity and quality as characterized by the patient (whenever possible). Include patient reporting of breakthrough pain, treatments used and their impact on pain, satisfaction with pain relief, pain interference, provider assessment of impact on function, and any special issues for the patient relevant to pain treatment and access to care.
                    • Performing comprehensive pain assessment if new or worsening pain is present and regularly for persisting pain.
                    • Performing pain reassessment at specified intervals to ensure that analgesic therapy is providing maximum benefit with minimal adverse effects, and that the treatment plan is followed.
                    • Pain intensity rating scales can be used as part of universal screening and comprehensive pain assessment.
                  • The developer cites disparities in opioid access and dosage among different racial groups, noting that Black and Hispanic patients were less likely to receive opioids than White patients (Black, -4.3 percentage points, 95% CI; Hispanic, -3.6 percentage points, 95% CI) and received lower daily doses (Black, -10.5 MMED, 95% CI; Hispanic, -9.1 MMED, 95% CI).
                  • The mean practice-level performance score declined from 0.68 (2019) to 0.50 (2021), and there remains room for improvement in the bottom 7-8 deciles.
                     

                  Limitations:

                  • Although there is no direct patient input on the meaningfulness of the measure, the developer cites a 2022 study reporting that its patient and caregiver panel placed emphasis on the importance of routine pain screening, management, and follow-up.
                  • There appears to be little room for improvement in clinician-level performance scores, with a mean ranging from 0.88 to 0.90, and meaningful improvement limited to the bottom 3-4 deciles. Developers note that participants are allowed to self-select measures and may select those reflecting high performance rates, which could potentially mask a drop in practice-level performance.

                   

                  Rationale:

                  • There is a business case supported by credible evidence depicting a link between health care processes and desired outcomes for cancer patients. Actions providers can take to reach the desired outcome are outlined. Additionally, a gap in care remains that warrants this measure. Evidence cited showing disparities in access to opioids based on race/ethnicity suggests the possibility of a similar disparity in the measure focus, but this is not documented.

                  Feasibility Acceptance

                  Feasibility Rating
                  Feasibility Acceptance

                  Strengths:

                  • The developer did not identify any data availability issues within the eCQM Feasibility Scorecard, which evaluates data availability, accuracy, and workflow.
                  • Data elements required for the numerator and denominator can be found within structured fields and are recorded using commonly accepted coding standards. The developer notes that the measure's data capture can be seamlessly integrated into existing physician workflows and data collection tools without requiring any significant modifications.
                  • There are no fees to use this measure; however, the developer encourages all not-for-profit users to obtain a license to use the measure. Guidance on interactions with for-profit entities is provided.

                   

                  Limitations:

                  None

                   

                  Rationale:

                  • The necessary data elements required for the numerator and denominator can be found within structured fields and are recorded using commonly accepted coding standards. There are no fees for not-for-profit hospitals, healthcare systems, or practices to use the measure. Guidance on interactions with for-profit entities is provided.

                  Scientific Acceptability

                  Scientific Acceptability Reliability Rating
                  Scientific Acceptability Reliability

                  Strengths:

                  • The measure is well-defined and precisely specified.
                  • Across all years analyzed and individual clinician and practice levels, the reliability scores ranged from 0.826 to 1.000 with an overall average of 0.996. Within year and accountable entity level, the average reliability ranged from 0.994 to 0.998 and the vast majority of facilities had reliability greater than 0.9.
                  • Across all years analyzed and individual clinician and practice levels, hundreds of accountable entities and hundreds of thousands of patient encounters were included in the reliability analysis.
                  • The data were retrieved from 2021-2023 performance reports and reflect calendar years 2019-2021.

                   

                  Limitations:

                  • The Calculation Algorithms for Populations 1 and 2 are very generic and lack details specific to this particular measure.

                   

                  Rationale:

                  • Measure score reliability testing (accountable entity-level reliability) was performed. All practice levels have a reliability that exceeds the accepted threshold of 0.6. The sample size for each year and accountable entity level analyzed is sufficient.
                  Scientific Acceptability Validity Rating
                  Scientific Acceptability Validity

                  Strengths:

                  • The developer tested the validity of the data elements (both numerator and denominator) using a random sample of 500 patient encounters across 10 test sites. The developer scored encounters on each data element using both a measure abstractor and an automated algorithm and then evaluated agreement between the two scoring methods using the Kappa statistic.
                  • Results: 
                    Kappa coefficient for the denominator data element was 0.96 (with a 95% confidence interval of 0.91 to 1.00)
                    Kappa coefficient for the numerator data element was 1.00 (with a 95% confidence interval of 1.00 to 1.00)
                    Based on these results, the developer reports that the measure accurately distinguishes between good and poor quality.
                  • There are no denominator or numerator exclusions for this measure.

                   

                  Limitations:

                  None

                   

                  Rationale:

                  • The developer tested the validity of the data elements (both numerator and denominator) using a random sample of 500 patient encounters across 10 test sites. The developer scored encounters on each data element using both a measure abstractor and an automated algorithm and then evaluated agreement between the two scoring methods using the Kappa statistic.
                  • Results: 
                    Kappa coefficient for the denominator data element was 0.96 (with a 95% confidence interval of 0.91 to 1.00)
                    Kappa coefficient for the numerator data element was 1.00 (with a 95% confidence interval of 1.00 to 1.00)
                    Based on these results, the developer reports that the measure accurately distinguishes between good and poor quality.
                  • There are no denominator or numerator exclusions for this measure.

                  Equity

                  Equity Rating
                  Equity

                  Strengths:

                  N/A

                   

                  Limitations:

                  • The developer uses this section to refer to the importance section; no information is provided in the importance section (or elsewhere) demonstrating that this submission addresses equity as intended; the extent of the support for this criterion appears to be a single study reporting racial/ethnic disparities in opioid prescribing.

                   

                  Rationale:

                  • The developer did not address this optional criterion.

                  Use and Usability

                  Use and Usability Rating
                  Use and Usability

                  Strengths:

                  • Measure currently in use in MIPS (eligible entities can receive performance-based incentives) and the Enhancing Oncology Model (EOM-4; practices take on financial and performance accountability for episodes of care)
                  • Other tools for QI are Practice Insights by McKesson, a performance analytics tool used by subscribing providers, and the Patient-Centered Cancer Care Standards ASCO Certification
                  • A performance gap remains at the practice level, where there could be meaningful improvements in at least the bottom 8 deciles
                  • Providers can send feedback via the CMS Helpdesk or via email to ASCO. They report the only feedback to date has been related to coding guidance and measure intent
                  • No unexpected findings are reported

                   

                  Limitations:

                  • Based on review of the logic model/testing attachment, meaningful improvement in the clinician-level measure is probably limited to the bottom 4 deciles, and no improvement was made from 2019 to 2021; mean performance at the practice level fell between 2019 and 2021 (0.68 to 0.50); however, the developer does not provide a rationale for this decline.

                   

                  Rationale:

                  • The measure is in use in two federal programs, and tools for QI include participation in a McKesson analytics platform (Practice Insights) and an ASCO-sponsored certification program. No feedback that would affect measure specifications, or unexpected findings, are reported.
                  • Room for meaningful improvement in the clinician-level measure is minimal. There appears to be substantial room for improvement in the practice-level measure, but there has been a performance decline from 2019-2021, for which no rationale is offered.

                  Summary

                  N/A

                  First Name
                  Andrew
                  Last Name
                  Kohler

                  Submitted by Andrew on Wed, 01/10/2024 - 11:53


                  Importance

                  Importance Rating
                  Importance

                  The subject matter of pain quantification makes this study important, full stop. In the current climate of the opioid crisis, providers are much more stringent about and aware of the risks of opioids. This sets cancer patients up to lack the needed medication and the interdisciplinary care outlined in the study (given the dearth of healthcare access at large throughout the country).

                   

                  The study is important, and this is outlined by the developer.

                   

                   

                  Copying the staff notes here to ensure they are available to me later:

                  • The developer cites evidence regarding the incidence of over 1.9 million cancer cases in 2023 and the prevalence of pain among cancer patients during treatment. There is a logic model linking the process whereby providers query cancer patients undergoing chemotherapy or radiation about their pain intensity to optimized pain management therapies, which leads to improved function by way of symptom control and pain management, thereby improving the quality of life of the cancer patient.
                  • The developer cites evidence of insufficient pain control for oncology patients, and disparities exist in pain control management. National Comprehensive Cancer Network's clinical practice guideline recommendations support this measure by recommending: 
                    • Screening all patients for pain at each contact.
                    • Routinely quantifying and documenting pain intensity and quality as characterized by the patient (whenever possible). Include patient reporting of breakthrough pain, treatments used and their impact on pain, satisfaction with pain relief, pain interference, provider assessment of impact on function, and any special issues for the patient relevant to pain treatment and access to care.
                    • Performing comprehensive pain assessment if new or worsening pain is present and regularly for persisting pain.
                    • Performing pain reassessment at specified intervals to ensure that analgesic therapy is providing maximum benefit with minimal adverse effects, and that the treatment plan is followed.
                    • Pain intensity rating scales can be used as part of universal screening and comprehensive pain assessment.
                  • The developer cites disparities in opioid access and dosage among different racial groups, noting that Black and Hispanic patients were less likely to receive opioids than White patients (Black, -4.3 percentage points, 95% CI; Hispanic, -3.6 percentage points, 95% CI) and received lower daily doses (Black, -10.5 MMED, 95% CI; Hispanic, -9.1 MMED, 95% CI).
                  • The mean practice-level performance score declined from 0.68 (2019) to 0.50 (2021), and there remains room for improvement in the bottom 7-8 deciles.
                     

                  Limitations:

                  • Although there is no direct patient input on the meaningfulness of the measure, the developer cites a 2022 study reporting that its patient and caregiver panel placed emphasis on the importance of routine pain screening, management, and follow-up.
                  • There appears to be little room for improvement in clinician-level performance scores, with a mean ranging from 0.88 to 0.90, and meaningful improvement limited to the bottom 3-4 deciles. Developers note that participants are allowed to self-select measures and may select those reflecting high performance rates, which could potentially mask a drop in practice-level performance.

                   

                  Rationale:

                  • There is a business case supported by credible evidence depicting a link between health care processes and desired outcomes for cancer patients. Actions providers can take to reach the desired outcome are outlined. Additionally, a gap in care remains that warrants this measure. Evidence cited showing disparities in access to opioids based on race/ethnicity suggests the possibility of a similar disparity in the measure focus, but this is not documented.

                  Feasibility Acceptance

                  Feasibility Rating
                  Feasibility Acceptance

                  See note below, agree - basic equation is in place.

                   

                  Staff note copy:

                  • The necessary data elements required for the numerator and denominator can be found within structured fields and are recorded using commonly accepted coding standards. There are no fees for not-for-profit hospitals, healthcare systems, or practices to use the measure. Guidance on interactions with for-profit entities is provided.

                  Scientific Acceptability

                  Scientific Acceptability Reliability Rating
                  Scientific Acceptability Reliability

                  Power and CI comply with expected medical and scientific standards

                  Scientific Acceptability Validity Rating
                  Scientific Acceptability Validity

                  See above

                   

                  Staff note copied here:

                   

                  Rationale:

                  • The developer tested the validity of the data elements (both numerator and denominator) using a random sample of 500 patient encounters across 10 test sites. The developer scored encounters on each data element using both a measure abstractor and an automated algorithm and then evaluated agreement between the two scoring methods using the Kappa statistic.
                  • Results (see the sketch after this list): 
                    • The Kappa coefficient for the denominator data element was 0.96 (95% confidence interval, 0.91 to 1.00).
                    • The Kappa coefficient for the numerator data element was 1.00 (95% confidence interval, 1.00 to 1.00).
                    • Based on these results, the developer reports that the measure accurately distinguishes between good and poor quality.
                  • There are no denominator or numerator exclusions for this measure.
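                  For reference, the following is a minimal sketch, not the developer's actual testing code, of how agreement between a manual abstractor and an automated algorithm on a binary data element (e.g., "pain intensity quantified" yes/no) can be summarized with Cohen's kappa. The encounter determinations below are hypothetical.

```python
# Minimal sketch (hypothetical data): Cohen's kappa for agreement between a
# manual abstractor and an automated algorithm on a binary data element.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Compute Cohen's kappa for two equal-length lists of labels."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical abstractor vs. algorithm determinations for eight encounters.
abstractor = [1, 1, 0, 1, 0, 1, 1, 0]
algorithm  = [1, 1, 0, 1, 0, 1, 0, 0]
print(round(cohens_kappa(abstractor, algorithm), 2))  # 0.75
```

                  Against this scale, the developer's reported kappas of 0.96 and 1.00 indicate near-perfect agreement between the two scoring methods.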

                  Equity

                  Equity Rating
                  Equity

                  As mentioned in the other two study component reviews - this is addressed, yet not expounded upon. The information is needed and valuable.

                  Use and Usability

                  Use and Usability Rating
                  Use and Usability

                  See other study component notes.

                   

                  Staff notes copy here:

                  • Measure currently in use in MIPS (eligible entities can receive performance-based incentives) and the Enhancing Oncology Model (EOM-4; practices take on financial and performance accountability for episodes of care)
                  • Other tools for QI are Practice Insights by McKesson, a performance analytics tool used by subscribing providers, and the Patient-Centered Cancer Care Standards ASCO Certification
                  • A performance gap remains at the practice level, where there could be meaningful improvements in at least the bottom 8 deciles
                  • Providers can send feedback via the CMS Helpdesk or via email to ASCO. They report the only feedback to date has been related to coding guidance and measure intent
                  • No unexpected findings are reported

                   

                  Limitations:

                  • Based on review of the logic model/testing attachment, meaningful improvement in the clinician-level measure is probably limited to the bottom 4 deciles, and no improvement was made from 2019 to 2021; mean performance at the practice level declined from 0.68 in 2019 to 0.50 in 2021, and the developer does not provide a rationale for this decline.

                  Rationale:

                   

                  The measure is in use in two federal programs, and tools for QI include participation in a McKesson analytics platform (Practice Insights) and an ASCO-sponsored certification program. No feedback that would affect measure specifications, and no unexpected findings, have been reported.

                  • Room for meaningful improvement in the clinician-level measure is minimal. There appears to be substantial room for improvement in the practice-level measure, but there has been a performance decline from 2019-2021, for which no rationale is offered.

                  Summary

                  Met or not met with addressable factors - 

                  First Name
                  Yaakov
                  Last Name
                  Liss

                  Submitted by Yaakov Liss on Mon, 01/15/2024 - 15:31


                  Importance

                  Importance Rating
                  Importance

                  I don't completely understand the difference between measure 0384 and this measure (0384e). Please clarify. 

                  Feasibility Acceptance

                  Feasibility Rating
                  Feasibility Acceptance

                  How is the documentation of pain intensity being done by providers, and how is this being captured in EMRs in order to count for this measure?

                   

                  Are radiation oncologists the providers who need to meet this measure for patients undergoing radiation? If so, how does a single provider-level measure work when the providers include both medical oncologists and radiation oncologists?

                  Scientific Acceptability

                  Scientific Acceptability Reliability Rating
                  Scientific Acceptability Reliability

                  N/A

                  Scientific Acceptability Validity Rating
                  Scientific Acceptability Validity

                  N/A

                  Equity

                  Equity Rating
                  Equity

                  N/A

                  Use and Usability

                  Use and Usability Rating
                  Use and Usability

                  This is a maintenance measure so presumably this is not an issue.

                  Summary

                  Should we be concerned about the effect that this measure may have on opiate prescribing? What monitoring process is in place to ensure that there are no unintended consequences as a result of this measure?

                  First Name
                  Stephen
                  Last Name
                  Weed

                  Submitted by Stephen Weed on Tue, 01/16/2024 - 19:59


                  Importance

                  Importance Rating
                  Importance

                  As I mentioned in my 0384 comments, this is an important clinical aspect of overall care and a huge patient concern. The type of pain monitoring seems to differ depending on the facility used for follow-up care. Whether or not this can be corrected, knowledge is power. Both patients and caregivers would be well served to understand the implications of the results to date. 
                  The 2021 practice performance table shows significant room for improvement among facilities. This measure is essentially unchanged in the three years of reported results. In 2021, 108 of the 353 reporting facilities showed the minimum Pearson score. This represented over 15% of the total encounters.

                   

                  Feasibility Acceptance

                  Feasibility Rating
                  Feasibility Acceptance

                  No concerns here. 

                  Scientific Acceptability

                  Scientific Acceptability Reliability Rating
                  Scientific Acceptability Reliability

                  Well defined data profile.

                  Scientific Acceptability Validity Rating
                  Scientific Acceptability Validity

                  Well defined data profile.

                  Equity

                  Equity Rating
                  Equity

                  Not required. I don't think much is gained by drilling down on demographics. Many people understand the links between communities of color, healthcare access, and quality of care. 

                  Use and Usability

                  Use and Usability Rating
                  Use and Usability

                  The data is being gathered, but this measure alone does not seem to be an incentive for facilities or clinicians to improve their scores. While not wanting to suggest change will be easy, it would serve everyone for ASCO and its associated organizations to provide some context for a path to improvement. 

                  Summary

                  The number of facilities that do not report pain management testing is very disturbing. 

                  First Name
                  Gerri
                  Last Name
                  Lamb

                  Submitted by Gerri Lamb on Wed, 01/17/2024 - 14:05


                  Importance

                  Importance Rating
                  Importance

                  The staff review provides an overview of research support for the importance of this measure. The research provided overviews the incidence of pain assessment in a limited number of studies using chart review and/or observation. Support also is provided for a lack of alignment between pain assessment and treatment, suggesting that simply assessing pain does not necessarily lead to adequate treatment - a situation of necessary but not sufficient. Research support should be provided for the extent to which pain assessment increases the likelihood of a documented plan of care (companion measure 0383) and adequate treatment. 

                  Feasibility Acceptance

                  Feasibility Rating
                  Feasibility Acceptance

                  The measure developers indicate that all data required to report this measure currently are in structured fields, which would support feasibility. They state that there is extensive feedback from a number of sources that this measure is easy to implement; however, no data are provided to support these statements.  

                  Scientific Acceptability

                  Scientific Acceptability Reliability Rating
                  Scientific Acceptability Reliability

                  Reliability of this measure was evaluated with signal-to-noise analyses using recent data. All results were within acceptable limits. 

                  Scientific Acceptability Validity Rating
                  Scientific Acceptability Validity

                  Validity was evaluated using a recent (2022) data set.  Kappa statistics were used to compare manual abstraction and an automated algorithm.  Please ask the measure developers to explain how this calculation is considered a measure of validity rather than reliability. 

                  Equity

                  Equity Rating
                  Equity

                  The measure developers "encourage" stratification of the measure by race and ethnicity, but the measure is not required to be risk stratified. A limited amount of information about pain treatment and opioid access by race is provided in the importance narrative. No subsequent data relevant to equity specifically for this measure are provided. 

                  Use and Usability

                  Use and Usability Rating
                  Use and Usability

                  As noted in the staff review, there is limited room for improvement for this measure at the clinician level, more so at the practice level. Further analysis of usability should be provided to support continued use in MIPS and other quality programs. 

                  Summary

                  The measure has strong face validity; importance should be bolstered with provision of research connecting/showing a relationship between  measurement of pain intensity and appropriateness of treatment (a connection that is illustrated in the logic model). Some of the literature provided suggests that pain measurement doesn't necessarily improve adequacy of treatment. Limited data are provided to support statements of feasibility and equity. Scientific acceptability is supported with recent data from QCDR.  An explanation of the use of Kappa to support validity would be helpful. As noted in the staff review, this measure may be topped out at the clinician level - this should be discussed further.  

                  First Name
                  Erin
                  Last Name
                  Crum

                  Submitted by Erin Crum on Fri, 01/19/2024 - 13:42


                  Importance

                  Importance Rating
                  Importance

                  Monitoring pain intensity is undoubtedly valuable to patient care. However, simply asking a patient about their pain intensity without requiring the clinician to develop a plan to address elevated pain is inadequate. This is evidenced by data provided by the measure steward, as well as Oncology Care Model data showing that although practices tend to perform highly on measures associated with collecting a pain score, avoidable pain continues to be one of the most prevalent reasons for hospital ED visits. Performance benchmarks indicate high performance for practices and individual clinicians asking about pain levels, but we know that pain is a persistent, unmanaged issue for a large percentage of patients with cancer. Given the current state of inadequate pain management and high performance on existing measures, perhaps a more relevant quality measure would be: 1) a combined quality measure to assess both pain intensity and plan of care for pain, or 2) a patient-reported outcome measure indicating pain improvement within a certain follow-up period.

                  Feasibility Acceptance

                  Feasibility Rating
                  Feasibility Acceptance

                  Agree that all measure data elements can be documented in discrete fields within most EHRs. Furthermore, both the eCQM (0384e) and MIPS CQM (registry, 0384) versions of this measure have been fully implemented for the Oncology Care Model and the Enhancing Oncology Model, indicating that EHRs have been able to accommodate the registry version of the measure specification in addition to the eCQM. This sets a precedent that the pain intensity and pain care plan measures could be combined to create a single, more comprehensive measure.

                  Scientific Acceptability

                  Scientific Acceptability Reliability Rating
                  Scientific Acceptability Reliability

                  Data submitted supports the measure’s feasibility, validity and reliability.  Sample size is statistically valid and data element-level testing is robust.

                  Scientific Acceptability Validity Rating
                  Scientific Acceptability Validity

                  Data submitted supports the measure’s feasibility, validity and reliability.  Sample size is statistically valid and data element-level testing is robust.

                  Equity

                  Equity Rating
                  Equity

                  Not addressed at this time, but optional.

                  Use and Usability

                  Use and Usability Rating
                  Use and Usability

                  Users would benefit from clarification around the definition of pain.  For example, it is unclear if all pain should be documented or specifically cancer-related pain.  Working with oncologists, there has been much discussion around whether they should be addressing unrelated pain, such as chronic back pain, a recent broken bone, or nasal sinus pain from a cold.  Clearer guidance within the measure specification would resolve this.

                  In addition to this, in the measure’s current state, there is not clear guidance on situations where a patient is under the care of a medical oncologist and radiation oncologist simultaneously.  For example, if a patient sees both the medical oncologist and radiation oncologist on the same visit day, should both physicians document a pain scale and subsequent plan of care for pain?  Should the patient be in the denominator twice for that same day?  Moreover, how does this impact the patient’s experience of care?

                  Summary

                  The data and additional content provided by the measure steward support re-endorsement. However, this measure has been available for decades, stemming back to the Meaningful Use and Physician Quality Reporting System days. Asking patients about pain has become standard of care, but effectively managing pain occurs less frequently. Current 2023 CMS benchmarks for both versions of this measure specification are high and considered "topped out" - eCQM (93%) and MIPS CQM (85%). What is more relevant to measure is comprehensive pain assessment and management. Therefore, the ideal version of this measure would combine both MIPS 143 (#0384/0384e) and MIPS 144 (0383), which would assess the percentage of cancer patients on treatment who have had their pain assessed and, if pain is present, whether they have a plan of care in place with the care team. This is essentially what CMMI has done for the Enhancing Oncology Model. This would raise the bar, creating greater opportunity for performance improvement and ensuring that action is taken when there is a positive pain score. In addition, it would ensure that the same patient population is being addressed across these two activities (pain score and plan). In its current state, there is no full view of all patients eligible to be screened, with the numerator = (no pain + pain with plan); see the sketch at the end of this summary.

                  Combining the pain intensity and pain care plan into one measure is feasible. Both the eCQM and MIPS CQM (registry) versions of this measure have been fully implemented for the Oncology Care Model and the Enhancing Oncology Model, indicating that EHRs have been able to accommodate the registry version of the measure specification in addition to the eCQM. Furthermore, there is precedent in other similar measures: Depression Screening and Plan of Care, Tobacco Screening and Plan, BMI Screening and Plan, and Alcohol Use Screening and Plan, to name a few. These measures all require screening the full eligible patient population and, if the screen is positive, documenting a plan. 
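                  To make the proposed combination concrete, here is an illustrative sketch of how such a combined measure could be scored. This is an assumption for illustration only, not an existing CMS specification; the encounter fields and values are hypothetical. The numerator counts encounters where pain was quantified and either no pain was present or a plan of care was documented; the denominator is all eligible encounters.

```python
# Illustrative sketch of the commenter's proposed combined measure
# (screening + plan of care); hypothetical fields, not a CMS specification.
from dataclasses import dataclass

@dataclass
class Encounter:
    pain_quantified: bool
    pain_present: bool       # meaningful only if pain_quantified is True
    plan_documented: bool    # meaningful only if pain_present is True

def combined_rate(encounters):
    """Numerator = (no pain + pain with plan); denominator = all eligible encounters."""
    denominator = len(encounters)
    numerator = sum(
        e.pain_quantified and (not e.pain_present or e.plan_documented)
        for e in encounters
    )
    return numerator / denominator if denominator else None

# Hypothetical visits: screened/no pain, pain with plan, pain without plan, not screened.
visits = [
    Encounter(True, False, False),
    Encounter(True, True, True),
    Encounter(True, True, False),
    Encounter(False, False, False),
]
print(combined_rate(visits))  # 0.5
```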

                  First Name
                  Raina
                  Last Name
                  Josberger

                  Submitted by Raina Josberger on Fri, 01/19/2024 - 14:46


                  Importance

                  Importance Rating
                  Importance

                  Many lives are impacted by cancer and its treatments.  

                  Feasibility Acceptance

                  Feasibility Rating
                  Feasibility Acceptance

                  can capture needed datapoints from structured fields. 

                  Scientific Acceptability

                  Scientific Acceptability Reliability Rating
                  Scientific Acceptability Reliability

                  measure is well-defined and precise

                  Scientific Acceptability Validity Rating
                  Scientific Acceptability Validity

                  measure met validity tests.

                  Equity

                  Equity Rating
                  Equity

                  not addressed

                  Use and Usability

                  Use and Usability Rating
                  Use and Usability

                  it is in two measurement programs

                  Summary

                  N/A

                  First Name
                  Karie
                  Last Name
                  Fugate

                  Submitted by Karie Fugate on Fri, 01/19/2024 - 15:11


                  Importance

                  Importance Rating
                  Importance

                  As a patient/caregiver this quality measure (and 0384e) is very important as it discusses encounters with cancer patients receiving chemotherapy or radiation and evaluates their pain intensity (routine screening and management). I did notice in the measure it states, “Although there have been some improvements, as evidenced by data obtained from the CMS Quality Payment Program, subpar pain management amongst cancer patients persists.” This may be addressed in CBE 0383 as that measure discusses a documented plan of care for a cancer patient.

                   

                  Also stated in this measure:

                  Evidence of Performance Gap or Measurement Gap

                  The MIPS-Quality program data were retrieved from 2021-2023 performance reports and reflect calendar years 2019-2021. The average performance rates suggest continued room for improvement in practice performance rates. 

                   

                  It is possible that when data for 2022 – 2023 is available there may be increased performance rates. 

                   

                  On a side note, in the Measure Specifications - Measure Rationale section – calculation of measure score – it says, “If the visit does not meet the numerator, this case represents a quality failure.” To a patient/caregiver this would represent an exclusion – not failure.
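                  For context, here is a minimal sketch of the episode-based proportion logic that the rationale describes. This is simplified illustrative code under my own assumptions, not the actual eCQM logic from the MAT export: each eligible visit either meets the numerator (pain intensity quantified) or counts against the rate as a quality failure, since the measure has no exclusions.

```python
# Minimal sketch (assumed simplification, not the eCQM's CQL): each eligible
# visit is an episode; a visit without pain intensity quantified lowers the
# rate as a quality failure rather than being excluded.
def performance_rate(visits):
    """visits: iterable of dicts with a boolean 'pain_intensity_quantified'."""
    eligible = list(visits)
    met = sum(v["pain_intensity_quantified"] for v in eligible)
    return met / len(eligible) if eligible else None

example_visits = [
    {"pain_intensity_quantified": True},
    {"pain_intensity_quantified": True},
    {"pain_intensity_quantified": False},  # counts as a quality failure
]
print(round(performance_rate(example_visits), 2))  # 0.67
```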

                  Feasibility Acceptance

                  Feasibility Rating
                  Feasibility Acceptance

                  The developer notes that the measure's data capture can be seamlessly integrated into existing physician workflows and data collection tools without requiring any significant modifications.

                  Scientific Acceptability

                  Scientific Acceptability Reliability Rating
                  Scientific Acceptability Reliability

                  N/A

                  Scientific Acceptability Validity Rating
                  Scientific Acceptability Validity

                  N/A

                  Equity

                  Equity Rating
                  Equity

                  The developer notes the below but did not address how this quality-of-care gap will be addressed with this measure. This may be addressed in CBE 0383 as that measure discusses a documented plan of care for a cancer patient.

                   

                  “Disparities exist as well, for example, a recent study evaluated opioid prescription fills and potency among cancer patients near end of life between 2007-2019. The study found that while all patients had a steady decline in opioid access, Black and Hispanic patients were less likely to receive opioids than White patients (Black, -4.3 percentage points, 95% CI; Hispanic, -3.6 percentage points, 95% CI) and received lower daily doses (Black, -10.5 MMED, 95% CI; Hispanic, -9.1 MMED, 95% CI).”

                  Use and Usability

                  Use and Usability Rating
                  Use and Usability

                  The developer notes the below; there is no discussion on how this will be addressed at the practice level. This may be addressed in CBE 0383 as that measure discusses a documented plan of care for a cancer patient.

                   

                  Progress on Improvement

                  In evaluating the QPP data, the average performance rate at the individual clinician level hovers around 89 percent, signaling some improvement. However, performance at the practice level remains quite low, indicating that a gap remains. 

                  Summary

                  N/A

                  First Name
                  Nicole
                  Last Name
                  Keane

                  Submitted by Nicole Keane on Fri, 01/19/2024 - 17:16


                  Importance

                  Importance Rating
                  Importance

                  Business case supported by evidence.  Unclear if measure variation remains as participants are allowed to self-select measures and may select those reflecting high performance rates, which could potentially mask a drop in performance.

                  Feasibility Acceptance

                  Feasibility Rating
                  Feasibility Acceptance

                  The necessary data elements required can be found within structured fields and are recorded using ICD-10.

                  Scientific Acceptability

                  Scientific Acceptability Reliability Rating
                  Scientific Acceptability Reliability

                  Current performance data used. Sample size for each year and accountable entity level analyzed is sufficient.

                  Scientific Acceptability Validity Rating
                  Scientific Acceptability Validity

                  Kappa coefficient threshold met for reliability.

                  Equity

                  Equity Rating
                  Equity

                  No information provided.

                  Use and Usability

                  Use and Usability Rating
                  Use and Usability

                  Measure currently in use in the CMS Merit-based Payment System (MIPS). Providers can send feedback via the CMS Helpdesk or via email to ASCO. 

                  Summary

                  See domain comments

                  First Name
                  Lama
                  Last Name
                  EL Zein

                  Submitted by Lama El Zein on Sat, 01/20/2024 - 10:08


                  Importance

                  Importance Rating
                  Importance

                  Critical point: assessment of pain in all settings of the cancer journey (chemo and radiation) and the importance of addressing it. 

                  Feasibility Acceptance

                  Feasibility Rating
                  Feasibility Acceptance

                  Feasibility is achievable with the use of defined areas in the EMR. 

                  Scientific Acceptability

                  Scientific Acceptability Reliability Rating
                  Scientific Acceptability Reliability

                  decent reliability score 

                  Scientific Acceptability Validity Rating
                  Scientific Acceptability Validity

                  good validity analysis 

                  Equity

                  Equity Rating
                  Equity

                  The developer cites disparities in opioid access and dosage among different racial groups, noting that Black and Hispanic patients were less likely to receive opioids than White patients (Black, -4.3 percentage points, 95% CI; Hispanic, -3.6 percentage points, 95% CI) and received lower daily doses (Black, -10.5 MMED, 95% CI; Hispanic, -9.1 MMED, 95% CI).

                  Reporting per category is important to link it to the care plan and the appropriate use of opioids in all populations. 

                  Use and Usability

                  Use and Usability Rating
                  Use and Usability

                  Need to understand the "no improvement" from 2019-2021. 

                  Summary

                  Given that this population generally falls into advanced illness, I think addressing pain in this population will keep improving. With the tweaks above, the focus should be overall pain; as patients get older, pain as a side effect of radiation or chemotherapy might not be specific to one area. 

                  First Name
                  Emily
                  Last Name
                  Martin

                  Submitted by Emily Martin on Sun, 01/21/2024 - 20:06


                  Importance

                  Importance Rating
                  Importance

                  There is strong evidence for the need for improved/standardized assessment and management of pain among patients undergoing chemotherapy and radiation therapy.

                  Feasibility Acceptance

                  Feasibility Rating
                  Feasibility Acceptance

                  Data elements can easily be collected.

                  Scientific Acceptability

                  Scientific Acceptability Reliability Rating
                  Scientific Acceptability Reliability

                  Strong reliability scores.

                  Scientific Acceptability Validity Rating
                  Scientific Acceptability Validity

                  Strong validity scores.

                  Equity

                  Equity Rating
                  Equity

                  Not addressed but optional.

                  Use and Usability

                  Use and Usability Rating
                  Use and Usability

                  The current use in MIPS and EOM models does not show improvement, but this may be due to variables other than the effectiveness of this measure.

                  Summary

                  Domains meet criteria and/or have readily addressable changes.

                  First Name
                  Sarah
                  Last Name
                  Thirlwell

                  Submitted by Sarah Thirlwell on Sun, 01/21/2024 - 21:31


                  Importance

                  Importance Rating
                  Importance

                  Measure developers address the importance of this measure for a sub-group of oncology patients who receive radiation therapy treatment and those who receive a chemotherapy administration procedure.  Since the endorsement of this measure in 2017, an increasing number of oncology patients receive other forms of treatments that are not addressed by the developers.  Cancer patients receiving other treatment modalities also experience pain and the developers could consider expanding this measure to include these other sub-groups of oncology patients as additional populations in the reported rate of this measure.

                  Feasibility Acceptance

                  Feasibility Rating
                  Feasibility Acceptance

                  The measure specifications do not specify who documents the pain intensity, nor do they capture who documented the score in the electronic health record. Differences in scores across physician practices or clinicians may reflect differences in which oncology team member, from a medical assistant to the oncologist, asked for the patient's score.

                  The measure specifications do not include any exclusions.  While this decreases the burden of data collection, it does not allow for capture of differences in scores and/or exclusions according to patients' cognitive ability to respond to a standard pain instrument or account for patients' choice to decline to provide a rating.

                  Scientific Acceptability

                  Scientific Acceptability Reliability Rating
                  Scientific Acceptability Reliability

                  Measure developers provide evidence of reliability testing and strong inter-rater reliability.

                  Scientific Acceptability Validity Rating
                  Scientific Acceptability Validity

                  Measure developers provide evidence of validity testing and strong validity.

                  Equity

                  Equity Rating
                  Equity

                  Measure developers indicated that differences could exist and that care settings are encouraged to track additional data that could reflect differences in health equity, but these data have not been included in the measure specifications, and analyses based on them were not reported. 

                  Use and Usability

                  Use and Usability Rating
                  Use and Usability

                  Measure developers indicate the use and usability of this measure nationally and internationally as an indicator of quality of oncology care.

                  Summary

                  n/a

                  First Name
                  Brigette
                  Last Name
                  DeMarzo

                  Submitted by Brigette DeMarzo on Mon, 01/22/2024 - 12:23


                  Importance

                  Importance Rating
                  Importance

                  Agree with PQM staff assessment

                  Feasibility Acceptance

                  Feasibility Rating
                  Feasibility Acceptance

                  Agree with PQM staff assessment

                  Scientific Acceptability

                  Scientific Acceptability Reliability Rating
                  Scientific Acceptability Reliability

                  Agree with PQM staff assessment

                  Scientific Acceptability Validity Rating
                  Scientific Acceptability Validity

                  Agree with PQM staff assessment

                  Equity

                  Equity Rating
                  Equity

                  Agree with PQM staff assessment; was unable to find anything specific to equity

                  Use and Usability

                  Use and Usability Rating
                  Use and Usability

                  Would like to see more clarity on how pain is being assessed and potential endorsement of a universal measurement tool (e.g., PROMIS-Pain)

                  Summary

                  Overall, agree with PQM staff comments. Would like to see more on equity and standardization of PROMs used to assess for pain.

                  First Name
                  Dima
                  Last Name
                  Raskolnikov

                  Submitted by Dima Raskolnikov on Mon, 01/22/2024 - 17:24


                  Importance

                  Importance Rating
                  Importance

                  agree with staff assessment

                  Feasibility Acceptance

                  Feasibility Rating
                  Feasibility Acceptance

                  Agree with staff comments; seems straightforward.

                  Scientific Acceptability

                  Scientific Acceptability Reliability Rating
                  Scientific Acceptability Reliability

                  n/a

                  Scientific Acceptability Validity Rating
                  Scientific Acceptability Validity

                  n/a

                  Equity

                  Equity Rating
                  Equity

                  not addressed in submission

                  Use and Usability

                  Use and Usability Rating
                  Use and Usability

                  Would like to see more guidance on how this should be measured: all pain or just cancer-related pain? What is the justification for the scale that is used?

                  Summary

                  n/a

                  First Name
                  Heather
                  Last Name
                  Thompson

                  Submitted by Heather Thompson on Mon, 01/22/2024 - 21:56


                  Importance

                  Importance Rating
                  Importance

                  Measure importance is clearly outlined and supported by the literature.

                  Feasibility Acceptance

                  Feasibility Rating
                  Feasibility Acceptance

                  Data is easily obtained from existing entries in the electronic medical record and is built into existing workflow processes.

                  Scientific Acceptability

                  Scientific Acceptability Reliability Rating
                  Scientific Acceptability Reliability

                  High reliability metrics.

                  Scientific Acceptability Validity Rating
                  Scientific Acceptability Validity

                  High validity metrics.

                  Equity

                  Equity Rating
                  Equity

                  Additional literature review and/or review of existing data utilizing other patient-identifying factors could be performed to further investigate opportunities for equity improvement.

                  Use and Usability

                  Use and Usability Rating
                  Use and Usability

                  Data can be used to identify gaps in care related to pain management. The measure identifies patients who report pain but for whom no plan of care has been developed to address their pain management needs.

                  Summary

                  Measure provides transparency around effective pain management care planning.  While data indicates that there may be little improvement and/or decline in outcomes, this is not likely due to the measure itself but perhaps more indicative of a lack of quality performance improvement efforts/programs.

                  First Name
                  Morris
                  Last Name
                  Hamilton

                  Submitted by Morris Hamilton on Mon, 01/22/2024 - 22:12


                  Importance

                  Importance Rating
                  Importance

                  The information provided adequately summarizes the reason this measure is important and identifies that gaps in the target population currently exist.

                  Feasibility Acceptance

                  Feasibility Rating
                  Feasibility Acceptance

                  All necessary information has been provided. 

                  Scientific Acceptability

                  Scientific Acceptability Reliability Rating
                  Scientific Acceptability Reliability

                  The developers present ANOVA signal-to-noise ratios. At 0.826 and above, estimated entity-level reliability exceeds conventional standards of reliability. Encounter-level reliability is provided in validity section.
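                  For readers unfamiliar with the approach, the following is a minimal sketch of a signal-to-noise reliability calculation for a proportion measure. It is an assumed, simplified formulation for illustration, not the developer's actual ANOVA analysis, and the entity rates and denominators are hypothetical.

```python
# Minimal sketch (assumed formulation, hypothetical data): signal-to-noise
# reliability for a proportion measure, reliability_i = signal / (signal + noise_i),
# where signal is the between-entity variance of rates and noise_i is the
# sampling variance of entity i's observed rate, p_i * (1 - p_i) / n_i.
from statistics import pvariance

# Hypothetical (entity rate, entity denominator) pairs.
entities = [(0.92, 400), (0.85, 150), (0.78, 60), (0.96, 900), (0.70, 35)]

rates = [p for p, _ in entities]
signal = pvariance(rates)  # crude plug-in estimate of between-entity variance

for p, n in entities:
    noise = p * (1 - p) / n          # within-entity sampling variance
    reliability = signal / (signal + noise)
    print(f"rate={p:.2f} n={n:<4d} reliability={reliability:.2f}")
```

                  Larger denominators shrink the noise term, which is why entity-level reliability estimates such as the 0.826 cited above depend on both performance variation across entities and entity sample sizes.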

                  Scientific Acceptability Validity Rating
                  Scientific Acceptability Validity

                  With very high kappa values, encounter-level validity is satisfied. The elements of the measure appear accurately measured.

                   

                  Entity-level validity is not provided. As a maintenance measure that has been in existence for several years, the submission should also include measures of concurrent validity. How correlated is this measure to other measures related to patient quality for pain or cancer? Are the correlations reasonable?

                  Equity

                  Equity Rating
                  Equity

                  The developers indicate that demographic data are not available at the patient-level; however, they do not acknowledge that geographic data of the providers may be available. A comparison of measure performance by Area Deprivation Index may be feasible and may elucidate some information about the relationship between measure performance and equity. Though this domain is optional, I encourage the developers to investigate further.

                  Use and Usability

                  Use and Usability Rating
                  Use and Usability

                  While the developers provide good evidence to suggest that providers can improve, the developers provide flawed evidence of improvement.

                   

                  Without a stable cohort to compare across years, the claim that there is a decline or improvement in performance is spurious. The developers must first present trends across a stable cohort and then argue that the trends are evidence of usability. Note that 2019-2021 spans the PHE, which may further confound interpretation of trends.

                  Summary

                  Overall, the measure is well-defined, important, feasible, and reliable. It is currently in use in several federal programs. It is also an eCQM. The developers should provide additional analyses to improve their submission. At this time, entity-level validity and usability cannot be adequately evaluated. The developers may also consider using geographic data for participants to investigate equity relationships further.

                  First Name
                  Paul
                  Last Name
                  Tatum

                  Submitted by Paul Tatum on Mon, 01/22/2024 - 23:48


                  Importance

                  Importance Rating
                  Importance

                  I think the data around the practice gap still speak to importance. The disparities gap also speaks to importance.

                  Feasibility Acceptance

                  Feasibility Rating
                  Feasibility Acceptance

                  This line says it all: "This is evident from the considerable number of practices that report this measure to the Centers for Medicare and Medicaid Services (CMS) via the Merit-based Incentive Payment System (MIPS) program."

                  Scientific Acceptability

                  Scientific Acceptability Reliability Rating
                  Scientific Acceptability Reliability

                  agree with staff rating

                  Scientific Acceptability Validity Rating
                  Scientific Acceptability Validity

                  The kappa from the 500-patient sample across 10 practice sites seems to suggest this is valid.

                  Equity

                  Equity Rating
                  Equity

                  the gap is certainly there

                  Use and Usability

                  Use and Usability Rating
                  Use and Usability

                  MIPS use

                  Summary

                  Important, feasible, valid, and in use. I think the finding that individual clinician-level performance hovers around 89 percent, signaling some improvement, while practice-level performance remains quite low, indicating that a gap remains, is excellent fodder for the next generation of quality measures in this space.

                   

                  I am pleased to see, given all the focus on the potential harms of opioids, which some have worried has led to restricted access to pain medications for cancer patients, that the developers are not aware of any unintended consequences related to this measure.