
Oncology: Medical and Radiation – Pain Intensity Quantified

CBE ID
0384
Endorsed
New or Maintenance
Is Under Review
No
Measure Description

This measure assesses the percentage of patient visits, regardless of patient age, for patients with a diagnosis of cancer currently receiving chemotherapy or radiation therapy in which pain intensity is quantified. The measure is to be submitted at each denominator-eligible visit occurring during the performance period for patients with a diagnosis of cancer who are seen during the performance/measurement period. The time period for data collection is intended to be 12 consecutive months.

 

There are two submission criteria for this measure: 

 

1) All patient visits for patients with a diagnosis of cancer currently receiving chemotherapy

OR

2) All patient visits for patients with a diagnosis of cancer currently receiving radiation therapy.

 

This measure comprises two populations but is intended to result in one reporting rate. It is a proportion measure; better quality is associated with a higher score.

  • Measure Type
    Composite Measure
    No
    Electronic Clinical Quality Measure (eCQM)
    Measure Rationale

    This measure, CBE 0384, is paired with CBE 0383, Percentage of visits for patients, regardless of age, with a diagnosis of cancer currently receiving chemotherapy or radiation therapy who report having pain with a documented plan of care to address pain. This measure evaluates whether pain intensity is quantified at each visit among cancer patients undergoing chemotherapy or radiation therapy, while CBE 0383 evaluates whether each patient visit includes a documented plan of care among cancer patients who report having pain.

    MAT output not attached
    Attached
    Data dictionary not attached
    Yes
    Numerator

    Submission Criteria 1

    Patient visits in which pain intensity is quantified

     

    Submission Criteria 2

    Patient visits in which pain intensity is quantified

     

    Numerator Instructions:

    Pain intensity should be quantified using a standard instrument, such as a 0-10 numerical rating scale, visual analog scale, a categorical scale, or pictorial scale. Examples include the Faces Pain Rating Scale and the Brief Pain Inventory (BPI).

    Numerator Details

    Time period for data collection: At each visit within the measurement period

     

    Guidance: Pain intensity should be quantified using a standard instrument, such as a 0-10 numerical rating scale, visual analog scale, a categorical scale, or pictorial scale. Examples include the Faces Pain Rating Scale and the Brief Pain Inventory (BPI).

     

    The measure has two submission criteria to capture 1) visits for patients undergoing chemotherapy and 2) visits for patients undergoing radiation therapy. 

     

    For the Submission Criteria 1 and Submission Criteria 2 numerators, report one of the following CPT Category II codes to submit the numerator option for patient visits in which pain intensity was quantified:

     

    1125F: Pain severity quantified; pain present

    OR

    1126F: Pain severity quantified; no pain present
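For illustration, the per-visit numerator check can be sketched as follows (a minimal sketch; the function name and input format are assumptions, not part of the specification):

```python
# Hypothetical helper: a visit meets the numerator when either CPT
# Category II code from the specification (1125F or 1126F) is reported.
PAIN_QUANTIFIED_CODES = {"1125F", "1126F"}  # pain present / no pain present

def visit_meets_numerator(reported_codes):
    """Return True if pain intensity was quantified at this visit."""
    return bool(PAIN_QUANTIFIED_CODES & set(reported_codes))
```

For example, a visit reporting 99213 and 1125F meets the numerator, while a visit reporting only 99213 does not.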

    Denominator

    Submission Criteria 1

    All patient visits, regardless of patient age, with a diagnosis of cancer currently receiving chemotherapy

     

    Denominator Instructions:

    The two chemotherapy administrations must occur on different days within the timeframe of on or within 30 days before the denominator eligible encounter and on or within 30 days after the denominator eligible encounter. Two chemotherapy administrations performed on the same day will not meet the patient procedure requirement.

     

    Submission Criteria 2

     

    All patient visits, regardless of patient age, with a diagnosis of cancer currently receiving radiation therapy

     

    DENOMINATOR NOTE: For the reporting purposes for this measure, in instances where CPT code 77427 is reported, the billing date, which may or may not be the same date as the face-to-face or telehealth encounter with the physician, should be used to pull the appropriate patient population into the denominator. It is expected, though, that the numerator criteria would be performed at the time of the actual face-to-face or telehealth encounter during the series of treatments. A lookback (retrospective) period of 7 days, including the billing date, may be used to identify the actual face-to-face or telehealth encounter, which is required to assess the numerator. Therefore, pain intensity should be quantified during the face-to-face or telehealth encounter occurring on the actual billing date or within the 6 days prior to the billing date.

    Denominator Details

    Time period for data collection: 12 consecutive months

     

    The measure has two submission criteria to capture 1) visits for patients undergoing chemotherapy and 2) visits for patients undergoing radiation therapy. 

     

    Guidance: For patients receiving radiation therapy, pain intensity should be quantified at each radiation treatment management encounter where the patient and physician have a face-to-face interaction. Due to the nature of some applicable coding related to the radiation therapy (eg, delivered in multiple fractions), the billing date for certain codes may or may not be the same as the face-to-face encounter date. For patients receiving chemotherapy, pain intensity should be quantified at each face-to-face encounter with the physician while the patient is currently receiving chemotherapy. For purposes of identifying eligible encounters, patients "currently receiving chemotherapy" refers to patients administered chemotherapy within 30 days prior to the encounter AND administered chemotherapy within 30 days after the date of the encounter.

     

    Submission Criteria 1 Denominator: Visits for patients with a diagnosis of cancer currently receiving chemotherapy

    Diagnosis for cancer (ICD-10-CM) - Due to character limitation, please see codes in the attached Excel file.

    AND

    Patient encounter during the performance period (CPT) – to be used to evaluate remaining denominator criteria and for numerator evaluation: 99202, 99203, 99204, 99205, 99212, 99213, 99214, 99215

    Note: Patient encounters for this measure conducted via telehealth (e.g., encounters coded with GQ, GT, 95, or POS 02 modifiers) are allowable.

    AND

    Patient procedure within 30 days before denominator eligible encounter: 51720, 96401, 96402, 96405, 96406, 96409, 96411, 96413, 96415, 96416, 96417, 96420, 96422, 96423, 96425, 96440, 96446, 96450, 96521, 96522, 96523, 96542, 96549

    AND

    Patient procedure within 30 days after denominator eligible encounter: 51720, 96401, 96402, 96405, 96406, 96409, 96411, 96413, 96415, 96416, 96417, 96420, 96422, 96423, 96425, 96440, 96446, 96450, 96521, 96522, 96523, 96542, 96549
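The two 30-day windows above can be sketched as a date check (an illustrative sketch under assumed inputs, not the official measure logic):

```python
from datetime import date, timedelta

def currently_receiving_chemo(encounter, admin_dates):
    """True if chemotherapy was administered on or within 30 days before
    AND on or within 30 days after the encounter, on two different days."""
    before = {d for d in admin_dates
              if encounter - timedelta(days=30) <= d <= encounter}
    after = {d for d in admin_dates
             if encounter <= d <= encounter + timedelta(days=30)}
    # Two administrations on the same day do not meet the requirement.
    return any(b != a for b in before for a in after)
```

Note that a single administration on the encounter date falls in both windows but does not qualify, since the specification requires two administrations on different days.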

     

    Submission Criteria 2 Denominator: Visits for patients with a diagnosis of cancer currently receiving radiation therapy

    Diagnosis for cancer (ICD-10-CM) - Due to character limitation, please see codes in the attached Excel file.

    AND

    Patient procedure during the performance period (CPT) – Procedure codes: 77427, 77431, 77432, 77435

    DENOMINATOR NOTE: For the reporting purposes for this measure, in instances where CPT code 77427 is reported, the billing date, which may or may not be the same date as the face-to-face or telehealth encounter with the physician, should be used to pull the appropriate patient population into the denominator. It is expected, though, that the numerator criteria would be performed at the time of the actual face-to-face or telehealth encounter during the series of treatments. A lookback (retrospective) period of 7 days, including the billing date, may be used to identify the actual face-to-face or telehealth encounter, which is required to assess the numerator. Therefore, pain intensity should be quantified during the face-to-face or telehealth encounter occurring on the actual billing date or within the 6 days prior to the billing date.
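The 7-day lookback described in the note can be sketched as follows (illustrative only; the date handling and function name are assumptions):

```python
from datetime import date, timedelta

def encounter_in_lookback(billing_date, encounter_date):
    """True if the face-to-face/telehealth encounter falls on the billing
    date or within the 6 days prior (a 7-day window that includes the
    billing date itself)."""
    return billing_date - timedelta(days=6) <= encounter_date <= billing_date
```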

    Denominator Exclusions

    None.

    Denominator Exclusions Details

    None

    Type of Score
    Measure Score Interpretation
    Better quality = Higher score
    Calculation of Measure Score

    PY 2023 measure flow diagram is attached to this submission. 

     

    This measure comprises two submission criteria but is intended to result in one reporting rate. The reporting rate is the aggregate of Submission Criteria 1 and Submission Criteria 2, resulting in a single performance rate. For the purposes of this measure, the single performance rate can be calculated as follows:

    Performance Rate = (Numerator 1 + Numerator 2)/ (Denominator 1 + Denominator 2)
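As a sketch, the aggregate rate can be computed directly from the two numerators and denominators (variable names are illustrative):

```python
def performance_rate(numerator_1, denominator_1, numerator_2, denominator_2):
    """Single reporting rate aggregating both submission criteria."""
    return (numerator_1 + numerator_2) / (denominator_1 + denominator_2)

# Example: 80/100 chemotherapy visits and 45/50 radiation visits
# aggregate to 125/150; note this is not the average of the two rates.
```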

     

    Calculation algorithm for Submission Criteria 1: Visits for patients with a diagnosis of cancer currently receiving chemotherapy

    1. Find the patient visits that qualify for the denominator (i.e., the specific group of patient visits for inclusion in a specific performance measure based on defined criteria). 

    2. From the patient visits within the denominator, find the visits that meet the numerator criteria (i.e., the group of patient visits in the denominator for whom a process or outcome of care occurs). Validate that the number of patient visits in the numerator is less than or equal to the number of patient visits in the denominator.

     

    If the visit does not meet the numerator, this case represents a quality failure.

     

    Calculation algorithm for Submission Criteria 2: Visits for patients with a diagnosis of cancer currently receiving radiation therapy

    1. Find the patient visits that qualify for the denominator (i.e., the specific group of patient visits for inclusion in a specific performance measure based on defined criteria). 

    2. From the patient visits within the denominator, find the visits that meet the numerator criteria (i.e., the group of patient visits in the denominator for whom a process or outcome of care occurs). Validate that the number of patient visits in the numerator is less than or equal to the number of patient visits in the denominator.

     

    If the visit does not meet the numerator, this case represents a quality failure.

     

    Measure Stratification Details

    Available CMS data do not include these supplemental data elements. However, we encourage stratifying the results of this measure by race, ethnicity, administrative sex, and payer, where feasible, given that this is an episode-based measure.

    All information required to stratify the measure results
    Off
    All information required to stratify the measure results
    Off
    Testing Data Sources
    Data Sources

    N/A

    Minimum Sample Size

    It is recommended to adhere to the standard CMS guideline, which stipulates a minimum of 20 denominator counts to calculate the measure. In addition, it is advisable to incorporate data from patients with diverse attributes for optimal results. 

  • Evidence of Measure Importance

    Cancer is the second leading cause of death in the US (1), with an estimated incidence of over 1.9 million new cases in 2023. (2) Pain is one of the most common and debilitating symptoms reported amongst cancer patients; in fact, ICD-11 contains a new classification for chronic cancer-related pain, defining it as chronic pain caused by the primary cancer itself, its metastases, or its treatment. A systematic review found that 55 percent of patients undergoing anticancer treatment reported pain (3), and chemotherapy and radiation specifically are associated with several distinct pain syndromes. (4) Each year, over a million cancer patients in the US receive chemotherapy or radiation. (5) Severe pain increases the risk of anxiety and depression (4), and a recent study showed that cancer patients who reported pain had worse employment and financial outcomes; the greater the pain, the worse the outcomes. (6) Cancer patients have also reported that pain interferes with their mood, work, relationships with other people, sleep, and overall enjoyment of life. (7)

     

    Assessing pain and developing a plan of care (i.e., pain management) are critical for symptom control and the cancer patient’s overall quality of life; they are an essential part of the oncologic management of a cancer patient (see below for specific clinical guideline recommendations). (8) However, many oncology patients report insufficient pain control. (9) A retrospective chart review analysis found 84 percent adherence to the documentation of pain intensity and 43 percent adherence to pain re-assessment within an hour of medication administration. (10) An observational study found that over half of its cancer patients had a negative pain management index score, indicating that the prescribed pain treatments were not commensurate with the pain intensity reported by the patient. (11) Disparities exist as well; for example, a recent study evaluated opioid prescription fills and potency among cancer patients near the end of life between 2007 and 2019. The study found that while all patients had a steady decline in opioid access, Black and Hispanic patients were less likely to receive opioids than White patients (Black, -4.3 percentage points, 95% CI; Hispanic, -3.6 percentage points, 95% CI) and received lower daily doses (Black, -10.5 MMED, 95% CI; Hispanic, -9.1 MMED, 95% CI). (12)

     

    Although there have been some improvements, as evidenced by data obtained from the CMS Quality Payment Program, subpar pain management amongst cancer patients persists. The intent of the paired measures Percentage of patient visits, regardless of patient age, with a diagnosis of cancer currently receiving chemotherapy or radiation therapy in which pain intensity is quantified and Percentage of visits for patients, regardless of age, with a diagnosis of cancer currently receiving chemotherapy or radiation therapy who report having pain with a documented plan of care to address pain is to improve pain management, thereby improving the function and quality of life of the cancer patient.

     

    Specific clinical practice guideline recommendations that support this measure are: (8) 

    1. Screen all patients for pain at each contact.
    2. Routinely quantify and document pain intensity and quality as characterized by the patient (whenever possible). Include patient reporting of breakthrough pain, treatments used and their impact on pain, satisfaction with pain relief, pain interference, provider assessment of impact on function, and any special issues for the patient relevant to pain treatment and access to care.
    3. Perform comprehensive pain assessment if new or worsening pain is present and regularly for persisting pain.
    4. Perform pain reassessment at specified intervals to ensure that analgesic therapy is providing maximum benefit with minimal adverse effects, and that the treatment plan is followed.
    5. Pain intensity rating scales can be used as part of universal screening and comprehensive pain assessment.

    All recommendations are Category 2A - Based upon lower-level evidence, there is uniform NCCN consensus that the intervention is appropriate.

     

    References:

    1. Centers for Disease Control and Prevention. (2023, January 18). Leading Causes of Death. National Center for Health Statistics. https://www.cdc.gov/nchs/fastats/leading-causes-of-death.htm
    2. National Cancer Institute. (2018). Cancer of Any Site - Cancer Stat Facts. Surveillance, Epidemiology, and End Results Program. https://seer.cancer.gov/statfacts/html/all.html
    3. Van den Beuken-van Everdingen, M. H., Hochstenbach, L. M., Joosten, E. A., Tjan-Heijnen, V. C., & Janssen, D. J. (2016). Update on Prevalence of Pain in Patients With Cancer: Systematic Review and Meta-Analysis. Journal of Pain and Symptom Management, 51(6), 1070–1090.e9. https://doi.org/10.1016/j.jpainsymman.2015.12.340
    4. National Cancer Institute. (2019, March 6). Cancer Pain (PDQ®)–Patient Version. https://www.cancer.gov/about-cancer/treatment/side-effects/pain/pain-pdq
    5. Centers for Disease Control and Prevention. (2022, November 2). Information for Health Care Providers on Infections During Chemotherapy. https://www.cdc.gov/cancer/preventinfections/index.htm
    6. Halpern, M. T., de Moor, J. S., & Yabroff, K. R. (2022). Impact of Pain on Employment and Financial Outcomes Among Cancer Survivors. Journal of Clinical Oncology, 40(1), 24–31. https://doi.org/10.1200/JCO.20.03746
    7. Moryl, N., Dave, V., Glare, P., Bokhari, A., Malhotra, V. T., Gulati, A., Hung, J., Puttanniah, V., Griffo, Y., Tickoo, R., Wiesenthal, A., Horn, S. D., & Inturrisi, C. E. (2018). Patient-Reported Outcomes and Opioid Use by Outpatient Cancer Patients. The Journal of Pain, 19(3), 278–290. https://doi.org/10.1016/j.jpain.2017.11.001
    8. National Comprehensive Cancer Network® (NCCN). (2023, July 31). NCCN Clinical Practice Guidelines in Oncology. Adult Cancer Pain, Version 2.2023. http://www.nccn.org
    9. Dela Pena, J. C., Marshall, V. D., & Smith, M. A. (2022). Impact of NCCN Guideline Adherence in Adult Cancer Pain on Length of Stay. Journal of Pain & Palliative Care Pharmacotherapy, 36(2), 95–102. https://doi.org/10.1080/15360288.2022.2066746
    10. El Rahi, C., Murillo, J. R., & Zaghloul, H. (2017, September). Pain Assessment Practices in Patients with Cancer Admitted to the Oncology Floor. J Hematol Oncol Pharm, 7(3), 109–113. https://jhoponline.com/issue-archive/2017-issues/jhop-september-2017-vol-7-no-3/17246-pain-assessment-practices-in-patients-with-cancer-admitted-to-the-oncology-floor
    11. Thronæs, M., Balstad, T. R., Brunelli, C., Løhre, E. T., Klepstad, P., Vagnildhaug, O. M., Kaasa, S., Knudsen, A. K., & Solheim, T. S. (2020). Pain management index (PMI): does it reflect cancer patients' wish for focus on pain? Supportive Care in Cancer, 28(4), 1675–1684. https://doi.org/10.1007/s00520-019-04981-
    12. Enzinger, A. C., Ghosh, K., Keating, N. L., Cutler, D. M., Clark, C. R., Florez, N., Landrum, M. B., & Wright, A. A. (2023). Racial and Ethnic Disparities in Opioid Access and Urine Drug Screening Among Older Patients With Poor-Prognosis Cancer Near the End of Life. Journal of Clinical Oncology, 41(14), 2511–2522. https://doi.org/10.1200/JCO.22.01413
    Table 1. Performance Scores by Decile
    Performance Gap: mean performance score, N of entities, and N of persons/encounters/episodes (overall, minimum, maximum, and by decile): see logic model attachment.
    Meaningfulness to Target Population

    A 2022 study evaluated patient and caregiver perspectives on cancer-related quality measures to inform priorities for health system implementation. Measure concepts related to pain management plans and improvement in pain were among the top five nominated concepts. The study notes that the patient and caregiver panel placed strong emphasis on the importance of routine pain screening, management, and follow-up. (1)

     

    References:

     

    1. O'Hanlon, C. E., Giannitrapani, K. F., Lindvall, C., Gamboa, R. C., Canning, M., Asch, S. M., Garrido, M. M., ImPACS Patient and Caregiver Panel, Walling, A. M., & Lorenz, K. A. (2022). Patient and Caregiver Prioritization of Palliative and End-of-Life Cancer Care Quality Measures. Journal of General Internal Medicine, 37(6), 1429–1435. https://doi.org/10.1007/s11606-021-07041-8
    • Feasibility Assessment

      Not applicable during the Fall 2023 cycle.

      Feasibility Informed Final Measure

      Feedback from EHRs, cancer registries, and oncology practices provides compelling evidence that this measure is easy to implement and presents minimal feasibility challenges. The necessary data elements required for the denominator (active cancer diagnosis, office visit, chemotherapy administration and/or radiation treatment) can be found within structured fields and are recorded using commonly accepted coding standards. The same applies to the numerator data element, which requires documentation of the pain assessment result.

       

      The measure's data capture can be seamlessly integrated into existing physician workflows and data collection tools without requiring any significant modifications. Numerous healthcare practices have already set up their workflows to gather this information, highlighting its easy adoption. This is evident from the considerable number of practices that report this measure to the Centers for Medicare and Medicaid Services (CMS) via the Merit-based Incentive Payment System (MIPS) program.

       

      This measure has been widely adopted and proven to be effective. It has been implemented without any issues or feasibility concerns. Therefore, no adjustments to the measure specifications are needed.

      Proprietary Information
      Proprietary measure or components with fees
      Fees, Licensing, or Other Requirements

      As the world’s leading professional organization for physicians and others engaged in clinical cancer research and cancer patient care, American Society of Clinical Oncology, Inc. (“Society”) and its affiliates(1) publish and present a wide range of oncologist-approved cancer information, educational and practice tools, and other content. The ASCO trademarks, including without limitation ASCO®, American Society of Clinical Oncology®, JCO®, Journal of Clinical Oncology®, Cancer.Net™, QOPI®, QOPI Certification Program™, CancerLinQ®, CancerLinQ Discovery®, and Conquer Cancer®, are among the most highly respected trademarks in the fields of cancer research, oncology education, patient information, and quality care. This outstanding reputation is due in large part to the contributions of ASCO members and volunteers. Any goodwill or commercial benefit from the use of ASCO content and trademarks will therefore accrue to the Society and its respective affiliates and further their tax-exempt charitable missions. Any use of ASCO content and trademarks that may depreciate their reputation and value will be prohibited.

       

      ASCO does not charge a licensing fee to not-for-profit hospitals, healthcare systems, or practices to use the measure for quality improvement, research, or reporting to federal programs. ASCO encourages all of these not-for-profit users to obtain a license to use the measure so ASCO can:

      • Keep users informed about measure updates and/or changes
      • Learn from measure users about any implementation challenges to inform future measure updates and/or changes
      • Track measure utilization (outside of federal reporting programs) and performance rates

       

      ASCO has adopted the Council of Medical Specialty Societies’ Code for Interactions with Companies (https://cmss.org/wp-content/uploads/2016/02/CMSS-Code-for-Interactions-with-Companies-Approved-Revised-Version-4.13.15-with-Annotations.pdf), which provides guidance on interactions with for-profit entities that develop, produce, market, or distribute drugs, devices, services, or therapies used to diagnose, treat, monitor, manage, and alleviate health conditions. The Society’s Board of Directors has set the Licensing Standards of American Society of Clinical Oncology (https://old-prod.asco.org/sites/new-www.asco.org/files/content-files/about-asco/pdf/ASCO-Licensing-Standards-Society-and-affiliates.pdf) to guide all licensing arrangements.

       

      In addition, ASCO has adopted the Council of Medical Specialty Societies’ Policy on Antitrust Compliance (https://cmss.org/wp-content/uploads/2015/09/Antitrust-policy.pdf), which provides guidance on compliance with all laws applicable to its programs and activities, specifically including federal and state antitrust laws, including guidance not to discuss, communicate, or make announcements about fixing prices, allocating customers or markets, or unreasonably restraining trade.

       

      Contact Us:

      • If you have questions about the ASCO Licensing Standards or would like to pursue a licensing opportunity, please contact ASCO’s Division of Licensing, Rights & Permissions at [email protected].
      • Individual authors and others seeking one‐time or limited permissions should contact [email protected]. ASCO members seeking to use an ASCO trademark in connection with a grant, award, or quality initiative should contact the administrator of that particular program.

       

      1 Unless otherwise specified, the term “ASCO” in these Licensing Standards refers collectively to American Society of Clinical Oncology, Inc., the ASCO Association, Conquer Cancer Foundation of the American Society of Clinical Oncology, CancerLinQ LLC, QOPI Certification Program, LLC, and all other affiliates of the American Society of Clinical Oncology, Inc.

       

    • Data Used for Testing

      Five datasets provided by CMS' MIPS program and publicly reported were used to test the measure's reliability:

      • A data set of 75 practices that reported on the measure in the calendar year 2019 with 282,919 qualifying patient encounters.
      • A data set of 77 individual clinicians who reported on the measure in the calendar year 2020 with 63,513 qualifying patient encounters.
      • A data set of 61 practices that reported on the measure in the calendar year 2020 with 183,936 qualifying patient encounters.
      • A data set of 76 individual clinicians who reported on the measure in the calendar year 2021 with 57,709 qualifying patient encounters.
      • A data set of 51 practices that reported on the measure in the calendar year 2021 with 156,913 qualifying patient encounters.

       

      The data source used to test the measure’s validity is 2022 patient data from the McKesson Practice Insights QCDR. McKesson’s Practice Insights QCDR is an oncology-specific reporting and analytics platform that supports a variety of practice value-based care initiatives. The web-based reporting system is fully integrated with the oncology-specific iKnowMed Generation 2 technology, leveraging the clinical data contained within the EHR system and enabling the automated calculation of quality measures and analytics to support improved patient care. Through Practice Insights QCDR, which provides continuous data monitoring and feedback, practices can move beyond simply participating in quality programs toward optimized patient care and reduced costs. Practice Insights not only supports successful participation in the MIPS program, but it also serves as a powerful reporting platform for practices pursuing other value-based care initiatives and alternative payment models (APMs), including the Enhancing Oncology Model (EOM).

       

      For the purpose of conducting validity testing, 10 community-based oncology practices were randomly selected from the full list of Practice Insights QCDR participants, representing 3% of all 2022 MIPS program participants. From these, a randomized sample of 50 patients per practice (500 patients in total) was selected for full medical record chart audits.

      Differences in Data

      To conduct data element testing with greater granularity, we acquired an additional data set from the McKesson Practice Insights QCDR as the CMS-provided MIPS individual clinician and practice performance data sets were not detailed enough. The CMS-provided data sets were utilized for accountable entity-level testing, while the Practice Insights QCDR-provided data set was used to carry out encounter/patient-level testing.

      _____________________________________________________________________________________

      Characteristics of Measured Entities

      The clinicians and practices included in the reliability analysis represented all 49 states of the continental United States and ranged from very small single proprietorships to large academic institutions according to the information they provided to the CMS. For validity analysis, McKesson’s Practice Insights QCDR randomly selected 10 community-based practices across the United States.

      Characteristics of Units of the Eligible Population

      CMS did not capture nor provide any patient-level socio-demographic variables and therefore no patient demographic data is available. McKesson's Practice Insights QCDR masked patients' demographic data to protect privacy during medical chart audits and did not provide patient demographics.

  • Level(s) of Reliability Testing Conducted
    Method(s) of Reliability Testing

    The measure's reliability was assessed using signal-to-noise analysis, a method that quantifies how much of the variation in measure scores reflects true differences between entities (the signal) rather than random measurement variation (the noise). The signal-to-noise ratio is calculated as the ratio of between-unit variance to total variance. This analysis provides valuable insight into the measure's reliability and its ability to produce consistent results.
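One simplified formulation of signal-to-noise reliability for a proportion measure can be sketched as follows (an illustrative sketch; the actual testing may have used a different statistical model, such as a beta-binomial formulation):

```python
from statistics import pvariance

def snr_reliability(entity_rates, entity_counts):
    """Per-entity reliability = between-entity variance /
    (between-entity variance + within-entity sampling variance)."""
    var_between = pvariance(entity_rates)  # signal: variation across entities
    reliabilities = []
    for p, n in zip(entity_rates, entity_counts):
        var_within = p * (1 - p) / n  # noise: binomial sampling variance
        reliabilities.append(var_between / (var_between + var_within))
    return reliabilities
```

Reliability approaches 1.0 when between-entity differences dominate sampling noise, which is why larger denominator counts per entity yield higher reliability.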

    Reliability Testing Results

    Across an average of 77 individual clinicians over 2 calendar years and 62 practices over 3 calendar years, the reliability of the measure scores ranged from 0.859 to 1.00, with a near-perfect average reliability of 0.997.

     

    Overall, 100% of clinicians and practices had measure scores with reliabilities of 0.70 or higher, a commonly accepted reliability threshold (Adams 2010). The reliability values were consistently close to the ideal, indicating that the clinician performance rates were highly reliable, and any measurement error was minimal.

     

    Adams, J. L., Mehrotra, A., Thomas, J. W., & McGlynn, E. A. (2010). Physician cost profiling—reliability and risk of misclassification. New England Journal of Medicine, 362(11), 1014-1021.

    Accountable Entity-Level Reliability Testing Results
    Reliability, mean performance score, and N of entities (overall, minimum, and by decile): see logic model attachment.
    Interpretation of Reliability Results

    Based on the available data, it is evident that individual clinicians and practices, even those with a minimal sample size, display reliability coefficients that exceed 0.80. This result indicates that the measure is highly reliable, both at individual clinician and practice levels. Therefore, the performance scores provide a true reflection of the quality of care.

  • Method(s) of Validity Testing

    For the purpose of checking the validity of the data elements in this measure, a random sample of 500 patients from 10 different test sites was selected. Both a measure abstractor and an automated algorithm were used to score patients on each data element of the measure. The agreement between the two scoring methods was evaluated using the Kappa statistic. Denominator and numerator data elements were assessed for all 500 patients. Since this measure does not have any denominator exclusion or exception data element, these data elements were not tested.
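The agreement statistic can be sketched as follows (an illustrative implementation of Cohen's kappa for a binary data element; not the exact software used in testing):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters,
    here the manual abstractor and the automated algorithm."""
    n = len(rater_a)
    # Observed agreement: share of cases where both raters agree.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal frequencies.
    labels = set(rater_a) | set(rater_b)
    p_expected = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)
                     for l in labels)
    return (p_observed - p_expected) / (1 - p_expected)
```

Kappa is 1.0 under perfect agreement and falls toward 0 as agreement approaches what chance alone would produce.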

    Validity Testing Results

    Measure Data Element    Measure Component    Kappa Estimate    Standard Error    95% Confidence Limits
    Denominator    Cancer Diagnosis That's Active    1.0000    0.0000    1.0000    1.0000
    Denominator    Office Visit    1.0000    0.0000    1.0000    1.0000
    Denominator    Chemotherapy Administration    0.9509    0.0218    0.9081    0.9937
    Denominator    Radiation Treatment Management    0.9081    0.0914    0.7289    1.0000
    Numerator    Pain Assessment Documented    1.0000    0.0000    1.0000    1.0000
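The confidence limits in the table appear consistent with Wald-type intervals (estimate ± 1.96 × SE, truncated at 1.0); as a check, for the Chemotherapy Administration row:

```python
# Consistency check on the table above: a Wald-type 95% CI is
# kappa +/- 1.96 * SE, truncated at 1.0. Using the Chemotherapy
# Administration row (kappa = 0.9509, SE = 0.0218):
kappa, se = 0.9509, 0.0218
lower = kappa - 1.96 * se
upper = min(kappa + 1.96 * se, 1.0)
print(f"95% CI: ({lower:.4f}, {upper:.4f})")  # -> 95% CI: (0.9082, 0.9936)
```

This reproduces the table's limits of 0.9081 and 0.9937 to within rounding.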
     

    The Kappa coefficients were interpreted using the benchmarks for Cohen's Kappa established by Landis and Koch in 1977, which are widely recognized in the field of psychometrics:

    • 0.81 to 1.00 – almost perfect agreement;
    • 0.61 to 0.80 – substantial agreement;
    • 0.41 to 0.60 – moderate agreement;
    • 0.21 to 0.40 – fair agreement;
    • 0.00 to 0.20 – slight agreement; and
    • Below zero – poor agreement.
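The benchmark categories above can be expressed as a small helper. Boundary handling is a judgment call; here each category includes its lower bound.

```python
# Map a kappa estimate to the Landis & Koch (1977) agreement categories
# listed above. Each category includes its lower bound.

def landis_koch(kappa: float) -> str:
    if kappa < 0:
        return "poor"
    for bound, label in [(0.8, "almost perfect"), (0.6, "substantial"),
                         (0.4, "moderate"), (0.2, "fair")]:
        if kappa >= bound:
            return label
    return "slight"

print(landis_koch(0.9509))  # Chemotherapy Administration row -> almost perfect
```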

     

    Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159-174.

    Interpretation of Validity Results

    The calculated Kappa coefficient was 0.96 (with a 95% confidence interval of 0.91 to 1.00) across the denominator data elements and 1.00 (with a 95% confidence interval of 1.00 to 1.00) for the numerator data element.

    Against the benchmarks above, agreement was almost perfect for the denominator data elements and perfect for the numerator data element, supporting the validity of the data used to calculate the measure.

  • Methods used to address risk factors
    If an outcome or resource use measure is not risk adjusted or stratified

    N/A

    Risk adjustment approach
    Off
    Conceptual model for risk adjustment
    Off
  • Contributions Towards Advancing Health Equity

    See measure rationale section. 

    • Name of the program and sponsor
      Merit-based Incentive Payment System (MIPS) reporting program, Centers for Medicare & Medicaid Services (CMS). This measure has been in the MIPS program (formerly PQRS) since its inception and is QPP #143.
      Purpose of the program
      MIPS encourages improvement in clinical practice and supports advances in technology that allow for easy exchange of information.
      Geographic area and percentage of accountable entities and patients included
      MIPS eligible providers may earn performance-based payment adjustments for the services provided to Medicare patients in the USA.
      Applicable level of analysis and care setting

      Clinician/Group Level; Registry Data Source; Outpatient Services/Ambulatory Care Setting

       

      Eligible providers include: Physicians (including doctors of medicine, osteopathy, dental surgery, dental medicine, podiatric medicine, and optometry), Osteopathic practitioners, Chiropractors, Physician assistants, Nurse practitioners, Clinical nurse specialists, Certified registered nurse anesthetists, Physical therapists, Occupational therapists, Clinical psychologists, Qualified speech-language pathologists, Qualified audiologists, Registered dietitians or nutrition professionals.

    • Name of the program and sponsor
      Enhancing Oncology Model, Centers for Medicare & Medicaid Services (CMS). This measure is listed as EOM-4.
      Purpose of the program
      Under EOM, participating oncology practices will take on financial and performance accountability for episodes of care surrounding systemic chemotherapy administration to patients with common cancer types.
      Geographic area and percentage of accountable entities and patients included
      There are 44 practices and three payers participating, nationwide. EOM includes two risk arrangements with differing levels of downside risk.
      Applicable level of analysis and care setting

      Level of measurement and setting: Oncology practices; the measure source is EOM participant reported and measure is reported in aggregate across all patients. 

       

      Purpose: The measure is included in the Enhancing Oncology Model (EOM) as part of the “management of symptoms toxicity” domain. The EOM, a CMS Innovation Center model, is a 5-year voluntary model beginning on July 1, 2023, that aims to improve quality and reduce costs through payment incentives and required participant redesign activities. Under EOM, participating oncology practices will take on financial and performance accountability for episodes of care surrounding systemic chemotherapy administration to patients with common cancer types. EOM supports President Biden’s Unity Agenda and Cancer Moonshot initiative to improve the experience of people and their families living with and surviving cancer. Seven cancer types are included in the model:

      1. breast cancer
      2. chronic leukemia
      3. lung cancer
      4. lymphoma
      5. multiple myeloma
      6. prostate cancer
      7. small intestine / colorectal
    • Name of the program and sponsor
      Practice Insights by McKesson in Collaboration with The US Oncology Network – QCDR. The measure is listed as QID 143.
      Purpose of the program
      Practice Insights seamlessly pulls data from multiple sources to create a holistic roadmap that supports the clinical, financial and operational needs of oncology practices.
      Geographic area and percentage of accountable entities and patients included
      Over 10,000 oncology physicians, nurses, clinicians, and cancer care specialists nationwide, treating more than 1.2 million cancer patients annually in more than 450 locations across 25 states.
      Applicable level of analysis and care setting

      Level of measurement and setting: Oncology practices. 

       

      Purpose: Practice Insights by McKesson in Collaboration with The US Oncology Network – QCDR. Practice Insights is a performance analytics tool that helps analyze data generated throughout the patient journey to gain proactive, actionable insights into quality initiatives, value-based care programs, performance metrics, productivity measures and peer/industry benchmarks. Practice Insights seamlessly pulls data from multiple sources to create a holistic roadmap that supports the clinical, financial and operational needs of oncology practices.

       

      Geographic area and number and percentage of accountable entities and patients included: The US Oncology Network (“The Network”) represents over 10,000 oncology physicians, nurses, clinicians, and cancer care specialists nationwide and is one of the nation’s largest and most innovative networks of community-based oncology physicians, treating more than 1.2 million cancer patients annually in more than 450 locations across 25 states. The Network unites over 1,400 like-minded physicians around a common vision of expanding patient access to the highest quality, state-of-the-art care close to home and at lower costs for patients and the health care system.

    • Name of the program and sponsor
      ASCO Certified: Patient-Centered Cancer Care Standards
      Purpose of the program
      The new program certifies oncology group practices and health systems that meet a single set of comprehensive, evidence-based oncology medical home standards from ASCO and the Community Oncology Alliance.
      Geographic area and percentage of accountable entities and patients included
      ASCO Certified was informed by a pilot of 12 practice groups and health systems across 95 service sites and 500 oncologists. The cohort comprised a variety of settings, including community, hospital, academic and rural.
      Applicable level of analysis and care setting

      Level of measurement and setting: Oncology group practices and health systems. 

       

      Purpose: The new program certifies oncology group practices and health systems that meet a single set of comprehensive, evidence-based oncology medical home standards from ASCO and the Community Oncology Alliance. Benefits include recognition through ASCO as a preferred quality provider to payers and all cancer care delivery stakeholders, a single set of evidence-based standards, participation in a learning collaborative, and ongoing assessment and improvement support.

    Actions of Measured Entities to Improve Performance

    Providers are evaluated on whether pain intensity is quantified among cancer patients undergoing chemotherapy or radiation; this is an every-visit measure. ASCO has not received feedback that the measure negatively impacts provider workflow. Per the NQF Cancer CDP Fall 2018 Report, the panel agreed that data for this measure are routinely collected and the measure is feasible. 

    Feedback on Measure Performance

    ASCO’s measure development team accepts feedback and measure inquiries from implementers and reporters via email ([email protected]) and also receives questions through the CMS Helpdesk. To date, the only inquiries received have concerned coding guidance and the intent of the measure; ASCO has not received other feedback on these measures through those avenues. 

     

    Consideration of Measure Feedback

    N/A

    Progress on Improvement

    In evaluating the QPP data, the average performance rate on this measure increased three percentage points between performance periods 2019 and 2021, indicating some improvement. However, a gap remains, particularly at the practice level. 

    Unexpected Findings

    At this time, we are not aware of any unintended consequences related to this measure. We take unintended consequences very seriously and therefore continuously monitor to identify actions that can be taken to mitigate them.

  • Most Recent Endorsement Activity
    Advanced Illness and Post-Acute Care Fall 2023
    Initial Endorsement
    Next Planned Maintenance Review
    Advanced Illness and Post-Acute Care Fall 2028
    Endorsement Status
    E&M Committee Rationale/Justification
    • Explore, with the developer’s TEP, adding mention of other specific measurement tools that can be used to support the measure.
    • Include additional guidance for caregivers, namely for patients with cognitive impairment. For instance, add guidance noting that alternative methods of assessment, such as observations, behavioral cues, or care plans, may be employed.
    Last Updated
  • Do you have a secondary measure developer point of contact?
    On
    Measure Developer Secondary Point Of Contact

    Caitlin Drumheller
    American Society of Clinical Oncology
    2318 Mill Road
    Suite 800
    Alexandria, VA 22314
    United States

    Measure Developer Secondary Point Of Contact Phone Number
    The measure developer is NOT the same as measure steward
    Off
    Steward Address

    United States

    • Submitted by Amanda on Mon, 01/08/2024 - 14:55


      Importance

      Importance Rating
      Importance

      Strengths:

      • The developer cites evidence of the incidence of over 1.9 million new cancer cases in 2023 and the prevalence of pain among cancer patients during treatment. A logic model links the process, in which providers query cancer patients undergoing chemotherapy or radiation about their pain intensity and optimize pain management therapies, to improved function through symptom control and pain management, thereby improving the quality of life of the cancer patient.
      • The developer cites evidence of insufficient pain control for oncology patients, and disparities exist in pain control management. National Comprehensive Cancer Network's clinical practice guideline recommendations support this measure by recommending: 
        • Screening all patients for pain at each contact.
        • Routinely quantifying and documenting pain intensity and quality as characterized by the patient (whenever possible). Include patient reporting of breakthrough pain, treatments used and their impact on pain, satisfaction with pain relief, pain interference, provider assessment of impact on function, and any special issues for the patient relevant to pain treatment and access to care.
        • Performing comprehensive pain assessment if new or worsening pain is present and regularly for persisting pain.
        • Performing pain reassessment at specified intervals to ensure that analgesic therapy is providing maximum benefit with minimal adverse effects, and that the treatment plan is followed.
        • Pain intensity rating scales can be used as part of universal screening and comprehensive pain assessment.
      • The developer cites disparities in opioid access and dosage among different racial groups, noting that Black and Hispanic patients were less likely to receive opioids than White patients (Black, -4.3 percentage points, 95% CI; Hispanic, -3.6 percentage points, 95% CI) and received lower daily doses (Black, -10.5 MMED, 95% CI; Hispanic, -9.1 MMED, 95% CI).
         

      Limitations:

      • Although there was no direct patient input on the meaningfulness of the measure, the developer cites a 2022 study reporting that the study's patient and caregiver panel placed emphasis on the importance of routine pain screening, management, and follow-up.
      • Individual clinician performance rates from 2020 range from 0.76 to 0.95 in decile 2 (n=77 clinicians), and reach 0.98 in decile 3 (n=76), indicating potentially little room for meaningful improvement; performance scores in 2021 are similar. Practice-level performance scores show some room for improvement in deciles 1-4, and reporting practices dropped from 75 to 51 between 2019 and 2021. Developers note that participants are allowed to self-select measures and may select those reflecting high performance rates, which could potentially mask a drop in performance.

      Rationale:

      • There is a business case supported by credible evidence depicting a link between health care processes and desired outcomes for cancer patients. Actions providers can take to reach the desired outcome are outlined. Based on reporting clinicians and practices, a performance gap may only exist for the bottom 2-4 deciles; however, reported performance may skew high since participants can self-select the measures they would like to report. Evidence cited showing disparities in access to opioids based on race/ethnicity suggests the possibility of a similar disparity in the measure focus, but this is not documented.
      • The committee should consider whether or not a gap still exists for this measure.

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      Strengths:

      • Data elements required for the numerator and denominator can be found within structured fields and are recorded using commonly accepted coding standards. The developer notes that the measure's data capture can be seamlessly integrated into existing physician workflows and data collection tools without requiring any significant modifications.
      • There are no fees to use this measure; however, the developer encourages all not-for-profit users to obtain a license to use the measure. Guidance on interactions with for-profit entities is provided.

       

      Limitations:

      None

       

      Rationale:

      The necessary data elements required for the numerator and denominator can be found within structured fields and are recorded using commonly accepted coding standards. There are no fees for not-for-profit hospitals, healthcare systems, or practices to use the measure. Guidance on interactions with for-profit entities is provided.

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      Strengths:

      • The measure is well-defined and precisely specified.
      • Across all years analyzed and individual clinician and practice levels, the reliability scores ranged from 0.859 to 1.000 with an overall average of 0.997. Within year and accountable entity level, the average reliability ranged from 0.993 to 0.999 and almost all facilities had reliability greater than 0.9.
      • Across all years analyzed and individual clinician and practice levels, dozens of accountable entities and tens of thousands of patient encounters were included in the reliability analysis.
      • The data were retrieved from 2021-2023 performance reports and reflect calendar years 2019-2021.

       

      Limitations:

      • The Calculation Algorithms for Populations 1 and 2 are very generic and lack details specific to this particular measure.

       

      Rationale:

      Measure score reliability testing (accountable entity-level reliability) was performed. All practice levels have a reliability that exceeds the accepted threshold of 0.6. Sample sizes for each year and accountable entity level analyzed are sufficient.

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      Strengths:

      • The developer tested the validity of the data elements (both numerator and denominator) using a random sample of 500 patient encounters across 10 test sites. The developer scored encounters on each data element using both a measure abstractor and an automated algorithm and then evaluated agreement between the two scoring methods using the Kappa statistic.
      • Results: 
        Kappa coefficient for the denominator data element was 0.96 (with a 95% confidence interval of 0.91 to 1.00), indicating almost perfect agreement.
        Kappa coefficient for the numerator data element was 1.00 (with a 95% confidence interval of 1.00 to 1.00), indicating perfect agreement.
      • There are no denominator or numerator exclusions for this measure.

       

      Limitations:

      None

       

      Rationale:

      • The developer tested the validity of the data elements (both numerator and denominator) using a random sample of 500 patient encounters across 10 test sites. The developer scored encounters on each data element using both a measure abstractor and an automated algorithm and then evaluated agreement between the two scoring methods using the Kappa statistic.
      • Results: 
        Kappa coefficient for the denominator data element was 0.96 (with a 95% confidence interval of 0.91 to 1.00), indicating almost perfect agreement.
        Kappa coefficient for the numerator data element was 1.00 (with a 95% confidence interval of 1.00 to 1.00), indicating perfect agreement.
      • There are no denominator or numerator exclusions for this measure.

      Equity

      Equity Rating
      Equity

      Strengths:

      N/A

       

      Limitations:

      • Developers use this section to refer to the measure rationale, but no information is provided there (or elsewhere) demonstrating that this submission addresses equity as intended; the only support for this criterion appears to be a single study reporting racial/ethnic disparities in opioid prescribing.

       

      Rationale:

      Developer did not address this optional criterion.

      Use and Usability

      Use and Usability Rating
      Use and Usability

      Strengths:

      • Measure currently in use in MIPS (eligible entities can receive performance-based incentives) and the Enhancing Oncology Model (EOM-4; practices take on financial and performance accountability for episodes of care).
      • Other tools for QI are Practice Insights by McKesson, a performance analytics tool used by subscribing providers, and the Patient-Centered Cancer Care Standards ASCO Certification.
      • Developers reference a mean 3 percentage point improvement in the measure from 2019 to 2021.
      • Providers can send feedback via the CMS Helpdesk or via email to ASCO. They report the only feedback to date has been related to coding guidance and measure intent.
      • No unexpected findings are reported.

       

      Limitations:

      • Developers cite improvement of 3 percentage points; based on the logic model/testing attachment, it appears that only the clinician-level measure showed that improvement between 2020 and 2021. The practice-level measure appears to show a decline from an overall mean of 0.92 in 2019 to 0.84 in 2021 (with a minimum performance of 0.80 in 2020). In addition, the room for meaningful improvement is likely limited to the bottom 2 (clinician) to 4 (practice) deciles. Finally, the developer does not provide a rationale for the decline in the practice-level measure.

       

      Rationale:

      • The measure is in use in two federal programs, and tools for QI include participation in a McKesson analytics platform (Practice Insights) and an ASCO-sponsored certification program. No feedback that would affect measure specifications, or unexpected findings, are reported.
      • Developer reports improvement of 3 percentage points from 2020-2021 (clinician-level), but room for meaningful improvement for either clinician-level or practice-level performance may be limited to the lowest deciles. No rationale is provided for an apparent decline in the practice-level performance scores.

      Summary

      N/A

    • Submitted by Andrew on Wed, 01/10/2024 - 11:34


      Importance

      Importance Rating
      Importance

      Pain is a subjective report, inherently difficult to quantify. The importance of this study is difficult to overstate, given the pendulum swings during the "5th vital sign" era and the opiate crisis of today.

       

      Agree with staff review points copied here:

      • Although there was no direct patient input on the meaningfulness of the measure, the developer cites a 2022 study reporting that the study's patient and caregiver panel placed emphasis on the importance of routine pain screening, management, and follow-up.
      • The developer cites evidence of the incidence of over 1.9 million new cancer cases in 2023 and the prevalence of pain among cancer patients during treatment. A logic model links the process, in which providers query cancer patients undergoing chemotherapy or radiation about their pain intensity and optimize pain management therapies, to improved function through symptom control and pain management, thereby improving the quality of life of the cancer patient.
      • There is a business case supported by credible evidence depicting a link between health care processes and desired outcomes for cancer patients. Actions providers can take to reach the desired outcome are outlined. Based on reporting clinicians and practices, a performance gap may only exist for the bottom 2-4 deciles; however, reported performance may skew high since participants can self-select the measures they would like to report. Evidence cited showing disparities in access to opioids based on race/ethnicity suggests the possibility of a similar disparity in the measure focus, but this is not documented.
      • The developer cites disparities in opioid access and dosage among different racial groups, noting that Black and Hispanic patients were less likely to receive opioids than White patients (Black, -4.3 percentage points, 95% CI; Hispanic, -3.6 percentage points, 95% CI) and received lower daily doses (Black, -10.5 MMED, 95% CI; Hispanic, -9.1 MMED, 95% CI).
         

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      The needed elements are in place and access is available. 

       

      Agree with staff review points copied here:

      The necessary data elements required for the numerator and denominator can be found within structured fields and are recorded using commonly accepted coding standards. There are no fees for not-for-profit hospitals, healthcare systems, or practices to use the measure. Guidance on interactions with for-profit entities is provided.

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      Power and CI are as required

       

      Agree with staff review points copied here:

      • The developer tested the validity of the data elements (both numerator and denominator) using a random sample of 500 patient encounters across 10 test sites. The developer scored encounters on each data element using both a measure abstractor and an automated algorithm and then evaluated agreement between the two scoring methods using the Kappa statistic.
      • Results: 
        Kappa coefficient for the denominator data element was 0.96 (with a 95% confidence interval of 0.91 to 1.00), indicating almost perfect agreement.
        Kappa coefficient for the numerator data element was 1.00 (with a 95% confidence interval of 1.00 to 1.00), indicating perfect agreement.
      • There are no denominator or numerator exclusions for this measure.

       


       

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      Power and CI are as required

       

       

      Agree with staff review points copied here:

      • The developer tested the validity of the data elements (both numerator and denominator) using a random sample of 500 patient encounters across 10 test sites. The developer scored encounters on each data element using both a measure abstractor and an automated algorithm and then evaluated agreement between the two scoring methods using the Kappa statistic.
      • Results: 
        Kappa coefficient for the denominator data element was 0.96 (with a 95% confidence interval of 0.91 to 1.00), indicating almost perfect agreement.
        Kappa coefficient for the numerator data element was 1.00 (with a 95% confidence interval of 1.00 to 1.00), indicating perfect agreement.
      • There are no denominator or numerator exclusions for this measure.

       


       

      Equity

      Equity Rating
      Equity

      Edit to detail the importance of the data highlighted in the developer's report. There is an inequity here; describe its importance.

      Use and Usability

      Use and Usability Rating
      Use and Usability

      The measure is in use federally, yet feedback (not available) is needed to make this use and usability understood.

       

      Performance actually declined at the practice level. From a study perspective this is good: there is a finding. Discuss it, and explain why.

       

      Copying the staff notes here to help me with details from the study:

      • The measure is in use in two federal programs, and tools for QI include participation in a McKesson analytics platform (Practice Insights) and an ASCO-sponsored certification program. No feedback that would affect measure specifications, or unexpected findings, are reported.
      • Developer reports improvement of 3 percentage points from 2020-2021 (clinician-level), but room for meaningful improvement for either clinician-level or practice-level performance may be limited to the lowest deciles. No rationale is provided for an apparent decline in the practice-level performance scores.

      Summary

      Met or not met with addressable factors; detail the inequity and the performance declines.

      Submitted by Yaakov Liss on Sun, 01/14/2024 - 21:38


      Importance

      Importance Rating
      Importance

       Not sure where to write this but just wanted to confirm that the denominator criteria of "undergoing chemotherapy" specifically means receiving IV chemotherapy (and seemingly at least 2 chemo administrations every 30 days) and does not include those receiving oral chemotherapy agents or "maintenance" chemotherapy once a month or less often than that.

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      It sounds like this measure is already in place, so the data are presumably being gathered successfully, but I do want to understand more about how doctors and practices are documenting pain intensity numerically and how the information is being extracted from the medical record for reporting purposes (do all EMRs have a box to check, or is AI technology being used to fetch this; how is this happening)?

       

      Additionally, does this metric apply to medical oncologists and radiation oncologists?  If 1 patient is receiving concurrent chemo and radiation, how is 1 score being generated when 2 providers from 2 different specialties are involved?  Or is only the medical oncologist involved in this metric and the medical oncologist is reporting on pain intensity while the patient is receiving radiation therapy also under a radiation oncologist's care?

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      N/A

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      N/A

      Equity

      Equity Rating
      Equity

      N/A

      Use and Usability

      Use and Usability Rating
      Use and Usability

      Assuming my above concerns are addressed, then I have nothing additional to say about this.

      Summary

      I think any metric about pain intensity grading needs to include proactive thoughts about unintended consequences leading to over-prescribing or unsafe prescribing of opiates in light of the continuing catastrophic effects that the opiate crisis is causing to American society. 

       

      How are we monitoring to ensure that this metric or others like this are not going to potentially worsen this problem?  

      Submitted by Stephen Weed on Tue, 01/16/2024 - 19:11


      Importance

      Importance Rating
      Importance

      I disagree with staff assessment in two areas and recommend a MET rating. There was mention that there was little room for improvement. However in the 2021 practice performance table, 15 of the 51 facilities showed a Pearson score of less than .59. While this only represents about 7.5% of the total encounters, if you are in one of those facilities, it is hard to be confident that you are receiving adequate care. It is impossible to ascertain from the aggregated statistics but it may well be that the facilities which underperform may be rural due to the fewer number of patients per facility.
      Secondly, from a patient’s perspective, it seems worthwhile to continue this measure while encouraging data gathering that is more uniform and consistent. It sends a message to providers that pain management is important. In addition, since pain is an important factor in positive outcomes, it would be counterproductive to not be aware of decreased pain measurement.
      As an aside, in oncology as in other fields, it is important to manage expectations of pain relief. The measure states in part that a “…negative pain management index score, indicating that the prescribed pain treatments were not commensurate with the pain intensity reported by the patient.” It can be challenging to provide pain management to the level that any person might like; i.e., in an oncology setting, it is difficult to meet a patient's expectation of “1,” i.e., no noticeable pain. 
       

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      I do have a concern that perhaps is related to feasibility. The number of encounters logged dropped significantly since the first year the measure was in use. The developer needs to dig deeper into the data and/or the facilities used for the data to understand this drop. Could it be that the ability to consistently administer the pain measurement is affecting the data extracted?

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      The techniques and quantitative analysis are fine. 

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      My concern is reliability. Although the aggregated numbers do not show a statistically different picture, I would like the developer to offer some ideas about the decline in events pulled into the data.

      Equity

      Equity Rating
      Equity

      Not required. I don't think much is gained by drilling down on demographics. Many people understand the links between communities of color, healthcare access, and quality of care. 

      Use and Usability

      Use and Usability Rating
      Use and Usability

      The data are being gathered, and there has been some increase in reporting.

      Summary

      Like any measure, there is always room for upside. The old adage, "you reward what you measure" still rings true. So YES measure pain with oncology patients.

      Submitted by Gerri Lamb on Thu, 01/18/2024 - 16:18


      Importance

      Importance Rating
      Importance

      Same information provided as for 384e. This measure is a companion to 383, and both are components of the logic model. The importance of this measure is supported by the number of patients with a cancer diagnosis undergoing chemotherapy or radiation each year and the impact of untreated or inadequately treated pain. One or more of the references indicate that even among patients whose pain intensity is documented, treatment is frequently inadequate. Support needs to be provided for the connection between measurement of pain intensity and pain control; as noted in the review of 384e, the literature review indicates that measurement of pain intensity may be necessary but not sufficient for adequate pain control. 

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      The measure developers state that feedback indicates that the measure is easy to implement.  Available data to support this conclusion need to be provided. 

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      Reliability was evaluated using 2019 through 2021 data sets and the signal-to-noise ratio. Results are within acceptable limits. 

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      Validity was evaluated using 2022 QCDR data, following the same procedure described for 384e. It would be helpful to explain how the use of a kappa statistic supports validity of the measure. 
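      For context on the question above: a kappa statistic quantifies agreement beyond chance between two raters — in measure validity testing, typically between manually abstracted chart data and the reported measure data. A minimal sketch of Cohen's kappa for a binary "pain intensity quantified" flag; all data and names below are illustrative, not taken from the submission:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater_a)
    # Observed proportion of visits where the two sources agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each source's marginal rate of "1"
    p_a1 = sum(rater_a) / n
    p_b1 = sum(rater_b) / n
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)

# Hypothetical example: chart review vs. reported data for 10 visits
chart = [1, 1, 1, 0, 1, 1, 0, 1, 1, 0]
ehr   = [1, 1, 1, 0, 1, 1, 0, 1, 0, 0]
print(round(cohens_kappa(chart, ehr), 3))  # → 0.783
```

A high kappa supports validity only in the sense that the data element is accurately captured; it does not by itself show that the measure score reflects quality of care, which is the distinction the developer could usefully explain.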

      Equity

      Equity Rating
      Equity

      General information about differences in pain treatment and access to pain treatment across ethnic groups is provided in the section on importance. No additional data are provided addressing equity. 

      Use and Usability

      Use and Usability Rating
      Use and Usability

      Data provided suggest that the measure may be topped out at the individual clinician level. Further justification should be provided to support continued use of this measure in MIPS and other quality performance programs. 

      Summary

      No additional comments 

      Submitted by Raina Josberger on Fri, 01/19/2024 - 14:58


      Importance

      Importance Rating
      Importance

      Unclear if measurement gap still exists and if this measure would fill that gap

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      Items needed are from existing fields.

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      Reliable. 

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      Passed validity testing.

      Equity

      Equity Rating
      Equity

      Not addressed.

      Use and Usability

      Use and Usability Rating
      Use and Usability

      Used in two programs. 

      Summary

      N/A

      Submitted by Karie Fugate on Fri, 01/19/2024 - 15:14


      Importance

      Importance Rating
      Importance

      As a patient/caregiver, I find this quality measure (and 0384e) very important, as it addresses encounters with cancer patients receiving chemotherapy or radiation and evaluates their pain intensity (routine screening and management). I did notice that the measure states, “Although there have been some improvements, as evidenced by data obtained from the CMS Quality Payment Program, subpar pain management amongst cancer patients persists.” This may be addressed in CBE 0383, as that measure discusses a documented plan of care for a cancer patient.

       

      Also stated in this measure:

      Evidence of Performance Gap or Measurement Gap

      The MIPS-Quality program data were retrieved from 2021-2023 performance reports and reflect calendar years 2019-2021. The average performance rates suggest continued room for improvement in practice performance rates. 

       

      It is possible that when data for 2022-2023 are available, there may be increased performance rates. 

       

      On a side note, in the Measure Specifications - Measure Rationale section – calculation of measure score – it says, “If the visit does not meet the numerator, this case represents a quality failure.” To a patient/caregiver this would represent an exclusion – not failure.

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      The developer notes that the measure's data capture can be seamlessly integrated into existing physician workflows and data collection tools without requiring any significant modifications.

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      N/A

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      N/A

      Equity

      Equity Rating
      Equity

      The developer notes the below and did not address how this quality-of-care gap will be addressed with this measure. This may be addressed in CBE 0383 as that measure discusses a documented plan of care for a cancer patient.

       

      “Disparities exist as well, for example, a recent study evaluated opioid prescription fills and potency among cancer patients near end of life between 2007-2019. The study found that while all patients had a steady decline in opioid access, Black and Hispanic patients were less likely to receive opioids than White patients (Black, -4.3 percentage points, 95% CI; Hispanic, -3.6 percentage points, 95% CI) and received lower daily doses (Black, -10.5 MMED, 95% CI; Hispanic, -9.1 MMED, 95% CI).”

      Use and Usability

      Use and Usability Rating
      Use and Usability

      The developer notes the below; there is no discussion on how this will be addressed at the practice level. This may be addressed in CBE 0383 as that measure discusses a documented plan of care for a cancer patient.

       

      Progress on Improvement

      In evaluating the QPP data, the average performance rate on this measure increased three percentage points between performance periods 2019 and 2021, indicating some improvement. However, a gap remains, particularly at the practice level. 

      Summary

      N/A

      Submitted by Nicole Keane on Fri, 01/19/2024 - 16:59


      Importance

      Importance Rating
      Importance

      Business case supported by evidence. Unclear if measure variation remains, as participants are allowed to self-select measures and may select those reflecting high performance rates, which could mask a drop in performance.

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      The necessary data elements required can be found within structured fields and are recorded using ICD-10.

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      Current performance data were used. Sample size for each year and accountable-entity level analyzed is sufficient.

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      Kappa coefficient threshold met for reliability.

      Equity

      Equity Rating
      Equity

      No information provided.

      Use and Usability

      Use and Usability Rating
      Use and Usability

      Measure currently in use in the CMS Merit-based Incentive Payment System (MIPS). Providers can send feedback via the CMS Helpdesk or via email to ASCO. Developers reference a mean improvement of 3 percentage points in the measure from 2019 to 2021.

      Summary

      See domain fields.

      Submitted by Emily Martin on Sun, 01/21/2024 - 19:18


      Importance

      Importance Rating
      Importance

      Ideally, pain assessment would not be limited to only the scales listed but would also include assessment of pain impact on function. The importance rating would be increased if the measure guided documentation of functional impact of pain and a multimodal plan of care to address the pain. 

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      This can be easily incorporated into workflow and can be readily measured. 

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      The reliability scores are high, much above the threshold. 

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      The validity scores are high per Kappa coefficients. 

      Equity

      Equity Rating
      Equity

      The measure would benefit from explicit discussion of impact on equity. 

      Use and Usability

      Use and Usability Rating
      Use and Usability

      Measure is in use for MIPS and EOM. However, there may need to be more explicit description of what is needed in the documented pain assessment and management which would be able to demonstrate more meaningful improvement with initiation of this measure. 

      Summary

      While there are areas in which the criteria are not sufficiently met, they can be readily addressed. 

      Submitted by Sarah Thirlwell on Sun, 01/21/2024 - 22:04


      Importance

      Importance Rating
      Importance

      Measure developers address the importance of this measure for a sub-group of oncology patients who receive radiation therapy treatment and those who receive a chemotherapy administration procedure.  Since the endorsement of this measure in 2017, an increasing number of oncology patients receive other forms of treatments that are not addressed by the developers.  Cancer patients receiving other treatment modalities also experience pain and the developers could consider expanding this measure to include these other sub-groups of oncology patients as additional populations in the reported rate of this measure.

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      The measure specifications do not specify who documents the pain intensity, nor do they require collecting information regarding who documented the score in the electronic health record. Differences in scores across physician practices or clinicians may reflect differences in which oncology team member, from a medical assistant to the oncologist, asked for the patient's score.

      The measure specifications do not include any exclusions.  While this decreases the burden of data collection, it does not allow for capture of differences in scores and/or exclusions according to patients' cognitive ability to respond to a standard pain instrument or account for patients' choice to decline to provide a rating.

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      Measure developers provide evidence of reliability testing and strong inter-rater reliability.

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      Measure developers provide evidence of validity testing and strong validity.

      Equity

      Equity Rating
      Equity

      Measure developers indicated that differences could exist and that care settings are encouraged to track additional data that could reflect differences in health equity, but these have not been included in the measure specifications, and analyses of those data were not reported. 

      Use and Usability

      Use and Usability Rating
      Use and Usability

      Measure developers indicate the use and usability of this measure nationally and internationally as an indicator of quality of oncology care.

      Summary

      It would have been helpful if the developers addressed the similarities and differences between 0384e and 0384 within each domain of the measure and commented on the need for both measures to be endorsed.

      Submitted by Brigette DeMarzo on Mon, 01/22/2024 - 15:02


      Importance

      Importance Rating
      Importance

      Agree with PQM staff comments

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      Agree with PQM staff comments

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      Agree with PQM staff comments

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      Agree with PQM staff comments

      Equity

      Equity Rating
      Equity

      Agree with PQM staff comments; could not find additional information to assess this.

      Use and Usability

      Use and Usability Rating
      Use and Usability

      Agree with PQM staff comments

      Summary

      Agree with PQM staff comments

      Submitted by Morris Hamilton on Mon, 01/22/2024 - 21:56


      Importance

      Importance Rating
      Importance

      As currently specified, the performance gap appears closed. Is the existence of the measure keeping the gap closed? If the measure was no longer in use, would the gap widen?

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      All necessary information has been provided. The measure is feasible, though usage of the measure has decreased.

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      Using ANOVA signal-to-noise ratios, the developers present entity-level reliability. The ratios meet or exceed 0.859, which exceeds conventional standards for reliability. This is unsurprising considering that about half of the participants have no within-provider variance (i.e., noise) because their performance is 1.0 or 0.0.
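      To make the point about zero within-provider variance concrete: for a proportion measure, a common signal-to-noise formulation treats between-provider variance as signal and binomial sampling variance as noise, so a provider whose rate is exactly 0.0 or 1.0 has zero noise and a reliability of 1 by construction. A minimal sketch; all numbers are illustrative, not taken from the submission:

```python
def snr_reliability(rate, n_visits, between_var):
    """Entity-level reliability: signal / (signal + noise).

    Noise is the binomial sampling variance of the provider's
    proportion, p*(1-p)/n, which is zero when p is 0.0 or 1.0.
    """
    within_var = rate * (1 - rate) / n_visits
    return between_var / (between_var + within_var)

# Hypothetical between-provider variance of true performance
between_var = 0.04

print(round(snr_reliability(0.85, 200, between_var), 3))  # → 0.984
print(snr_reliability(1.0, 50, between_var))              # → 1.0 (no noise)
```

This is why high reliability ratios are expected here: any provider topped out at 1.0 contributes a reliability of exactly 1 regardless of volume.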

       

      Encounter-level reliability is provided in the validity section.

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      With very high kappa values, encounter-level validity is satisfied. The elements of the measure appear accurately measured.

       

      Entity-level validity is not provided. As a maintenance measure that has been in existence for several years, the submission should also include measures of concurrent validity. How correlated is this measure to other measures related to patient quality for pain or cancer? Are the correlations reasonable?

      Equity

      Equity Rating
      Equity

      The authors indicate that demographic data are not available at the patient-level; however, they do not acknowledge that geographic data of the providers may be available. A comparison of measure performance by Area Deprivation Index may be feasible and may elucidate some information about the relationship between measure performance and equity. Though this domain is optional, I encourage the developers to investigate further.

      Use and Usability

      Use and Usability Rating
      Use and Usability

      While the developers provide good evidence to suggest that providers can improve, the evidence of improvement is muddled.

       

      Without a stable cohort to compare across years, the claim that there is an improvement in performance is spurious. If the authors can limit their presentation of performance to a stable cohort and improvement still exists, then the authors should explain why. The period was 2019-2021; did the public health emergency (PHE) play a role?

      Summary

      Overall, the measure is well defined, feasible, and reliable. It is currently in use in several federal programs. The developers should provide additional analyses to improve their submission. At this time, entity-level validity and usability cannot be adequately evaluated. The developers may also consider using geographic data for providers to investigate equity relationships further. Most importantly, with most participants exceeding 0.90, the developers must clarify what gap this measure addresses.

      Submitted by Heather Thompson on Mon, 01/22/2024 - 22:03


      Importance

      Importance Rating
      Importance

      Literature review provides supporting evidence of measure importance.

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      Easily obtained via existing electronic medical record documentation and workflow processes.

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      High reliability metrics.

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      High validity performance.

      Equity

      Equity Rating
      Equity

      Opportunities for further exploration available by cross referencing other patient identifying factors readily available in the electronic medical record with these measure outcomes.

      Use and Usability

      Use and Usability Rating
      Use and Usability

      Data are reliable and valid and provide actionable information related to effective pain management care planning.

      Summary

      Measure is valuable in identifying opportunities for improvement in key processes necessary for effective pain management, quantifying pain and development of effective care plans.

      Submitted by Carol Siebert on Mon, 01/22/2024 - 22:31


      Importance

      Importance Rating
      Importance

      Strong evidence for assessing (and managing) pain. But the measure seems to be topped out. What will continued endorsement/use of the measure achieve?

      Feasibility Acceptance

      Feasibility Rating
      Feasibility Acceptance

      Data elements from defined fields in medical record, limiting resources/burden. 

      Scientific Acceptability

      Scientific Acceptability Reliability Rating
      Scientific Acceptability Reliability

      Agree with PQM staff assessment.

      Scientific Acceptability Validity Rating
      Scientific Acceptability Validity

      Agree with PQM staff assessment.

      Equity

      Equity Rating
      Equity

      The numerator instructions for this measure state: "Pain intensity should be quantified using a standard instrument, such as a 0-10 numerical rating scale, visual analog scale, a categorical scale, or the pictorial scale. Examples include the Faces Pain Rating Scale and the Brief Pain Inventory (BPI)." But there is a significant difference between the Faces Scale and the BPI. There is also evidence of pain being underrecognized and undertreated in African Americans, persons with low English proficiency, persons with cognitive or intellectual impairment, and several other populations, and that simple rating scales (such as Faces or numerical rating) are often suboptimal tools for assessing pain in these populations. The measure treats simple rating scales and the BPI as equivalent, when they are not. This measure focuses on only one aspect (quantify) of one recommendation of the cited clinical practice guideline:

      Routinely quantify and document pain intensity and quality as characterized by the patient (whenever possible). Include patient reporting of breakthrough pain, treatments used and their impact on pain, satisfaction with pain relief, pain interference, provider assessment of impact on function, and any special issues for the patient relevant to pain treatment and access to care.

      As this measure seems to be topping out, perhaps it should be upgraded to focus on pain assessment that goes beyond a simple rating scale and, in the process, addresses recognized gaps in pain assessment among various populations.

      Use and Usability

      Use and Usability Rating
      Use and Usability

      Measure is in use in 2 federal programs. Seems to be topping out, at least as a clinician-level measure. No explanation re: decline in overall mean as a practice level measure.

      As I noted earlier under equity, there seems to be a move away from using simple pain scales such as a numeric scale or visual analog scale. This measure includes these scales in the definition of "quantify" and may be unintentionally promoting the use of such scales when there is growing evidence that a more comprehensive and person-centered assessment of pain is warranted.

      Summary

      Seems to be topping out as a clinician-level measure.

      My biggest concern with this measure is that it reduces pain assessment to "quantification." See comments re: equity and use. There is growing evidence that tools that quantify pain are not person-centered and do not account for linguistic and cultural differences in how pain is communicated.