The measure is a patient-reported experience performance measure (PRE-PM) that calculates the percentage of contraceptive care patients who give a “top box” score for their experience of contraceptive counseling. The measure is a four-question survey that asks patients about key components of patient-centered counseling, including respect and adequate information. A “top box” score is defined as a response that gives the highest score on each of the four questions. Respondents give answers evaluating the quality of contraceptive care they received in the past six months.
Measure Specs
- General Information
- Numerator
- Denominator
- Exclusions
- Measure Calculation
- Supplemental Attachment
- Point of Contact
General Information
Patient experience of contraceptive counseling is an important outcome, in that it is highly valued by patients [1] and measures patient-centeredness, a core aspect of care quality as defined by the National Academy of Medicine (previously Institute of Medicine) in its report, Crossing the Quality Chasm [2]. Additionally, patient-centeredness of contraceptive counseling has been demonstrated to be associated with contraceptive continuation at six months [3], indicating a relationship between patient experience of counseling and the ability of patients to achieve their own reproductive goals, including pregnancy prevention. Patient experience has also been linked to improved engagement with care in various contexts [4,5]; in the context of contraceptive care, this means that patients who receive patient-centered care may feel more able to continue engaging with the reproductive health care system not only for contraception but also if and when they become pregnant and/or give birth [6] and for their other reproductive health needs. As such, positive patient experience of contraceptive counseling can support positive pregnancy and birth outcomes such as reduced maternal mortality.
Given the important implications of patient-centeredness of contraceptive counseling, both for patient experience and reproductive health outcomes, many healthcare organizations are invested in gathering information on the experiences of their patients and improving those experiences at various levels of aggregation. The Retrospective Person-Centered Contraceptive Counseling measure (PCCC-RS) is a four-item patient-reported experience performance measure (PRE-PM), informed by the input of a patient stakeholder group and a healthcare quality expert workgroup and designed to give state healthcare organizations and health plans an opportunity to understand the quality of their patients’ experience of contraceptive counseling. Adapted from the visit-specific Person-Centered Contraceptive Counseling (PCCC) Measure (CBE #3543), the PCCC-RS measure allows for state and regional population-level and health plan-level sampling.
The original, visit-specific PCCC measure collects information on the quality of care at a specific encounter and is endorsed for use at the facility and provider level. The PCCC-RS measure, in contrast, asks patients to reflect on the quality of contraceptive care across a six-month lookback period, with aggregation of scores at the level of region and state population and health plan. This higher level of aggregation allows for public reporting and accountability and can be used to track changes in quality in response to interventions at these higher levels. Use of this measure at the state/region and health plan levels also uplifts the importance of patient experience metrics as population metrics. We expect its use will encourage providers to invest in high-quality contraceptive care practices. While PCCC-RS results are intended to have stand-alone value to organizations, the measure exists in an ecosystem of person-centered measures working to improve the quality of contraceptive services. This includes measures of contraceptive use and provision, as well as screening for contraceptive need. Widespread use of validated performance measures for contraceptive care in diverse care contexts has the potential to improve patient experience and reproductive outcomes, particularly in underserved populations. Improvement in the quality of contraceptive care has been shown to improve people’s ability to identify methods that they can use over time and to promote engagement with health care across the reproductive life course (1–3), which will improve people’s reproductive outcomes and therefore would also be expected to have a positive impact on health care costs.
References
[1] Dehlendorf C, Levy K, Kelley A, Grumbach K, Steinauer J. Women's preferences for contraceptive counseling and decision making. Contraception. 2013;88(2):250-256.
[2] Wolfe A. Institute of Medicine Report: crossing the quality chasm: a new health care system for the 21st century. Policy, Politics, & Nursing Practice. 2001;2(3):233-235.
[3] Dehlendorf C, Henderson JT, Vittinghoff E, et al. Association of the quality of interpersonal care during family planning counseling with contraceptive use. American Journal of Obstetrics and Gynecology. 2016;215(1):78. e71-78. e79.
[4] Anhang Price R, Elliott MN, Zaslavsky AM, et al. Examining the role of patient experience surveys in measuring health care quality. Medical Care Research and Review. 2014;71(5):522-554.
[5] Doyle C, Lennox L, Bell D. A systematic review of evidence on the links between patient experience and clinical safety and effectiveness. BMJ Open. 2013;3(1):e001570.
[6] Gomez AM, Wapman M. Under (implicit) pressure: young Black and Latina women's perceptions of contraceptive care. Contraception. 2017;96(4):221-226.
[7] Wingo E, Sarnaik S, Michel M, et al. The status of person-centered contraceptive care in the United States: Results from a nationally representative sample. Perspectives on Sexual and Reproductive Health. 2023;55(3):129-139.
[8] Welti K, Manlove J, Finocharo J, Faccio B, Kim L. Women's experiences with person-centered family planning care: differences by sociodemographic characteristics. Contraception: X. 2022 Jan 1;4:100081.
None.
Numerator
The PCCC-RS is a retrospective measure of patient-centeredness in contraceptive counseling. It specifically measures how many patients report a top-box (i.e., the highest possible) score of patient experience in their contraceptive counseling interactions during any visits in the last six months.
The numerator comprises patients who report a top-box score (the highest possible summative score of 20, obtained by adding together the four items of the scale, each scored from 1 to 5) for their contraceptive counseling interactions during any visits in the last six months.
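As a minimal illustration of the scoring rule described above (not part of the measure specification), the following sketch checks whether a single complete response earns a top-box score; the item field names are hypothetical.

```python
# Minimal sketch of the top-box rule for one complete PCCC-RS response.
# Field names (item_1 ... item_4) are hypothetical; each item is scored 1-5.

def is_top_box(response: dict) -> bool:
    """Return True when the four items sum to the maximum score of 20."""
    return sum(response[f"item_{i}"] for i in range(1, 5)) == 20

# A respondent rating every item 5 counts toward the numerator.
assert is_top_box({"item_1": 5, "item_2": 5, "item_3": 5, "item_4": 5})
assert not is_top_box({"item_1": 5, "item_2": 4, "item_3": 5, "item_4": 5})
```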
Denominator
The target population for the PCCC-RS is patients aged 15-45 who were assigned female at birth, who are not currently pregnant, and who received contraceptive counseling as part of their visits in the six months prior to being surveyed.
The denominator includes all eligible patients who completed the PCCC-RS. Specifically, individuals are eligible for the PCCC-RS if they are 15-45 years of age, were assigned female at birth, are not currently pregnant, have not given birth in the preceding six months, and received any contraceptive counseling as part of their visits in the six months prior to being surveyed. Potentially eligible patients are initially targeted by identifying all patients aged 15-45 years with a recorded female sex and a record of receiving services in the preceding six months. Eligibility is further determined by the following eligibility screening question: “In the last 6 months, did you talk about contraception or pregnancy prevention with a member of a healthcare team (including any doctor, nurse, medical assistant, etc.)?” Two separate questions ask patients to self-report their current pregnancy status and whether they have given birth in the preceding six months. Those who respond “Yes” to the screening question, do not indicate a current pregnancy or a birth in the preceding six months, and respond to all four items of the PCCC-RS comprise the denominator.
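To make these eligibility criteria concrete, here is a minimal sketch assuming hypothetical survey field names and Yes/No response coding; it mirrors the screening question, the pregnancy and recent-birth self-reports, and the completeness requirement described above.

```python
# Minimal sketch of denominator eligibility. Field names and Yes/No coding are
# hypothetical and would depend on the survey platform used.

def in_denominator(r: dict) -> bool:
    """A respondent enters the denominator only if they report contraceptive
    counseling in the last 6 months, are not currently pregnant, did not give
    birth in the preceding six months, and answered all four PCCC-RS items."""
    answered_all_items = all(r.get(f"item_{i}") is not None for i in range(1, 5))
    return (
        r.get("counseling_last_6_months") == "Yes"
        and r.get("currently_pregnant") == "No"
        and r.get("gave_birth_last_6_months") == "No"
        and answered_all_items
    )
```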
Exclusions
Pregnant and recently postpartum patients (i.e. pregnant in the preceding six months) are excluded from the denominator because contraceptive counseling during this period has unique considerations. Specifically, given distinct issues related to postpartum contraceptive use, including increased risk of blood clots, effect on lactation, and the health impact of birth spacing, counseling pregnant women about future contraceptive use has components distinct from that of non-pregnant women. For these conceptual reasons, the PCCC-RS was designed for use with non-pregnant patients and has not been extensively tested with pregnant patients to determine whether it accurately captures their needs and desires for counseling. Pregnant or recently postpartum patients self-identify as such in the patient survey. Of note, our team is currently conducting work to develop and validate a measure that would capture patient experience of contraceptive counseling in the peripartum period, taking into account the unique aspects of this point in the reproductive life course.
The PCCC-RS survey asks patients whether they are currently pregnant or have given birth in the last six months; patients who answer yes to either question are excluded from the calculation of the measure.
Measure Calculation
Measure users should follow these steps in order to obtain measure results:
- Identification and data collection
- Survey distributors identify all patients within the network (health plan and/or all health plan networks in a state or region) aged 15-45 years with documented female sex who received services within the preceding six months
- Patients are sent the survey using established patient surveying pathways or new surveying efforts, e.g. through patient portal, email, text, or mailed paper survey
- Patients who receive the survey will answer the following screening question: “In the last 6 months, did you talk about contraception or pregnancy prevention with a member of a healthcare team (including any doctor, nurse, medical assistant, etc.)?” This question serves to identify those within the target population who had any encounters relevant to the PCCC-RS (individuals who received contraceptive counseling in the past six months)
- Patients who answer “Yes” to the screening question complete the survey (self-administered via mailed paper survey or electronically, e.g. a link sent through patient portal, text, etc.)
- Patient responses are collected into an electronic database for analysis, either through data entry of paper surveys or collation of electronic survey responses
- Data aggregation and measure calculation
- Patients indicating that they did not receive contraceptive counseling have their responses excluded
- Patients indicating that they are currently pregnant or have given birth in the last 6 months have their responses excluded
- Patients who did not answer all four items of the PCCC-RS have their responses excluded
- Measure responses are summed as the total of all PCCC-RS item values (maximum value of 20)
- PCCC-RS value sums are dichotomized as a maximum value of 20 (topbox score) versus any value less than 20
- The measure result is calculated as the number of patients responding with a topbox score divided by the total number of patients who gave a complete response to the survey, expressed as a percentage at the region/state or health plan level (a computational sketch of these steps follows this list)
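The steps above can be summarized in a short computational sketch that combines the eligibility and top-box logic illustrated in the Numerator and Denominator sections. This is an illustrative example rather than a required implementation; the column names, Yes/No coding, and entity identifier are hypothetical.

```python
import pandas as pd

ITEMS = ["item_1", "item_2", "item_3", "item_4"]  # hypothetical item columns

def pccc_rs_performance(responses: pd.DataFrame) -> pd.Series:
    """Return the percentage of eligible, complete responses with a top-box
    score (summative score of 20), calculated per accountable entity."""
    eligible = responses[
        (responses["counseling_last_6_months"] == "Yes")   # screening question
        & (responses["currently_pregnant"] == "No")         # exclusion
        & (responses["gave_birth_last_6_months"] == "No")   # exclusion
        & responses[ITEMS].notna().all(axis=1)               # complete responses only
    ].copy()

    eligible["sum_score"] = eligible[ITEMS].sum(axis=1)      # maximum value of 20
    eligible["top_box"] = eligible["sum_score"] == 20        # dichotomize at 20

    # Percentage of top-box responses per region/state or health plan.
    return 100 * eligible.groupby("entity_id")["top_box"].mean()
```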
The measure is not stratified.
As described in section 1.18, accountable entities send the survey to all patients with documented female sex and between the ages of 15 and 45 years who received services in the preceding six months using established patient surveying pathways, e.g. through patient portal, email, text, or mailed paper survey. The survey is self-administered and can be completed in English or Spanish. The anonymous nature of the survey should be emphasized, with assurance that answers to the survey will not be linked to the individual patient and will not impact care. Respondents first answer a screening question asking them to verify if they have received contraceptive counseling in the past six months. Those who say no are screened out and do not complete the survey.
The response rate is calculated as the number of patients who return a completed survey divided by the number of patients asked to complete a survey. Improving response rates is possible through mixed-mode protocols, such as calling patients to request that they complete a survey sent through text message.
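For example, if an accountable entity distributes the survey to 1,000 eligible patients and 150 of them return completed surveys, the response rate is 150/1,000, or 15%.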
We recommend a minimum sample size of 150 patients to achieve adequate reliability in both region/state population and health plan levels of analysis.
Supplemental Attachment
Not Applicable.
Point of Contact
Christine Dehlendorf
San Francisco, CA
United States
Christine Dehlendorf
University of California, San Francisco
San Francisco, CA
United States
Importance
Evidence
There are demonstrated gaps in the quality of contraceptive counseling. Analyses utilizing the 2017-2019 wave of the National Survey of Family Growth (NSFG) found that only 58% of respondents rated the quality of contraceptive care received over the last year as optimal, demonstrating a gap nationally (1). A similar national survey conducted by KFF in 2022 found that only 40% reported optimal counseling (2). Use of the visit-specific PCCC measure (CBE #3543) has demonstrated a range of scores from 30% to 95% at the health center level in community health centers across the country, suggesting differential care at the health facility level (3). To understand gaps in quality, studies have explored inadequacies in contraceptive counseling. In multiple studies examining patient experience of counseling, patients reported receiving information from their providers that was inadequate to support them in making an informed decision on contraception (4–7), and patients felt dissatisfied with the patient-centeredness and adequacy of counseling overall (7–10). Research conducted by our team at UCSF examining quality of counseling via audio recording of patient visits found that providers inconsistently elicited or engaged with patient experiences and preferences during counseling (11,12).
Gaps in quality of contraceptive care are experienced inequitably. Analyses using the version of the Retrospective Person-Centered Contraceptive Counseling measure (PCCC-RS) integrated into the 2017-2019 wave of the NSFG have found lower scores among Black, Spanish-speaking Latine, low-income, and gay and bisexual patients (1,13). Similar differences were noted in the national KFF survey in 2022 (2). Other studies have surfaced indications of poor quality of contraceptive care differentially experienced by women of color, such as low-income women of color having greater odds of being advised to limit their childbearing (14) and greater emphasis on highly effective methods by providers when counseling women of color (10,15). A randomized controlled trial explored this dynamic explicitly by using a standardized patient approach and found that providers were more likely to recommend an intrauterine device to standardized patients identified as low-income Black or Latina compared to those identified as white (16). This is further elucidated in qualitative research, where Black, Latine and low-income patients describe themes of discriminatory and/or coercive contraceptive care practices (17–21). Scholars have highlighted that these differential practices are rooted in a longstanding history of racial and class discrimination and systemic oppression (22–25). Thus, measuring and monitoring the quality of contraceptive counseling is important to address healthcare disparities and promote health equity.
Patient experience is an outcome meaningful and important to patients and aligned with positive health outcomes. Patient experience is an important outcome in and of itself, in that it is highly valued by patients (4) and measures a core aspect of quality care – patient-centeredness – as defined by the National Academy of Medicine in its report Crossing the Quality Chasm (26). A large body of evidence demonstrates how patient experience is associated with a range of health outcomes, structures, and processes. This includes two systematic reviews (one UK-based (27) and one US-based (28)), which both found that patient experience was positively associated with outcomes such as seeking and adhering to preventive care treatments, and positive clinical health outcomes, including self-rated health, engaging in health-promoting behavior, and primary care utilization. In addition, a 2009 meta-analysis of 127 studies documented that the quality of provider communication, which is the specific aspect of patient-centeredness measured by the PCCC-RS, was directly associated with patient treatment adherence (29).
Patient experience of high-quality contraceptive counseling enables patients to meet their reproductive goals. In the context of contraceptive care specifically, the patient-centeredness of contraceptive counseling is associated with contraceptive continuation and satisfaction (30,31). By working to improve patient experience, health care entities can therefore support their patients in achieving their reproductive goals, such as pregnancy prevention. Further, qualitative data suggest that patients who experience non-patient-centered contraceptive care are less likely to return to seek out care for future reproductive health needs (17,32). This has the potential to negatively impact a range of outcomes, including pregnancy-related morbidity and mortality.
The PCCC-RS provides the ability to monitor contraceptive counseling at region/state population and health plan levels, which is particularly critical given the use of contraceptive provision performance measures (CBE #2903 and #2904) at these levels, which has the potential to incentivize coercive contraceptive practices. The motivation behind the development of the visit-specific PCCC originated during the Department of Health and Human Services Office of Population Affairs’ (OPA’s) development of measures CBE #2903 and #2904, which focus on most and moderately effective contraception and long-acting reversible contraceptive (LARC) methods. The OPA team and others involved in the measure development process foresaw that use of these important measures could have the unintended consequence of incentivizing provider pressure on patients to use more effective methods and away from use of prescription methods. During the Consensus-Based Entity endorsement process, this concern was voiced by stakeholders, including the National Partnership for Women & Families (NPWF). The NPWF submitted a public comment that stated, “It is extremely important to keep in mind that reproductive coercion has a troubling history, and remains an ongoing reality for many, including low-income women, women of color, young women, immigrant women, LGBT people, and incarcerated women. We hope this measure will be paired with a woman-reported ‘balancing measure’ of experience of receiving contraceptive care. Such a measure can be expected to help identify and/or check inappropriate pressure from the health care system.” CBE #2903 and #2904 have been endorsed, institutionalized, and included in the CMS Adult Core Set (33), making it mandatory for states to report. The visit-specific PCCC measure (CBE #3543), however, cannot be evaluated at this level due to the infeasibility of collecting a population-level sample at the time of the visit. Therefore, in addition to the value of having information about the quality of contraceptive counseling at the region/state and health plan levels in its own right, the PCCC-RS is also critical as a balancing measure of patient experience of contraceptive counseling at the state/region and health plan levels, allowing for the collection of population-level data using a retrospective lookback approach aligned with the Consumer Assessment of Healthcare Providers and Systems Clinician & Group (CG-CAHPS) and other measures of patient experience.
REFERENCES
1. Wingo E, Sarnaik S, Michel M, Hessler D, Frederiksen B, Kavanaugh ML, et al. The status of person-centered contraceptive care in the United States: Results from a nationally representative sample. Perspect Sex Reprod Health. 2023 Sep;55(3):129–39.
2. Frederiksen B, Ranji U, Long M, Diep K, Salganicoff A. Contraception in the United States: A Closer Look at Experiences, Preferences, and Coverage [Internet]. KFF; 2022 Nov [cited 2024 Jul 18]. Available from: https://www.kff.org/womens-health-policy/report/contraception-in-the-un…
3. Dehlendorf C, Wingo E, Gibson L, Goetsch-Avila S, Kriz R, Hessler D. Leveraging an equity-focused, data-driven quality improvement learning collaborative to advance contraceptive care in community health centers. Journal of the American Board of Family Medicine. 2025;(under review).
4. Dehlendorf C, Levy K, Kelley A, Grumbach K, Steinauer J. Women’s preferences for contraceptive counseling and decision making. Contraception. 2013 Aug;88(2):250–6.
5. Yee LM, Simon MA. Perceptions of coercion, discrimination and other negative experiences in postpartum contraceptive counseling for low-income minority women. J Health Care Poor Underserved. 2011 Nov;22(4):1387–400.
6. Guendelman S. Perceptions of hormonal contraceptive safety and side effects among low-income Latina and non-Latina women. Maternal and Child Health Journal. 2000;4(4):233–9.
7. Becker D, Koenig MA, Mi Kim Y, Cardona K, Sonenstein FL. The Quality of Family Planning Services in the United States: Findings from a Literature Review. Perspect Sexual Reproductive. 2007 Dec;39(4):206–15.
8. Becker D, Tsui AO. Reproductive Health Service Preferences And Perceptions of Quality Among Low-Income Women: Racial, Ethnic and Language Group Differences. Perspectives on Sexual and Reproductive Health. 2008 Dec;40(4):202–11.
9. Nobili MP, Piergrossi S, Brusati V, Moja EA. The effect of patient-centered contraceptive counseling in women who undergo a voluntary termination of pregnancy. Patient Educ Couns. 2007 Mar;65(3):361–8.
10. Borrero S, Schwarz EB, Creinin M, Ibrahim S. The Impact of Race and Ethnicity on Receipt of Family Planning Services in the United States. Journal of Women’s Health. 2009 Jan;18(1):91–6.
11. Dehlendorf C, Anderson N, Vittinghoff E, Grumbach K, Levy K, Steinauer J. Quality and Content of Patient-Provider Communication About Contraception: Differences by Race/Ethnicity and Socioeconomic Status. Womens Health Issues. 2017 Oct;27(5):530–8.
12. Dehlendorf C, Kimport K, Levy K, Steinauer J. A qualitative analysis of approaches to contraceptive counseling. Perspect Sex Reprod Health. 2014 Dec;46(4):233–40.
13. Welti K, Manlove J, Finocharo J, Faccio B, Kim L. Women’s experiences with person-centered family planning care: Differences by sociodemographic characteristics. Contraception: X. 2022;4:100081.
14. Downing RA, LaVeist TA, Bullock HE. Intersections of Ethnicity and Social Class in Provider Advice Regarding Reproductive Health. Am J Public Health. 2007 Oct;97(10):1803–7.
15. Rowley S, Broomfield C, Min J, Quinn S, Campbell K, Wood S. Racial Inequities in Adolescent Contraceptive Care Delivery: A Reproductive Justice Issue. Journal of Pediatric and Adolescent Gynecology. 2023 Jun;36(3):298–303.
16. Dehlendorf C, Ruskin R, Grumbach K, Vittinghoff E, Bibbins-Domingo K, Schillinger D, et al. Recommendations for intrauterine contraception: a randomized trial of the effects of patients’ race/ethnicity and socioeconomic status. Am J Obstet Gynecol. 2010 Oct;203(4):319.e1-8.
17. Gomez AM, Wapman M. Under (implicit) pressure: young Black and Latina women’s perceptions of contraceptive care. Contraception. 2017 Oct;96(4):221–6.
18. Higgins JA, Kramer RD, Ryder KM. Provider Bias in Long-Acting Reversible Contraception (LARC) Promotion and Removal: Perceptions of Young Adult Women. Am J Public Health. 2016 Nov;106(11):1932–7.
19. Thorburn S, Bogart LM. African American women and family planning services: perceptions of discrimination. Women Health. 2005;42(1):23–39.
20. Reed R, Osby O, Nelums M, Welchlin C, Konate R, Holt K. Contraceptive care experiences and preferences among Black women in Mississippi: A qualitative study. Contraception. 2022 Oct;114:18–25.
21. Mann ES, Chen AM, Johnson CL. Doctor knows best? Provider bias in the context of contraceptive counseling in the United States. Contraception. 2022 Jun;110:66–70.
22. Stern AM. STERILIZED in the Name of Public Health: Race, Immigration, and Reproductive Control in Modern California. Am J Public Health. 2005 Jul;95(7):1128–38.
23. Roberts D. Killing the Black Body. New York, NY: Penguin Random House; 1998.
24. Brandi K, Fuentes L. The history of tiered-effectiveness contraceptive counseling and the importance of patient-centered family planning care. American Journal of Obstetrics and Gynecology. 2020 Apr;222(4):S873–7.
25. Kathawa CA, Arora KS. Implicit Bias in Counseling for Permanent Contraception: Historical Context and Recommendations for Counseling. Health Equity. 2020 Jul 1;4(1):326–9.
26. Institute of Medicine (US) Committee on Quality of Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century [Internet]. Washington (DC): National Academies Press (US); 2001 [cited 2022 May 20]. Available from: http://www.ncbi.nlm.nih.gov/books/NBK222274/
27. Doyle C, Lennox L, Bell D. A systematic review of evidence on the links between patient experience and clinical safety and effectiveness. BMJ Open. 2013;3(1):e001570.
28. Anhang Price R, Stucky B, Parast L, Elliott MN, Haas A, Bradley M, et al. Development of Valid and Reliable Measures of Patient and Family Experiences of Hospice Care for Public Reporting. Journal of Palliative Medicine. 2018 Jul;21(7):924–32.
29. Zolnierek KBH, Dimatteo MR. Physician communication and patient adherence to treatment: a meta-analysis. Med Care. 2009 Aug;47(8):826–34.
30. Dehlendorf C, Henderson JT, Vittinghoff E, Grumbach K, Levy K, Schmittdiel J, et al. Association of the quality of interpersonal care during family planning counseling with contraceptive use. American Journal of Obstetrics and Gynecology. 2016 Jul;215(1):78.e1-78.e9.
31. Oakley LP, Harvey SM, López-Cevallos DF. Racial and Ethnic Discrimination, Medical Mistrust, and Satisfaction with Birth Control Services among Young Adult Latinas. Womens Health Issues. 2018 Aug;28(4):313–20.
32. Baker K, Emery ST, Spike E, Sutton J, Ben-Porath E. Skin tone discrimination and birth control avoidance among women in Harris County, Texas: a cross-sectional study. BMC Public Health. 2024 Sep 2;24(1):2375.
33. Medicaid.gov [Internet]. [cited 2025 Feb 4]. 2026 Core Set of Adult Health Care Quality Measures for Medicaid (Adult Core Set). Available from: https://www.medicaid.gov/medicaid/quality-of-care/downloads/2026-adult-…
Measure Impact
Desired outcomes: The PCCC-RS is designed to help states/regions and health plans assess the quality of contraceptive counseling within their networks. Our desired impact for this measure is to help patients achieve their reproductive health life goals, as fits their self-defined needs and preferences. The outcomes associated with the use of this measure fall into three categories: short-term, medium-term, and long-term outcomes. Our short-term goals include having more patients report on their counseling experiences and share their feedback with administrators at their health plan, as well as establishing ongoing monitoring of the quality of contraceptive counseling at the state or regional level. Outcomes also include increasing administrator knowledge and awareness of the quality of contraceptive counseling within their states or networks. As administrators become more aware of contraceptive care quality and encourage quality improvement efforts in their networks/health plans, this leads to the medium-term outcomes of improved quality of contraceptive counseling, improved patient experience, and increased trustworthiness of reproductive care across geographies and health plan networks (1). An increased focus on contraceptive care, combined with quality improvement practices, improves the quality of counseling across the network. Specifically, by collecting the PCCC-RS, systems will be better able to support patients and meet their reproductive health needs by improving contraceptive counseling quality. With increased attention to patient feedback and improved contraceptive counseling, better trust with patients may be established, which can improve adherence to care plans and lead to better health outcomes (2–6). Long-term outcomes of utilizing this measure can include increased use of patient-preferred contraceptive methods and, thus, a decrease in undesired pregnancies.
Adverse events or costs avoided: No adverse events were reported by users of the PCCC-RS. While no direct costs were reported to be avoided, health plans may better utilize their resources by targeting areas for quality improvement identified by this measure, rather than applying quality improvement resources more broadly.
Improvement in the quality of contraceptive care has been shown to improve people’s ability to identify methods that they can use over time and to promote engagement with health care across the reproductive life course, which will improve people’s reproductive outcomes and therefore would also be expected to have a positive impact on health care costs. Moreover, while the goal of this measure is not to control fertility outcomes, improved alignment between care and patient preferences can lead to better health outcomes and more efficient use of healthcare resources.
Unintended consequences: As reported, some people may feel uncomfortable with some of the questions asked during the survey. However, respondents may refuse to answer any question they don’t want to answer and may stop the survey at any time with no consequence. All data is anonymous and aggregated, so information cannot be tied to an individual.
Our team learned through dialogue with patient stakeholders during the development process for our visit-specific PCCC survey (CBE # 3543) that many patients directly benefited from the opportunity to respond to a measure about their patient experience of contraceptive counseling. By having their feedback solicited, patients felt more engaged in their medical care and aware of their right to receive care focused on their needs (5,6).
REFERENCES
1. Jones EJ, Dehlendorf C, Kriz R, Grzeniewski M, Decker E, Eikner D. Using the person-centered contraceptive counseling (PCCC) measure for quality improvement. Contraception. 2023 Jul;123:110040.
2. Aiyegbusi OL, Hughes SE, Calvert MJ. The Role of Patient-Reported Outcomes (PROs) in the Improvement of Healthcare Delivery and Service. In: Kassianos AP, editor. Handbook of Quality of Life in Cancer [Internet]. Cham: Springer International Publishing; 2022 [cited 2025 Mar 19]. p. 339–52. Available from: https://link.springer.com/10.1007/978-3-030-84702-9_20
3. Van Der Wees PJ, Nijhuis‐Van Der Sanden MWG, Ayanian JZ, Black N, Westert GP, Schneider EC. Integrating the Use of Patient‐Reported Outcomes for Both Clinical Practice and Performance Measurement: Views of Experts from 3 Countries. Milbank Quarterly. 2014 Dec;92(4):754–75.
4. Porter I, Gonçalves-Bradley D, Ricci-Cabello I, Gibbons C, Gangannagaripalli J, Fitzpatrick R, et al. Framework and guidance for implementing patient-reported outcomes in clinical practice: evidence, challenges and opportunities. J Comp Eff Res. 2016 Aug;5(5):507–19.
5. Lohr KN, Zebrack BJ. Using patient-reported outcomes in clinical practice: challenges and opportunities. Qual Life Res. 2009 Feb;18(1):99–107.
6. Oliveira VC, Refshauge KM, Ferreira ML, Pinto RZ, Beckenkamp PR, Negrao Filho RF, et al. Communication that values patient autonomy is associated with satisfaction with care: a systematic review. Journal of Physiotherapy. 2012 Dec;58(4):215–29.
The PCCC-RS meaningfully adds to the landscape of contraceptive care quality measures. Existing measures, including the OPA-developed measures for contraceptive provision (CBE #2902, 2903 and 2904), the Contraceptive Use electronic clinical quality measure (CU-SINC; CBE#3699e and 3682e), and the Contraceptive Care Screening eCQM (CCS-SINC; CBE #4655e) are valuable tools for assessing access to contraceptive care, including access to counseling and contraceptive methods.
The visit-specific PCCC (CBE #3543) was the first validated and endorsed measure of the quality of patient experience of contraceptive counseling. However, this measure primarily assesses immediate counseling quality, making it difficult for health systems and policymakers to evaluate whether providers and systems are consistently supporting person-centered decision-making, and infeasible for collection at the population level. Some health systems and insurers rely on retrospective data collection (e.g., surveys sent periodically) for patient experience surveys, such as CG-CAHPS. A measure that aligns with this methodology can improve data collection feasibility, as well as allow for actionability at higher levels of aggregation by enabling meaningful quality improvement efforts at the state/region- and plan-level. The PCCC-RS, with its six-month lookback period, meets the standard for use at the population and health plan level and provides a more comprehensive assessment of care over time.
The contraceptive provision measures, contraceptive use measures, and contraceptive care screening measures are the only other CBE-endorsed measures to address quality in the context of contraceptive care. An original motivation for the visit-specific PCCC development was the need for a PRE-PM of patient-centered contraceptive counseling to balance use of contraceptive provision measures. However, as described above, the visit-specific PCCC is infeasible for use at a state/region or health plan level, necessitating the development of the PCCC-RS. The CU-SINC and CCS-SINC eCQMs were developed to be more patient-centered measures of contraceptive use; however, they do not capture patients’ experiences with care. When contraceptive use, contraceptive screening, and PRE-PMs are used together, these measures can provide a robust picture of contraceptive care quality and ensure that advances in contraceptive provision do not come at the cost of patient experience.
The choice of a contraceptive method is a highly preference-sensitive decision with multiple medically appropriate options. Each patient has unique preferences regarding key contraceptive method attributes, such as pregnancy prevention, side effects, and menstrual cycle control. Research also shows the patient experience is influenced by historical and ongoing forceful and coercive practices of pressuring patients, particularly low-income patients and Black and Latine patients, to use contraceptive methods that they do not want (1–6). The visit-specific PCCC (CBE#3543) and PCCC-RS measures have been specifically designed with patient and provider stakeholder input to reflect the individualized nature of contraceptive counseling and ensure that patient-centered care is prioritized, which can help safeguard against care inequities.
The visit-specific PCCC was derived from the Interpersonal Quality of Family Planning scale (IQFP) (7). The IQFP is a validated 11-item scale of patient experience of contraceptive counseling that addresses three domains of contraceptive counseling preferences: interpersonal connection, decision support, and adequate information. These domains and items were elucidated through qualitative interviews with patients (8).
To create the visit-specific PCCC, a short form of the IQFP, we conducted cognitive interviews with 33 patients (10 in English, 13 in Spanish, and 10 with bilingual participants involving substantial discussion in both languages) to understand which items held the greatest relevance and importance to patients (9). The items included in the final measure were those that held the greatest importance to patients while continuing to reflect the domains of patient importance identified in the IQFP and maintaining scientific validity and acceptability.
The items contained in PCCC-RS are the same as the visit-specific measure. However, it was important that we assess the appropriateness of the specific lookback period and additional changes made to the measure stem. We tested the relevance of these qualities of the measure through iterative engagement with the Person-Centered Reproductive Health Program’s Patient Stakeholder Group (PSG). The PSG, a standing group of nine individuals assigned female at birth, provided critical feedback on survey elements, ensuring accessibility and relevance. All members of the PSG confirmed that providing feedback on their healthcare experiences with contraceptive counseling at a population level is meaningful, as it informs system-level improvements.
Specific refinements based on patient input included adjusting language for clarity, such as replacing the term “member of the healthcare team,” and shortening the lookback period from twelve months to six months to improve recall accuracy (also confirmed through a Delphi process of quality improvement leaders, as discussed in section 5.3). These changes were incorporated into the measure before finalization. Overall, our patient engagement and review of the literature demonstrates that the target population values the measured outcome and finds the feedback process meaningful in enhancing contraceptive counseling experiences.
REFERENCES
1. Stern AM. Sterilized in the name of public health: race, immigration, and reproductive control in modern California. Am J Public Health. 2005 Jul;95(7):1128–38.
2. Eeckhaut MCW, Hara Y. Reproductive Oppression Enters the Twenty-First Century: Pressure to Use Long-Acting Reversible Contraception (LARC) in the Context of “LARC First.” Socius: Sociological Research for a Dynamic World. 2023 Jan;9:23780231231180378.
3. Higgins JA, Kramer RD, Ryder KM. Provider Bias in Long-Acting Reversible Contraception (LARC) Promotion and Removal: Perceptions of Young Adult Women. Am J Public Health. 2016 Nov;106(11):1932–7.
4. Gomez AM, Wapman M. Under (implicit) pressure: young Black and Latina women’s perceptions of contraceptive care. Contraception. 2017 Oct;96(4):221–6.
5. Borrero S, Schwarz EB, Creinin M, Ibrahim S. The Impact of Race and Ethnicity on Receipt of Family Planning Services in the United States. Journal of Women’s Health. 2009 Jan;18(1):91–6.
6. Brandi K, Fuentes L. The history of tiered-effectiveness contraceptive counseling and the importance of patient-centered family planning care. American Journal of Obstetrics and Gynecology. 2020 Apr;222(4):S873–7.
7. Dehlendorf C, Henderson JT, Vittinghoff E, Steinauer J, Hessler D. Development of a patient-reported measure of the interpersonal quality of family planning care. Contraception. 2018 Jan;97(1):34–40.
8. Dehlendorf C, Levy K, Kelley A, Grumbach K, Steinauer J. Women’s preferences for contraceptive counseling and decision making. Contraception. 2013 Aug;88(2):250–6.
9. Dehlendorf C, Fox E, Silverstein IA, Hoffman A, Campora Pérez MP, Holt K, et al. Development of the Person-Centered Contraceptive Counseling scale (PCCC), a short form of the Interpersonal Quality of Family Planning care scale. Contraception. 2021 May;103(5):310–5.
Performance Gap
To assess variation of our proposed PRE-PM at a state population level, we worked with twelve Planned Parenthood Federation of America (PPFA) affiliates to collect the PCCC-RS. These affiliates act as state-level equivalents (SLE) for population-level analyses, aligned with how data from PPFA affiliates were used in testing for measures CBE #2902, #2903, and #2904. For this analysis, PPFA collected data from 3,284 patients from 9/1/2023-4/1/2024.
Washington Health Care Authority (WA HCA) served as a representative of health plan data collection. WA HCA collected data from 151 patients from 5/1/2024-10/3/2024.
Across the twelve units of analysis of PPFA, performance scores ranged from 36.9% to 72.9%, with a median score of 58.6%, demonstrating a wide range of performance. This presents evidence of an existing performance gap for this measure at the population level. Since we only have data from one health plan, we cannot provide deciles of scores at that analysis level. The WA HCA score was 48.3% (not included in the table), which is both within the range of the PPFA results and demonstrates opportunity for improvement.
| | Overall | Minimum | Decile_1 | Decile_2 | Decile_3 | Decile_4 | Decile_5 | Decile_6 | Decile_7 | Decile_8 | Decile_9 | Decile_10 | Maximum |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Mean Performance Score | 0.586 | 0.369 | 0.369 | 0.503 | 0.507 | 0.540 | 0.586 | 0.593 | 0.599 | 0.605 | 0.626 | 0.729 | 0.729 |
| N of Entities | 12 | 1 | 1 | 1 | 1 | 2 | 1 | 2 | 1 | 1 | 1 | 1 | 1 |
| N of Persons / Encounters / Episodes | 3284 | 168 | 168 | 137 | 148 | 342 | 1019 | 351 | 576 | 248 | 188 | 107 | 107 |
Equity
Inadequate and inequitable experiences of contraceptive care are well-documented. Studies have shown that poor quality of contraceptive care is differentially experienced by women of color, such as low-income women of color having greater odds of being advised to limit their childbearing (1), and greater emphasis by providers on highly-effective methods in counseling of women of color (2,3). A randomized controlled trial further explored this dynamic by using a standardized patient approach and found that providers were more likely to recommend an intrauterine device to standardized patients identified as low-income Black or Latina than those identified as white (4). This is further elucidated in qualitative research, where Black, Latine and low-income patients describe themes of discriminatory and/or coercive contraceptive care practices (5–9). Scholars have highlighted that these differential practices are rooted in a longstanding history of racial and class discrimination and systemic oppression (10–13). Patient experience of quality contraceptive care is also important in that it enables patients to meet their reproductive goals, including through contraceptive continuation and satisfaction and continued engagement in reproductive health care (5,14–16). Therefore, measuring and monitoring the quality of contraceptive counseling is important to address healthcare disparities and promote health equity.
The PCCC-RS can be used to surface and address these inequities. Analyses using a version of the PCCC-RS integrated into the 2017-2019 wave of the National Survey of Family Growth have found lower scores among Black, Spanish-speaking Latine, low-income, and gay and bisexual patients (17,18). Similar differences were noted in a 2022 national KFF survey (19). In a quality improvement collaborative, the visit-specific PCCC (CBE #3543) was collected before and after a health equity-informed intervention that included targeted quality improvement to improve baseline scores (20). Five of the nine participating community health centers (CHCs) improved their scores, by 2.1% to 26.2%, after the intervention. This included an improvement of 8.8% among Black patients across all agencies. This intervention illustrated how a version of this measure helped healthcare entities identify inequitable differences in care and make improvements through quality interventions. The PCCC-RS is an adaptation of this visit-specific measure made feasible for use at the region/state population or health plan level, and we anticipate a similar responsiveness to intervention.
Empirical analysis of differences in performance scores
Methods: Based on the healthcare disparities identified in previous literature, we explored scores by age (binary of youngest age group versus older), race/ethnicity, and language of survey completion for Planned Parenthood Federation of America (PPFA) affiliates (state-level equivalents) and Washington Health Care Authority (WA HCA) (health plan) (Tables 3.1 and 3.2, see Section 7 - Supplemental attachment). We then conducted two mixed-effects logistic regressions examining the odds of a PCCC-RS topbox score within the PPFA dataset, first by age and then by race/ethnicity group, with random intercepts by affiliate. Given the small cell sizes, we restricted the latter analysis to the following race/ethnicity groups: white, Hispanic/Latina, African American/Black, and Multi-racial. We made white race the reference group based on the previous literature demonstrating that white people are more likely to receive a higher quality of contraceptive care compared to people of other races and racialized ethnicities. Cell sizes were too small to conduct inferential statistics to examine differences by language. We also did not conduct regression analyses at the health plan level with the WA HCA data given the limited sample and small cell sizes.
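For illustration only (the statistical software used for the analyses described here is not specified), a random-intercept logistic model of this kind could be fit in Python as in the sketch below. The data file and column names are hypothetical, and statsmodels estimates the model as an approximate Bayesian mixed GLM, whereas other packages (e.g., lme4 in R) fit the same specification by maximum likelihood.

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Illustrative sketch (not the authors' code): odds of a topbox score (coded 0/1)
# by race/ethnicity, with white respondents as the reference group and a random
# intercept by PPFA affiliate. File and column names are hypothetical.
df = pd.read_csv("pccc_rs_ppfa_responses.csv")

model = BinomialBayesMixedGLM.from_formula(
    "top_box ~ C(race_ethnicity, Treatment(reference='white'))",
    {"affiliate": "0 + C(affiliate)"},  # random intercept by affiliate
    data=df,
)
result = model.fit_vb()  # variational Bayes approximation to the mixed model
print(result.summary())

# An analogous model with the binary age-group indicator as the sole predictor
# corresponds to the age analysis described above.
```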
Results: Descriptive breakdown of scores by demographic subgroups within each PPFA affiliate is provided in Table 3.1 (located in Section 7 - Supplemental attachment) and within WA HCA in Table 3.2 (see Section 7 - Supplemental attachment). In regression analyses among PPFA affiliates, compared to the youngest age group, older respondents reported greater odds of reporting a topbox score (Table 3.3, Section 7 - Supplemental attachment), and compared to white respondents, all racial/ethnic groups had lower odds of reporting a topbox score (Table 3.4, Section 7 - Supplemental attachment).
Interpretation and anticipated impact: Results among PPFA affiliates suggest that those of younger ages and those from minoritized racial and ethnic groups have lower odds of experiencing high-quality contraceptive care. Descriptive analyses within the WA HCA sample suggest that these differences hold true in the health plan-based dataset; however, the sample size is too small to investigate statistically. Additionally, descriptive analyses suggest that similar differences in scores may be apparent by respondent language as well, though the low volume of Spanish-speaking respondents does not allow us to explore this in depth.
Exposure to lower quality contraceptive care may result in these groups being pressured to utilize a contraceptive method that they do not want to use, extending a long history of reproductive oppression in healthcare as described above. Alternatively, this poor-quality care may result in racially minoritized and younger patients not receiving desired contraception due to negative experiences of care. We know that quality of contraceptive counseling is associated with contraceptive continuation and satisfaction (15,21), suggesting in both cases that these patients are less likely to have their pregnancy prevention needs met in the way they want. Longer term, this may impact patients’ overall engagement in reproductive healthcare (5,16), which may negatively impact a range of outcomes, including pregnancy-related morbidity and mortality for these groups.
Accountable entities can use these findings to identify disparities and provide resources to improve care for groups within their states and networks. This may include trainings provided to clinics and providers to increase person-centeredness of contraceptive counseling, as well as deepen understanding and practice of cultural responsiveness and humility. They may also incentivize clinics in their networks to engage with patient groups to identify gaps in quality of care and set quality improvement targets.
REFERENCES
1. Downing RA, LaVeist TA, Bullock HE. Intersections of Ethnicity and Social Class in Provider Advice Regarding Reproductive Health. Am J Public Health. 2007 Oct;97(10):1803–7.
2. Borrero S, Schwarz EB, Creinin M, Ibrahim S. The Impact of Race and Ethnicity on Receipt of Family Planning Services in the United States. Journal of Women’s Health. 2009 Jan;18(1):91–6.
3. Rowley S, Broomfield C, Min J, Quinn S, Campbell K, Wood S. Racial Inequities in Adolescent Contraceptive Care Delivery: A Reproductive Justice Issue. Journal of Pediatric and Adolescent Gynecology. 2023 Jun;36(3):298–303.
4. Dehlendorf C, Ruskin R, Grumbach K, Vittinghoff E, Bibbins-Domingo K, Schillinger D, et al. Recommendations for intrauterine contraception: a randomized trial of the effects of patients’ race/ethnicity and socioeconomic status. Am J Obstet Gynecol. 2010 Oct;203(4):319.e1-8.
5. Gomez AM, Wapman M. Under (implicit) pressure: young Black and Latina women’s perceptions of contraceptive care. Contraception. 2017 Oct;96(4):221–6.
6. Higgins JA, Kramer RD, Ryder KM. Provider Bias in Long-Acting Reversible Contraception (LARC) Promotion and Removal: Perceptions of Young Adult Women. Am J Public Health. 2016 Nov;106(11):1932–7.
7. Thorburn S, Bogart LM. African American women and family planning services: perceptions of discrimination. Women Health. 2005;42(1):23–39.
8. Reed R, Osby O, Nelums M, Welchlin C, Konate R, Holt K. Contraceptive care experiences and preferences among Black women in Mississippi: A qualitative study. Contraception. 2022 Oct;114:18–25.
9. Mann ES, Chen AM, Johnson CL. Doctor knows best? Provider bias in the context of contraceptive counseling in the United States. Contraception. 2022 Jun;110:66–70.
10. Stern AM. STERILIZED in the Name of Public Health: Race, Immigration, and Reproductive Control in Modern California. Am J Public Health. 2005 Jul;95(7):1128–38.
11. Roberts D. Killing the Black Body. New York, NY: Penguin Random House; 1998.
12. Brandi K, Fuentes L. The history of tiered-effectiveness contraceptive counseling and the importance of patient-centered family planning care. American Journal of Obstetrics and Gynecology. 2020 Apr;222(4):S873–7.
13. Kathawa CA, Arora KS. Implicit Bias in Counseling for Permanent Contraception: Historical Context and Recommendations for Counseling. Health Equity. 2020 Jul 1;4(1):326–9.
14. Dehlendorf C, Henderson JT, Vittinghoff E, Grumbach K, Levy K, Schmittdiel J, et al. Association of the quality of interpersonal care during family planning counseling with contraceptive use. Am J Obstet Gynecol. 2016 Jul;215(1):78.e1-9.
15. Oakley LP, Harvey SM, López-Cevallos DF. Racial and Ethnic Discrimination, Medical Mistrust, and Satisfaction with Birth Control Services among Young Adult Latinas. Womens Health Issues. 2018 Aug;28(4):313–20.
16. Baker K, Emery ST, Spike E, Sutton J, Ben-Porath E. Skin tone discrimination and birth control avoidance among women in Harris County, Texas: a cross-sectional study. BMC Public Health. 2024 Sep 2;24(1):2375.
17. Wingo E, Sarnaik S, Michel M, Hessler D, Frederiksen B, Kavanaugh ML, et al. The status of person-centered contraceptive care in the United States: Results from a nationally representative sample. Perspect Sex Reprod Health. 2023 Sep;55(3):129–39.
18. Welti K, Manlove J, Finocharo J, Faccio B, Kim L. Women’s experiences with person-centered family planning care: Differences by sociodemographic characteristics. Contraception: X. 2022;4:100081.
19. Frederiksen B, Ranji U, Long M, Diep K, Salganicoff A. Contraception in the United States: A Closer Look at Experiences, Preferences, and Coverage [Internet]. KFF; 2022 Nov [cited 2024 Jul 18]. Available from: https://www.kff.org/womens-health-policy/report/contraception-in-the-un…
20. Dehlendorf C, Wingo E, Gibson L, Goetsch-Avila S, Kriz R, Hessler D. Leveraging an equity-focused, data-driven quality improvement learning collaborative to advance contraceptive care in community health centers. Journal of the American Board of Family Medicine. 2025;(under review).
21. Dehlendorf C, Henderson JT, Vittinghoff E, Grumbach K, Levy K, Schmittdiel J, et al. Association of the quality of interpersonal care during family planning counseling with contraceptive use. American Journal of Obstetrics and Gynecology. 2016 Jul;215(1):78.e1-78.e9.
Feasibility
The Retrospective Person-Centered Contraceptive Counseling measure (PCCC-RS) is a patient-reported experience performance measure (PRE-PM) collected through patient surveys; measure calculation is based solely on information provided by patients responding to the instrument and thus cannot be captured from existing electronic sources. Accountable entities identify all patients who received services in the preceding six months, have a documented female sex, and are between 15 and 45 years of age. Survey distributors can utilize existing patient surveying efforts or distribute a standalone PCCC-RS survey and include an eligibility question (“In the last 6 months, did you talk about contraception or pregnancy prevention with a member of a healthcare team (including any doctor, nurse, medical assistant, etc.)?”) to identify patients eligible to answer the PCCC-RS survey questions. Patients’ PCCC-RS responses may be captured electronically or on paper from all eligible patients at the state/region or health plan level who report receiving contraceptive counseling within the six months prior to receiving the survey.
PCCC-RS measure implementation was completed by two partner organizations, Planned Parenthood Federation of America (PPFA) (affiliates=state-level equivalents) and Washington Health Care Authority (WA HCA) (health plan), using two different data collection mechanisms.
PPFA’s implementation of the PCCC-RS measure was integrated into a larger patient experience survey across twelve affiliates. Patients received a survey via email with a link to the anonymous survey, in which they were asked if they received contraceptive counseling in the preceding six months. Those who responded yes were prompted to complete the PCCC-RS. WA HCA implemented a stand-alone patient survey that was distributed to all patients who were enrolled in the Family Planning Only waiver and attended a visit in the preceding six months. Those patients received an anonymous survey link through text/email and follow-up was conducted via phone if patients did not complete the survey within two weeks.
The level of missingness from patient-collected information was <5% across all required items and optional demographic information. The PCCC-RS measure is a brief, four-item survey that can be paired with optional demographic questions to determine any equity considerations in quality improvement. Given survey brevity and anonymity, the measure is not susceptible to inaccuracies other than incomplete responses. Auditing data to detect problems can occur throughout the collection process. For example, if organizations implementing the PCCC-RS measure notice that patients are overwhelmingly submitting incomplete surveys, additional messaging can be added to communication containing the PCCC-RS surveys in initial and follow-up outreach.
We worked with PPFA and the WA HCA to implement the PCCC-RS for the purposes of validity and reliability testing. In doing so, we were able to benefit from and build upon our partner organizations’ existing strengths in implementing patient surveys, as well as work with these partners to address challenges and find solutions to complete data collection. Our collaboration with partners allowed for a deep understanding of implementation costs and burdens. Partner organizations’ iterative input related to project feasibility and existing survey practices guided the development of a standard but adaptable workflow for survey implementation, which all participating entities used in collection efforts. The resulting process consisted of administering the survey to patients who received services in the last six months. Implementation costs, burdens, and barriers were identified in regular check-ins with these partners over their collection period.
PPFA distributed their survey through the Planned Parenthood Health Outcomes, Measurement and Evaluation (PPFA HOME) Survey. Patients from twelve affiliates were contacted via email with a link to the anonymous survey. Patients were not surveyed in the health care setting, and there was no impact on clinical workflow. PPFA did not encounter significant barriers to collection other than having to extend its collection period to ensure adequate survey returns from all twelve affiliates. Data collection concluded in approximately six months.
WA HCA surveyed patients who were part of the Family Planning Only Waiver program. Their survey was distributed via an electronic survey link sent to patients via text message after identifying patients who had a visit in the last six months. Non-respondents were sent 2-3 follow-up reminders via text message over two weeks. Clients who did not respond to 2-3 reminders were called on the phone for a telephone survey. One implementation-related cost was utilizing the Washington State Department of Social and Health Services Research and Data Analysis (RDA) resource to survey patients via telephone. Patients were not surveyed in the health care setting, and there was no impact on clinical workflow. During implementation, improvements to survey collection were identified. Early evaluation from survey administrators and respondents found that respondents, when answering the screening question about whether they received contraceptive counseling, were erroneously screening themselves out, as they faced confusion about the healthcare setting included in the measure. The language initially read: “In the last 6 months, did you talk about contraception or pregnancy prevention with a member of a healthcare team (including any doctor, nurse, medical assistant, etc.)?”. Based on respondent feedback, they modified the question to say: “In the last 6 months, did you talk about contraception or pregnancy prevention with a member of a healthcare team (including any doctor, nurse, medical assistant at a Planned Parenthood, primary care provider, etc.)?” This led to an increased percentage of respondents selecting “Yes” to receiving contraceptive counseling and completing the survey. We have updated our implementation guidance to provide the opportunity to modify the question stem to describe in more detail the care settings at which respondents may have received care.
With respect to potential burdens faced by patients completing the survey themselves, as described in sections 5.3.3 and 5.3.4, we conducted face validity testing with a patient stakeholder group to explore their feelings about the completion of this survey and worked to minimize burden through understanding the optimal retrospective lookback period. The length of the survey (four items) was designed to capture the three domains of quality contraceptive counseling (interpersonal connection, adequate information, and decision support) while remaining short and limiting patient burden.
Patients were identified as eligible and then received a survey electronically. For WA HCA, patients had had a visit in the six months prior to survey collection as part of their participation in the Family Planning Only waiver. Electronic surveys were sent over email/text, and if there was a nonresponse, the RDA team reached out over a phone call to remind patients to look at their email/text to complete the survey. After patients were identified and contacted, data collection was not tied to their patient records, and thus was anonymous. Patients were informed that their answers to the patient survey were confidential and reports about the survey would not include names or information that could identify them.
For PPFA, the PPFA HOME survey was not tied to electronic health records or other sources of identifiable information about the patient. Anonymous electronic survey links were distributed via email and at the end of the survey, participants were directed to a separate survey to claim their gift card.
The measure is an adjusted form of the visit-specific PCCC, optimized for use in retrospective patient surveys to ask about any relevant patient visits over a period of time. The final measure specifications include a retrospective lookback period of six months. This period was determined based on patient and healthcare expert consensus during our modified Delphi process conducted during measure development. We also developed implementation guidance that allows entities to add specificity about the visit context to the question stem if needed for their setting.
Proprietary Information
Scientific Acceptability
Testing Data
Data for testing came from two sources: the Planned Parenthood Federation of America (PPFA), with affiliates serving as state-level equivalents (SLEs), and the Washington Health Care Authority (WA HCA), as an example of health plan-level collection.
PPFA administered the PCCC-RS survey through their Planned Parenthood Health Outcomes, Measurement and Evaluation (PPFA HOME) survey between 9/1/2023 and 4/1/2024 at 12 PPFA affiliates. Data from PPFA are described in the patient demographic table (Table 5.1, see Section 7 - Supplemental Attachment) and were used to assess the validity and reliability of the instrument. The survey response rate was 10%.
WA HCA administered the PCCC-RS survey utilizing the Washington State Department of Social and Health Services Research and Data Analysis (RDA) to administer an electronic survey via text and phone call between 5/1/2024 and 10/3/2024. Data from Washington are described in the patient demographic table (Table 5.2, see Section 7 - Supplemental Attachment) and were used to assess the validity and reliability of the instrument. The survey response rate was 34%.
Given that WA HCA serves as our only dataset from health plan-level collection, we do not use these data for score-level reliability or PRE-PM-level construct validity testing. For these analyses, we exclusively present findings from the analyses of PPFA, representing analyses at the state population level. Person-level reliability and validity tests use both datasets to reflect analyses at the health plan and state population level.
State-level equivalent (SLE) data (PPFA): The measure was tested at twelve PPFA affiliates. Affiliates vary in size and can cover geographic service areas ranging from several counties within a single state to an entire state or even multiple states. Among the twelve affiliates included in our dataset, there were 214 health centers across 21 states. Overall, for PPFA, there are 49 total affiliates with 599 health centers. For the purposes of this application, UCSF suggests that each affiliate be considered a proxy for a U.S. state, as was previously done in the endorsement processes for CBE #2902, 2903 and 2904. We utilized the PPFA data for reliability and validity testing.
Health plan data (WA HCA): The measure was tested via Washington State’s 1115 Family Planning Only (FPO) program demonstration waiver. The waiver extends eligibility for family planning services to uninsured people capable of producing children and certain groups that need confidential family planning services, all with income at or below 260 percent of the federal poverty level. FPO covers a single comprehensive sexual and reproductive health visit every 365 days, a range of FDA-approved birth control methods, and a limited scope of family planning-related services to help clients use their contraceptive methods safely and effectively. In a report from May 31, 2024, 94.2% of enrollees identified as female (1). We include WA HCA as an example of health plan-level collection. We utilized the WA HCA data for person-level reliability and validity testing.
REFERENCES
1. Washington State Health Care Authority. Quarter 3: Section 1115 Family Planning Only Demonstration Waiver, Demonstration Year 23: July 1, 2023 - June 30, 2024, Demonstration Reporting Period: January 1, 2024 - March 31, 2024. Published May 31, 2024. Accessed from: https://www.medicaid.gov/medicaid/section-1115-demonstrations/downloads…
Patients who received services in the preceding six months, who had recorded female sex, and who were between the ages of 15 and 45 years were contacted and asked to complete the survey. Patients were then further assessed for eligibility by answering a brief eligibility question at the beginning of the survey to determine if they had received contraceptive counseling in the preceding six months.
SLE: PPFA sent a survey to all patients who had received care at one of their twelve affiliates in the preceding twelve months. Survey efforts occurred between 9/1/2023 and 4/1/2024 at the discretion of each affiliate and resulted in 3,284 responses used for analyses.
Health plan-level: WA HCA data collection occurred between 5/1/2024 and 10/3/2024. They sampled patients by using billing data to identify all patients who had received care using the FPO waiver over a rolling time frame. These patients were sent an introductory text and an initial survey text from HCA communications, with 2-3 follow-up texts over the two weeks following initial communication. Then, the RDA team called non-respondents up to five times to either remind patients to take the survey or survey the patients over the phone, depending on patient preference. Patients were able to opt out of phone calls at any point. WA HCA collected 151 surveys used for analyses.
Within both datasets, respondents who had a recorded female sex, were aged 15-45 years, responded “Yes” to the PCCC-RS screening question, completed all four PCCC-RS items, and had not given birth in the preceding six months are included in the eligible population. Tables 5.1 and 5.2 (see Section 7 - Supplemental Attachment) describe patient characteristics of respondents by PPFA affiliate and from the WA HCA, respectively.
Reliability
Reliability testing for the four critical data elements (the four items that comprise the PCCC-RS survey) was examined with Cronbach’s alpha. Cronbach’s alpha is a measure of internal consistency that examines how closely related a set of items (critical data elements) are as a group or the extent to which the data elements measure the same concept or construct (1).
The formula for Cronbach’s alpha is α = Nc̅/(v̅ + (N – 1)c̅), where N is the number of items, c̅ is the average inter-item covariance, and v̅ is the average item variance. Item-total correlations were evaluated with Pearson’s product-moment correlation coefficient (r); the correlation between each individual item and the domain and/or global score omitting that item was assessed. Critical data element reliability was conducted with the PPFA dataset, representing state-level data, and, separately, with the WA HCA health plan data.
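As an illustration of this internal-consistency calculation, the following is a minimal Python sketch that computes Cronbach’s alpha and corrected item-total correlations for a respondent-by-item matrix. The data and column names are simulated and hypothetical; this is not the code used for the analyses reported here.

```python
import numpy as np
import pandas as pd

# Hypothetical respondent-by-item matrix of the four PCCC-RS items (1-5 scale);
# column names are illustrative, not the official item labels. With real,
# correlated item responses (as reported below), alpha would be high.
items = pd.DataFrame(
    np.random.default_rng(0).integers(1, 6, size=(200, 4)),
    columns=["respect", "information", "preferences", "decision"],
)

def cronbach_alpha(df: pd.DataFrame) -> float:
    """alpha = N * c_bar / (v_bar + (N - 1) * c_bar)."""
    n_items = df.shape[1]
    cov = df.cov()
    v_bar = np.diag(cov).mean()  # average item variance
    c_bar = (cov.values.sum() - np.trace(cov.values)) / (n_items * (n_items - 1))
    return n_items * c_bar / (v_bar + (n_items - 1) * c_bar)

# Corrected item-total correlations: each item vs. the sum of the remaining items
item_total = {
    col: np.corrcoef(items[col], items.drop(columns=col).sum(axis=1))[0, 1]
    for col in items.columns
}

print(round(cronbach_alpha(items), 3), {k: round(v, 2) for k, v in item_total.items()})
```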
Accountable entity reliability
For both conceptual and analytic reasons, and aligned with the visit-specific PCCC measure, our calculation of the performance score used a dichotomous scoring system: any survey giving the highest rating on all four questions (a total score of 20/20) is considered a positive score, whereas any survey giving a less-than-optimal rating on any of the four questions is considered a negative score. To assess reliability of this measure, we adhered to the recommendations in the National Quality Forum-commissioned paper entitled “Patient-Reported Outcomes in Performance Measurement,” in which the signal-to-noise ratio (SNR) is recommended as a measure of reliability (2), with signal defined as the variance in a performance measure due to systematic differences across units, and noise as the residual variance due to random error within units. We have focused on the equivalent Spearman-Brown (S-B) measure of reliability (3–6), which is a function of the intraclass correlation (ICC), defined as the ratio of between-unit variance to the sum of between- and within-unit variances, and the prospective panel size to be used in evaluating state-level performance. Specifically, with a prospective panel size of n and ICC = Vb/(Vb + Vw), where Vb and Vw denote the between- and within-unit variances, we have
Reliability_S-B = nICC/(1 + (n – 1)ICC)
= [nVb/(Vb + Vw)] / [1 + (n – 1)Vb/(Vb + Vw)]
= nVb/(nVb + Vw)
= Vb/(Vb + Vw/n)
= SNR
Thus the reliability of a profile increases as the panel size increases and as the difference in practice patterns between accountable entities becomes larger.
Given that we only have one health plan dataset, we exclusively conducted these analyses utilizing PPFA data, representing state-level units. With a binary performance measure, and respondents nested within units of analysis (PPFA affiliates), the ICC can be estimated using a normal-logistic model with nested random effects for affiliates. Specifically, our analysis used the Stata melogit and estat icc commands to estimate the ICC with 95% confidence intervals. Confidence bounds for S-B reliability were estimated by plugging the confidence limits for the ICC provided by estat icc into the formula for the Spearman-Brown reliability. Our analysis was implemented using Stata Version 16.0 (StataCorp LLC, College Station, TX).
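To make the plug-in step concrete, the following minimal Python sketch (not the Stata code used in the analysis) converts an ICC and its confidence limits into Spearman-Brown reliabilities for a few panel sizes, using the ICC point estimate and 95% confidence interval reported below (0.024; 0.008–0.069); the panel sizes shown are illustrative.

```python
def spearman_brown(icc: float, n: int) -> float:
    """Spearman-Brown (signal-to-noise) reliability for a panel of n respondents."""
    return n * icc / (1 + (n - 1) * icc)

# Reported ICC point estimate and 95% confidence limits for the PPFA (SLE) data
icc_point, icc_lo, icc_hi = 0.024, 0.008, 0.069

for n in (50, 100, 150, 300):  # illustrative panel sizes
    print(n,
          round(spearman_brown(icc_lo, n), 2),
          round(spearman_brown(icc_point, n), 2),
          round(spearman_brown(icc_hi, n), 2))
```

At a panel size of 150, the point estimate reproduces the approximately 0.79 reliability reported below.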
REFERENCES
1. Cronbach LJ. Coefficient Alpha and the Internal Structure of Tests. Psychometrika. 1951 Sep;16(3):297–334.
2. Deutsch A, Smith L, Gage B, Kelleher C, Garfinkel D. Patient-reported outcomes in performance measurement. Commissioned paper prepared for the National Quality Forum. Washington, DC; 2012.
3. Spearman C. Correlation calculated from faulty data. British Journal of Psychology, 1904-1920. 1910 Oct;3(3):271–95.
4. Brown W. Some experimental results in the correlation of mental abilities. British Journal of Psychology, 1904-1920. 1910 Oct;3(3):296–322.
5. Hofer TP. The Unreliability of Individual Physician “Report Cards” for Assessing the Costs and Quality of Care of a Chronic Disease. JAMA. 1999 Jun 9;281(22):2098.
6. Eijkenaar F, Van Vliet RCJA. Profiling Individual Physicians Using Administrative Data From a Single Insurer: Variance Components, Reliability, and Implications for Performance Improvement Efforts. Medical Care. 2013 Aug;51(8):731–9.
Data element reliability
Using the datasets described in section 5.1.1, we calculated the Cronbach’s alpha for the PCCC-RS.
State-level equivalent analysis: The PCCC-RS measure has a Cronbach’s alpha of 0.94 (Item-total correlations ranged from 0.80 - 0.89). Inspection of Cronbach’s alpha if any single item was deleted yielded no improvement in Cronbach’s alpha.
Health plan level analysis: The PCCC-RS measure has a Cronbach’s alpha of 0.84 (Item-total correlations ranged from 0.52 - 0.67). Inspection of Cronbach’s alpha if any single item was deleted yielded no improvement in Cronbach’s alpha.
Accountable entity-level reliability
Table 5.3 displays the SLE estimates of Spearman-Brown reliabilities for a range of panel sizes (i.e., the number of patient respondents). Our ICC of 0.024 (95% CI: 0.008, 0.069) resulted in adequate reliability at a moderate panel size of 150.
Accountable Entity Level Reliability Testing Results by Denominator, Target Population Size
Our PPFA dataset included twelve affiliates, representing the units of analysis. Thus, we were unable to split them into deciles. We instead provide summary statistics, including mean, minimum, maximum, and median, in Table 5.4 (see Section 7 - Supplemental Attachment). As we only have data from one health plan, we are unable to provide reliability estimates at the health plan level.
Data element reliability
A value for Cronbach’s alpha of 0.70 or above is viewed as acceptable, and a value of 0.90 or above is excellent, indicating strong internal consistency of the measure (1). Our reported value of 0.94 in the state-level analysis is in the excellent category, and our value of 0.84 in the health plan-level analysis is in the good category. Given the role of sample size in calculating Cronbach’s alpha, we believe the lower alpha for the WA HCA sample likely results from the smaller number of respondents available for testing from a single health plan. Notwithstanding, both are strong and sufficient results, especially in light of the small number of items (four), providing assurance that the high Cronbach’s alpha is not inflated by instrument length.
Accountable entity-level reliability
Our results indicate moderate to high reliability across our state-level sample (0.73 – 0.96) and are consistent with recommendations for reliability estimates for performance measurement of >0.7. These values also compare favorably with reliability estimates for the CG-CAHPS surveys, including the communication composite score, which has been reported to have a reliability of 0.62-0.81 (2).
To ensure high reliability in the context of high variability in provision sites at the state/region level, we recommend state/region entities utilize a minimum panel size of 150 respondents to result in an estimated reliability of 0.79. We expect a similar level of variability at the health plan level and would recommend the same minimum panel size. This will be reassessed with testing conducted in the future for measure maintenance.
REFERENCES
1. De Vellis R. Scale Development: Theory and Applications. 2nd ed. Vol. 26. Thousand Oaks, CA: Sage Publications; 2003.
2. Dyer N, Sorra JS, Smith SA, Cleary PD, Hays RD. Psychometric Properties of the Consumer Assessment of Healthcare Providers and Systems (CAHPS®) Clinician and Group Adult Visit Survey. Medical Care. 2012 Nov;50:S28–34.
Validity
Data element validity
Our data element validity is based on prior testing completed as part of our submission for endorsement by the CBE for the visit-specific PCCC (CBE #3543). Briefly, we evaluated the visit-specific PCCC critical data element validity by assessing the association between each of the four critical data elements (individual PCCC scale items) and specific clinician communication practices consistent with patient-centered care, assessed from audio recordings of 341 clinical visits using measures derived from the Four Habits Coding Scheme (4HCS). We ran linear mixed models to assess the association between individual visit-specific PCCC items, treated as continuous (1-5 response scale), and the specified 4HCS components (aggregated across specified items).
Since data elements remain the same for the PCCC-RS, we did not conduct additional data element validity testing for this application.
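For illustration only, a linear mixed model of the kind described above could be sketched in Python as follows. The data are simulated, the variable names (pccc_item, fhcs_component, provider_id) are hypothetical, and the original testing for CBE #3543 was not necessarily performed with this code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical visit-level data: one PCCC item response (1-5) and the matched
# 4HCS component score per recorded visit, with a provider identifier used for
# the random intercept; values and names are illustrative.
rng = np.random.default_rng(3)
visits = pd.DataFrame({
    "provider_id": rng.integers(1, 41, 341),
    "pccc_item": rng.integers(1, 6, 341),
    "fhcs_component": rng.normal(2.5, 0.8, 341),
})

# Linear mixed model: PCCC item (continuous) on the matched 4HCS component,
# with a random intercept for provider to account for clustering of visits
result = smf.mixedlm("pccc_item ~ fhcs_component", data=visits,
                     groups=visits["provider_id"]).fit()
print(result.summary())
```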
Systematic face validity
We conducted a modified Delphi process to iteratively reach a consensus on the PCCC-RS and determine systematic face validity. This process focused on the elements of the validated visit-specific PCCC that were to be modified for the retrospective adaptation, namely the time period and means of capturing all encounters within that period of time. No changes were made to the language of the four items that comprise the scale. This modified Delphi process was complemented by engagement with a Patient Stakeholder Group (PSG) to understand validity from a patient perspective.
The modified Delphi process engaged 25 measurement experts, which we defined as people with operational and/or administrative expertise related to quality improvement in reproductive health and/or primary care, ranging from facility- to plan- to state-level engagement. We also engaged with five members of our PSG, who have invaluable lived experience, to provide feedback during this process. The process had several rounds of feedback, including two survey rounds with the measurement expert group and two meetings with the patient stakeholder group. The entire process took place from May to August 2023. Both groups were compensated for their time spent on engagement.
Individuals in the measurement expert group were recruited in May 2023, and surveys were circulated in June and July 2023. Survey One asked experts to assess the lookback period, the format of survey items, and the feasibility of implementation. Responses were aggregated, and the team discussed results prior to the first PSG meeting. At the first PSG meeting, patients were introduced to the PCCC-RS measure and the initial modifications proposed by the measurement expert group. A discussion was held wherein the PSG reacted to suggested modifications and the idea of a retrospective measure of PCCC. Both the survey with measurement experts and engagement with the PSG resulted in initial modifications to the survey. A second survey was distributed to the measurement expert group, in which they were given a series of proposed changes and asked how satisfied they were with the proposed changes and how feasible they felt the proposed changes were. This review period culminated in a final presentation to the PSG. Members of the PSG offered final suggestions and confirmed that the lookback period felt feasible from a patient perspective. The PCCC-RS was finalized prior to pilot data collection.
Construct validity
Convergent validity was assessed by examining associations between the PCCC-RS and two patient-reported measures: (1) satisfaction with provider help with the choice of a birth control method and (2) satisfaction with the method choice. These two single-item measures were collected as part of the same data collection efforts as the PCCC-RS scores. The choice of these measures of validity was based on the fact that measures of satisfaction often correlate with measures of patient-centered processes of care but are considered distinct. We conceptualized the PCCC-RS as being more specific than measures of satisfaction, as satisfaction measures tend to be informed by expectation disconfirmation theory (i.e., the extent to which an experience exceeded or fell below expectations (1,2)) and have the additional limitations of lack of differentiation and lack of specificity of measured behaviors (3). In contrast, the items in the PCCC-RS assess the extent to which the patient experienced or perceived specific types of communication and exchanges consistent with patient-centered care. To measure satisfaction with how the provider helped with the choice of a birth control method, women were asked to rate “How satisfied are you in how your healthcare provider helped you to choose what birth control method to use” on a 7-point Likert scale from excellent to poor. Satisfaction with the method choice was assessed using the question “How satisfied are you with your choice of birth method at this visit?”, also using a 7-point Likert scale. To align with PCCC-RS scoring, responses in the highest or most positive rating category were compared to all others.
Aligned with guidance for PRO-PMs (4), we first tested the constructs at the Patient-Reported Experience Measure (PREM) level and then conducted testing at the Patient-Reported Experience Performance Measure (PRE-PM) level.
For the PREM level, we conducted two logistic regression models with the binary PCCC-RS score as the outcome and binary scoring of provider satisfaction and method satisfaction as predictors, respectively. We tested the need for random effects at the affiliate level by assessing the postestimation ICC and determined that random effects were not necessary for the model. We then ran two simple univariate logistic regression models. PREM testing was conducted with both the PPFA dataset (SLEs) and the WA HCA data (health plan).
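To show the shape of the PREM-level analysis, the following is a minimal Python sketch of the two univariate logistic regressions using simulated respondent-level data. The variable names (pccc_top_box, provider_sat_top, method_sat_top) are hypothetical, and the actual analyses in this submission were not necessarily run in Python.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical respondent-level data: binary top-box PCCC-RS score and binary
# "most positive rating" indicators for the two satisfaction items.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "pccc_top_box": rng.integers(0, 2, 500),
    "provider_sat_top": rng.integers(0, 2, 500),
    "method_sat_top": rng.integers(0, 2, 500),
})

# Two simple univariate logistic regressions, one per satisfaction predictor
provider_model = smf.logit("pccc_top_box ~ provider_sat_top", data=df).fit(disp=False)
method_model = smf.logit("pccc_top_box ~ method_sat_top", data=df).fit(disp=False)

print(provider_model.params, method_model.params)
```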
For the PRE-PM level, we averaged PCCC-RS scores, provider satisfaction scores, and method satisfaction scores within each unit of analysis. Given that we only had one health plan unit available, PRE-PM testing was exclusively conducted with PPFA data, comprised of affiliates as SLEs. We estimated Pearson’s correlation coefficient (r) to assess the strength and direction of the linear relationship between the mean PCCC-RS score by affiliate and each of the mean scores of satisfaction with method selection and with provider by affiliate. We then conducted two univariate Ordinary Least Squares (OLS) regression models with the mean PCCC-RS score as the outcome and each of the satisfaction variables, averaged at the state-level equivalent level, as predictors.
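A parallel sketch of the PRE-PM-level analysis, again with simulated data and hypothetical variable names, aggregates respondent-level scores to affiliate means before estimating Pearson’s r and the univariate OLS models; it is illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

# Hypothetical respondent-level data with an affiliate identifier (12 affiliates
# standing in for state-level equivalents); values are illustrative only.
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "affiliate": rng.integers(1, 13, 3000),
    "pccc_top_box": rng.integers(0, 2, 3000),
    "provider_sat_top": rng.integers(0, 2, 3000),
    "method_sat_top": rng.integers(0, 2, 3000),
})

# Average each score within the unit of analysis (affiliate)
means = df.groupby("affiliate")[["pccc_top_box", "provider_sat_top", "method_sat_top"]].mean()

# Pearson correlations between mean PCCC-RS score and each mean satisfaction score
r_provider, _ = pearsonr(means["pccc_top_box"], means["provider_sat_top"])
r_method, _ = pearsonr(means["pccc_top_box"], means["method_sat_top"])

# Univariate OLS regressions at the affiliate level
ols_provider = smf.ols("pccc_top_box ~ provider_sat_top", data=means).fit()
ols_method = smf.ols("pccc_top_box ~ method_sat_top", data=means).fit()

print(round(r_provider, 2), round(r_method, 2))
```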
At both the PREM and PRE-PM level, we hypothesized a moderate to strong positive relationship between the PCCC-RS score and satisfaction with contraceptive method and satisfaction with provider.
Analysis of missing data
We identified the extent and distribution of missing items among respondents. We evaluated the pattern of missingness to assess whether items were missing completely at random. As missingness did not exceed 5% for any variable necessary for validity testing, we conducted a complete case analysis, aligned with best practice for handling item-level missingness (5).
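A minimal sketch of this missingness check and complete case filter, using a small hypothetical frame with illustrative column names, could look as follows.

```python
import numpy as np
import pandas as pd

# Hypothetical analysis frame with the four PCCC-RS items and the two
# satisfaction variables; np.nan marks item-level missingness.
df = pd.DataFrame({
    "item1": [5, 4, np.nan, 5, 5],
    "item2": [5, np.nan, 3, 5, 4],
    "item3": [5, 4, 3, np.nan, 5],
    "item4": [5, 4, np.nan, 5, 5],
    "provider_sat": [7, 6, np.nan, 7, 7],
    "method_sat": [7, np.nan, 5, 7, np.nan],
})

# Percent missing per variable used in validity testing
print((df.isna().mean() * 100).round(1))

# Complete case analysis: keep only respondents with no missing analysis variables
complete_cases = df.dropna()
```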
Assessment of non-response bias
This PRE-PM is collected through an anonymous survey and, by design, not connected to other sources of patient information, such as electronic health records. This practice serves to protect data quality and improve response rates, as patients may be concerned about care consequences if they report lower scores and, as a result, choose to either record higher scores that do not reflect quality of care or refuse to respond to the survey if they know or suspect it will be linked to their patient records. Thus, we do not have demographic information on nonrespondents and cannot assess differences between groups.
Moreover, the survey is distributed to patients identified as likely having received contraceptive counseling in the preceding six months. This is verified when respondents answer the survey screening question, confirming that they did, in fact, receive counseling. Without confirmation of eligibility of nonrespondents, we do not know the true population of those who received contraceptive counseling among those who were contacted, and thus, cannot assess differences between that population of interest and the sample who completed the survey.
REFERENCES
1. Kupfer JM, Bond EU. Patient Satisfaction and Patient-Centered Care: Necessary but Not Equal. JAMA. 2012 Jul 11;308(2):139.
2. Williams S, Weinman J, Dale J, Newman S. Patient expectations: What do primary care patients want from the GP and how far does meeting expectations affect patient satisfaction? Fam Pract. 1995;12(2):193–201.
3. Cella DF, Hahn EA, Jensen SE, Butt Z, Nowinski CJ, Rothrock N. Methodological issues in the selection, administration and use of patient-reported outcomes in performance measurement in health care settings. Paper prepared for the National Quality Forum. Washington, DC; 2012.
4. National Quality Forum. Patient Reported Outcomes (PROs) in Performance Measurement [Internet]. 2013 Jan. Available from: https://www.qualityforum.org/Publications/2012/12/Patient-Reported_Outc…
5. Heymans MW, Twisk JWR. Handling missing data in clinical research. Journal of Clinical Epidemiology. 2022 Nov;151:185–8.
Data element validity
Table 5.5 shows results from the visit-specific PCCC validity testing (CBE #3543). These data elements were adapted to the PCCC-RS with no changes, making inclusion of this evidence appropriate for the PCCC-RS evaluation. Per the 4HCS coding system, 4HCS components are interpreted such that lower values indicate highly effective use of a code, whereas higher values indicate less effective use. As shown in Table 5.5, each critical data element from the visit-specific PCCC was significantly associated with the specific 4HCS component selected on the basis of conceptual match, with negative betas signifying that higher visit-specific PCCC scores were associated with lower (i.e., more effective use) scores on the 4HCS component. Betas presented in Table 5.5 represent the change in the visit-specific PCCC item score for each unit change in the 4HCS score. All results remained significant, with a virtually identical pattern of findings, when testing the visit-specific PCCC items dichotomized (5 vs. else) and when not adjusting for provider.
Systematic face validity
A modified Delphi process with measurement experts and engagement with a PSG was used to inform the final measure and document systematic face validity. Twenty-five measurement experts received two surveys asking open- and closed-ended, Likert scale questions. The first survey included four open-ended questions exploring options for modifying the visit-specific PCCC’s inclusion question, measure introduction, and question stem. One key theme in ten responses from experts was the necessity of clarity for patients on the scope of the measure (experiences with all visits in the specified time period with all members of the healthcare team from whom they received contraceptive counseling). Eleven experts also advised that the term "healthcare provider" be made plural and defined to ensure patients are thinking about all possible visits and interactions where they received counseling. The first survey also asked for feedback on potential lookback periods that patients would be reflecting on (e.g., 3, 6, or 12 months). Five experts noted that HP-CAHPS and CG-CAHPS use a 12- and 6-month lookback period, respectively, which could be possible lookback periods for the PCCC-RS survey. While six experts cautioned that recall becomes difficult after six months, one expert noted that most patients would only be receiving contraceptive counseling once per year, and surveying a six-month lookback period might exclude some patients. Experts were also asked a closed-ended, Likert scale question about how likely they were to consider results from the PCCC-RS survey an accurate reflection of the quality of contraceptive counseling; 64% of experts selected at least a 7 on a scale of 1-9, with 9 being very likely. These results were reviewed with the PSG, and modifications were made to the measure.
The second survey included a summary of changes to the PCCC-RS measure following the first expert survey and the first meeting of the patient stakeholder group. Experts were asked several satisfaction questions on a scale of 1-9, with 9 being very satisfied. There was overall satisfaction (noted as the percentage of experts selecting at least a 7 on a scale of 1-9) with the changes made to the eligibility question (90%), measure introduction (75%), and question stem (90%). Satisfaction was also indicated for the six-month lookback period (85%). In an open-ended question asking for feedback on any additional changes, four experts advised that allowing entities to change the measure introduction and question stem to be specific to the population would likely support implementation of the PCCC-RS (e.g., explicitly naming that an entity is surveying patients who went to Planned Parenthood). Overall, engagement from the two measurement expert surveys and two PSG meetings provided valuable feedback for modifying the visit-specific PCCC measure for retrospective use. For example, when contemplating what lookback period would be optimal for both patients taking the survey and for retrospective reporting, the measurement experts and patient stakeholder group deliberated on whether that lookback period should be six months, twelve months, or some other amount of time. At the end of the surveys and engagement, six months was determined to be the optimal lookback period. Additionally, survey language was expanded from mentioning only “birth control” to “contraception or pregnancy prevention” following feedback from experts and the PSG. Other changes were made to both the measure introduction and question stem. The measure introduction asked patients to think about their “appointments” rather than a singular visit in the last six months. The question stem also expanded language to ask patients how they think the “member(s) of the healthcare team (including any doctor, nurse, medical assistant, etc.) did” instead of a singular provider. No changes were made to the PCCC-RS survey items.
Construct validity
As displayed in Table 5.6, the PCCC-RS is positively and significantly associated with both satisfaction with contraceptive method and satisfaction with healthcare provider, at both the PREM and PRE-PM levels (all p-values <0.001). We also found an estimated Pearson’s correlation of 0.63 (p<0.001) between average PCCC-RS scores and average satisfaction with contraceptive method choice, and a Pearson’s correlation of 0.84 (p<0.001) between average PCCC-RS scores and average satisfaction with provider help with method choice, demonstrating a strong positive relationship between constructs.
Analysis of missing data
We investigated item-missingness within the eligible sample. Percent missingness across all variables used in validity testing was low and ranged from 1.9% (first item of survey) to 4.0% (measure of method satisfaction). Item missingness increased in the order the items were presented in the survey, suggesting attrition bias. However, level of missingness did not exceed 5% for any item used in analysis. In line with best practice, we conducted a complete case analysis (1).
REFERENCES
1. Heymans MW, Twisk JWR. Handling missing data in clinical research. Journal of Clinical Epidemiology. 2022 Nov;151:185–8.
We provide two distinct sources of evidence of the validity of the PCCC-RS. First, in systematic face validity, healthcare experts and patients agree that the measure can determine better versus worse care and the lookback period is adequate and reasonable for patient recall. Second, our construct validity results show a strong positive association with two related measures of satisfaction, as we hypothesized. Together, these findings support the validity of the PCCC-RS measure. While we were only able to assess construct validity with the state-level equivalent sample, we expect the measure to be equally valid at the health plan level.
Risk Adjustment
We do not believe that risk adjustment is justified. While it is possible that different demographic groups may report different results on the PCCC-RS measure, this would represent true differences in patient-centeredness, due to the manner in which the questions are framed and the fact that the concepts of respect, attention to preferences, and adequate provision of information are generally desirable. This is consistent with the interpretation of the visit-specific PCCC and how it has been endorsed.
With respect to the question of stratification, we do note that studies have suggested that women of color receive poorer quality contraceptive counseling than their white counterparts. These disparities are rooted in the long history in the United States of coercion on the part of the reproductive health care system towards women of color, including forced sterilization and pressure to use long-acting contraceptive methods. While this suggests that stratification by race/ethnicity may be desirable in order to assess differences in care, neither the PCCC-RS nor the visit-specific measure on which it is based has yet been evaluated to assess these differences in an accurate and nuanced way. Given the findings in Section 3 of this application, specifically that there were in fact empirical differences in PCCC-RS scores by race/ethnicity and age, and indications of differences by language, future stratified analyses are warranted. We acknowledge the existence of these disparities and intend to use the PCCC-RS to examine them more closely in the future. In the current application, we wish to demonstrate the validity and reliability of the measure overall before taking a focused approach to examining disparities in quality of care.
Use & Usability
Use
Usability
Measurement and reporting of the Retrospective Person-Centered Contraceptive Counseling measure (PCCC-RS) can be used to identify the need for improvement of care quality at a population or system level. Identifying this need can lead to systems-level interventions, such as education and training initiatives, leveraging financial incentive programs such as pay for performance, and provision of resources, including visual aids and web-based content such as decision aids, to enhance contraceptive counseling. If low scores are identified, the visit-specific PCCC measure (CBE #3543) can be strategically deployed to identify lower-performing sites or plans, allowing for more targeted quality improvement (QI) initiatives. Additionally, analysis of low-scoring population subgroups can aid in understanding the root causes of low performance, allowing for appropriate intervention.
QI actions may vary in amount of effort needed to achieve favorable results, in accordance with what scores indicate about performance (e.g. how much improvement is needed) and the size and structure of the accountable entity. Entities can overcome difficulties by leveraging existing expertise within staff and engaging with patient stakeholder groups who are already focused on quality improvement, patient engagement, and language interpretation.
Following the implementation of these interventions, we recommend re-measurement of the PCCC-RS to track change and reevaluate quality. When used in combination with other currently endorsed contraceptive care measures, such as the Contraceptive Care claims measures (CBE #2902, #2903, and #2904), the PCCC-RS can also serve as a tool for identifying whether attention to method provision is leading to a decrease in attention to patient-centeredness and allow for intervention if such an effect is observed.
Potential unintended consequences due to the measure’s planned use include possible discomfort from patients reflecting on their experiences in the preceding six months, which may include negative experiences with providers. However, the key benefit that outweighs this is that the PCCC-RS emphasizes the importance of patient input, with the goal that patients are able to reflect on their experiences and the quality of their care, toward improving the overall quality of contraceptive counseling.
Comments
Staff Preliminary Assessment
CBE #4825 Staff Preliminary Assessment
Importance
Strengths
- A clear logic model is provided, depicting the relationships between inputs (e.g., staff time for survey administration and data management; training protocols for staff on collecting the Retrospective Person-Centered Contraceptive Counseling (PCCC-RS) measure; contraceptive counseling guidelines, including the American College of Obstetricians and Gynecologists (ACOG) and Quality Family Planning Recommendations (QFP)), activities (e.g., webinars on contraceptive counseling, disseminating best practices), and desired outcomes (e.g., an increased number of patients who receive contraceptive counseling and can report on the counseling experience; improved patient experience; and improved reproductive health outcomes, including decreased undesired pregnancy and use of preferred contraceptive methods). This model demonstrates how the measure’s implementation will lead to the anticipated outcomes.
- If implemented, the developer posits that the measure’s anticipated impact on important outcomes, such as having patients report on their counseling experiences and share feedback with administrators at their health plan and increasing administrator knowledge and awareness of the quality of contraceptive counseling within their states or networks, is expected to better support patients and meet their reproductive health needs by improving contraceptive counseling, thereby establishing better trust with patients, leading to improved adherence to care plans and better health outcomes.
- Data from two national surveys - the National Survey of Family Growth (NSFG) from 2017-2019 and a Kaiser Family Foundation (KFF) survey from 2022 - demonstrated a gap in optimal counseling, with only 58% and 40% of respondents, respectively, reporting optimal care. Through the use of the visit-specific PCCC measure (CBE #3543), scores ranged from 30% to 95%, suggesting differential care at the health facility level.
- This measure is supported by a comprehensive literature review, including systematic reviews with high evidence quality, demonstrating a clear net benefit in terms of improved outcomes related to contraceptive counseling experiences, such as contraceptive continuation at six months and continued engagement with the reproductive health care system for pregnancy, birth, and other reproductive health needs, for patients aged 15-45 who were assigned female at birth, who are not currently pregnant, and who received contraceptive counseling as part of their visits in the six months prior to being surveyed.
- The proposed measure addresses a health care need not sufficiently covered by existing measures (e.g., CBE #3543), offering advantages in terms of the levels of analysis and the retrospective lookback approach. Use of this measure at the state/region and health plan levels uplifts the importance of patient experience metrics as population metrics and enables public reporting and accountability. While not required for initial endorsement, data from 12 Planned Parenthood Federation of America (PPFA) affiliates acting as state-level equivalents (SLEs) for population-level analyses from 9/1/2023 to 4/1/2024 show a performance gap, with decile scores ranging from 36.9% to 72.9%, indicating variation in measure performance and demonstrating opportunity for improvement.
- Description of patient input supports the conclusion that the measured PRE-PM is meaningful with at least moderate certainty. Patient input was obtained through iterative engagement with the Person-Centered Reproductive Health Program’s Patient Stakeholder Group (PSG), a standing workgroup of nine individuals assigned female at birth.
Limitations
- Data from the Washington Health Care Authority (WA HCA), collected from 5/1/2024 to 10/3/2024, were the only source of data at the health plan level. Since data are only available from one health plan, deciles of scores were not provided in the submission for that level of analysis. This will not impact the rating, as this is not required for initial endorsement.
Rationale
- This new measure meets all criteria for ‘Met’ due to its significant anticipated impact, its robust evidence base, the justification for its use compared to existing measures, and well-articulated logic model, making it essential for addressing patients’ contraceptive counseling experiences.
Closing Care Gaps
Strengths
- The developer provided evidence of gaps in care related to the measure focus for subgroups, including a randomized controlled trial and qualitative research, and their claim that the measure will help close care gaps by assessing inadequate and inequitable contraceptive counseling experiences among specific populations and enable patients to meet their reproductive goals is credible.
- The measure’s performance was empirically tested across all identified subgroup variables, including age, race/ethnicity, and language; the developer’s rationale for selecting these subgroups is based on existing evidence. Data for the analyses were from surveys of Planned Parenthood Federation of America (PPFA) affiliates (state-level equivalents) and the Washington Healthcare Authority (WA HCA) (health plan). The analyses employed two mixed-effects logistic regressions to assess differences in measure scores across these subgroups within the PPFA dataset.
- The analysis revealed significant differences in performance scores by age, race/ethnicity, and language. For example, in regression analyses among PPFA affiliates, respondents who identified as 25+ years had greater odds of reporting a top box score in comparison to individuals in the 18-24 years age group.
- Based on the findings, the developer notes recommended actions entities can take to monitor and close care gaps, including trainings provided to clinics and providers to increase person-centeredness of contraceptive counseling and deepen understanding and practice of cultural responsiveness and humility. Additionally, entities can incentivize clinics in their networks to engage with patient groups.
Limitations
- Due to limited sample and small cell size, regression analyses at the health plan-level were not conducted. Additionally, small cell sizes restricted race/ethnicity analysis to white, Hispanic/Latina, African American/Black, and Multiracial. Small cell sizes also resulted in not conducting inferential statistics to examine differences by language. This will not impact the rating as this is not required for initial endorsement.
- Due to the small sample size at the health plan-level, differences could not be statistically investigated. Additionally, the low volume of Spanish-speaking respondents did not allow for in-depth exploration of differences in scores. This will not impact the rating as this is not required for initial endorsement.
Rationale
- The measure sufficiently assesses gaps in care with respect to age and race/ethnicity, providing crucial insights into how accountable entities can use this measure to improve differences in care for these subgroups. This includes implementing trainings for clinics and providers to increase person-centeredness of contraceptive counseling and deepen understanding and practice of cultural responsiveness and humility. Additionally, entities can incentivize clinics in their networks to engage with patient groups.
Feasibility Assessment
Strengths
- The data elements required for measure calculation are collected retrospectively from patients using a survey that can be collected using electronic sources (e.g., email). While the survey is not administered during the course of care, it reflects the quality of interactions that took place during the course of care. The developers describe a path to implement routine, effective electronic collection.
- The developer stated that no feasibility issues were found requiring adjustment of the final measure specifications.
- The developer described the costs and burden associated with data collection, data entry, validation, and analysis. They discussed barriers encountered during implementation and data collection, which included PPFA having to extend their data collection to ensure optimal collection of surveys from all twelve affiliates. Additionally, early evaluation from survey administrators and respondents found that respondents faced confusion about the healthcare setting included in the measure, resulting in respondents erroneously screening themselves out. The developers noted mitigation approaches such as modifying the survey question for clarity and eliminating confusion among respondents.
- The developer described how all required data elements can be collected without risk to patient confidentiality including not linking data collection to patient records or electronic health records and distributing anonymous electronic survey links via email.
- There are no fees, licensing, or other requirements to use any aspect of the measure (e.g., value/code set, risk model, programming code, algorithm).
Limitations
- None identified.
Rationale
- This new measure meets all criteria for ‘Met’ due to its well-documented feasibility assessment, clear and implementable data collection strategy, and transparent handling of patient confidentiality, burden, licensing, and fees. These factors collectively ensure that the measure can be implemented effectively and sustainably in a real-world healthcare setting.
Scientific Acceptability
Strengths
- The developer clearly explains the method used to assess data element reliability (internal consistency). Data used for testing were collected over several months in 2023 and 2024.
- For data element reliability, the developer reported Cronbach's alpha of 0.94 for PPFA affiliates (state-level equivalents) and 0.84 for WA HCA, which are both above the threshold of 0.7.
- For the accountable-entity level, the reported reliability was greater than the 0.6 threshold for all entities included in the testing.
Limitations
- The developer did not describe the method used to estimate signal-to-noise reliability among PPFA affiliates (state-level equivalents) found in Table 5.4. The signal-to-noise reliability methodology is not required to be reported for a new measure, and does not affect the rating.
Rationale
- The developer reported person/encounter-level and accountable-entity level reliability testing on data collected within the last two years. The reported person/encounter-level reliability was above the threshold of 0.7 for internal consistency.
- Reported accountable-entity level reliability was above the threshold of 0.6 for all entities included in the testing.
Strengths
- Validity: As a new measure, the assessment focuses primarily on data element or person-level validity. Generally, that means comparing measure data against a gold standard. However, since the measure data could be argued to be the gold standard, the measure developer compared the measure data with specific clinician communication practices plausibly causally related to the measure data (either directly or from a common causal source). At the accountable entity level, the developer provides an Importance Table and a Logic Model and conducted an extensive Delphi process to inform face validity, providing a plausible causal association between the entity response to the measure and the measure focus. Empirical support for ruling out confounders includes adequate reliability and correlations with related satisfaction measures with construct overlap: (1) satisfaction with the method choice (r=0.63) and (2) satisfaction with provider help with the choice of a birth control method (r=0.84). Empirical support for ruling in responsible mechanisms includes several empirical studies (e.g., high-quality provider communication, elicitation of and engagement with patient preferences, equity-informed counseling practices, monitoring patient experience through validated measures, preventing unintended pressure to use specific methods).
Limitations
- Validity: The data element/person-level validity results are from the encounter version of the measure rather than from this version, which includes a retrospective period. So, although the questions are the same, the time period of response is different (the residual risk is the potential absence of external validity). At the accountable entity level, residual risk for confounders also includes the absence of risk adjustment, which cannot rule out confounding (e.g., patient race, income, language, and sexual orientation; provider emphasis on highly effective methods; differential patient willingness to return for care). Residual risk for a responsible mechanism includes potential counteracting mechanisms (setting-level differences in care quality, measure incentives from other contraceptive provision metrics).
- Risk adjustment (RA): The developer did not perform risk adjustment, but provided the rationale that observed differences in measure scores by demographic groups would represent true differences in patient-centeredness. The developer did not perform stratification, but noted intent to use the measure to examine disparities in care by stratified subgroups in future analyses.
Rationale
- MET Justification (validity): The measure developer provides support for the data element/person-level validity, although a condition might be to provide a more compelling justification for the external validity of this evidence relative to the measure as specified, with its retrospective response period. As a new measure, the developer need only establish validity at the data element/person level. At the accountable entity level, the measure developer also provides some support for the causal claim that the entity response to the measure is causally related to the measure focus. The developer provides empirical support for ruling out confounders (always with some residual risk of unstated or unexamined confounders) and for ruling in responsible mechanisms (always with some residual risk that the explicit mechanisms are only partially responsible for the measure focus). For the maintenance submission, additional evidence on the mitigation of these residual risks would strengthen the submission.
- MET Justification (RA): The developer did not perform risk adjustment or stratification; however, they provided the rationale that differences in measure scores by demographic groups would represent true differences in patient-centeredness. Future analyses incorporating risk adjustment and stratification are warranted as entity-level data become available to ensure fair comparisons.
Use and Usability
Strengths
- The measure is not currently in use, but the developer indicates a plan for use in public reporting, payment program, quality improvement with benchmarking (external benchmarking to multiple organizations), and quality improvement (internal to the specific organization).
- The developer provides a summary of how accountable entities can use the measure results to improve quality of care on a population or system level, leading to system-level interventions. Specifically, entities can engage in education and training initiatives and leverage financial incentive programs, such as pay for performance, and provision of resources.
Limitations
- The developer described potential unintended consequences such as possible discomfort from patients reflecting on their experiences in the preceding six months as a limitation. The developer plausibly argues that the measure’s benefits outweigh the potential unintended consequences identified.
Rationale
- For initial endorsement, there is a clear plan for use in at least one accountability application, and the measure provides actionable information for improvement. The developer described potential unintended consequences such as possible discomfort from patients reflecting on their experiences in the preceding six months as a limitation but plausibly argues the measure’s benefits outweigh the potential unintended consequences.
Committee Independent Review
4825 - Support
Importance
The logic model effectively describes its components and incorporates patient input, alongside considerations for measure implementation. However, the explanation of the feedback mechanism, specifically regarding the review of scores, could be more robust in detailing how insights will be utilized to inform improvements.
Closing Care Gaps
An RCT (Randomized Controlled Trial) investigated variations in the quality of contraceptive care, with findings further supported by qualitative research. The measure demonstrates potential for identifying disparities, and a quality collaborative reported that an intervention resulted in improved measure scores.
Feasibility Assessment
The measure is collected electronically and was implemented via various data collection mechanisms. There were no apparent issues with implementation or barriers to data collection.
Scientific Acceptability
For the two PPFA affiliates, the person/encounter level reliability indicated an ICC greater than 0.7. Recall bias might still be an issue for reliability given the 6-month lookback period.
The validity was established at the person level, but it is from the encounter version of the measure. There are also issues around confounders at the accountable-entity level.
Use and Usability
The measure is not in use but could be used to improve contraceptive counseling and allow QI methods to target specific sites.
Summary
The measure can help drive improvements in contraceptive counseling especially when looking at the potential for addressing gaps in care. Further examination of the 6-month lookback period might be interesting, perhaps by exploring results with alternative periods such as 3 months.
Do Not Support
Importance
Criteria Met
Closing Care Gaps
Criteria met
Feasibility Assessment
The feasible use of this measure at a state/region or health plan level is not clear. The care setting is primary care and reproductive health settings. Feasibility in Planned Parenthood was assessed; however, how feasible is it for a primary care practitioner to implement this measure? Cost and burden for primary care would also be a concern.
Scientific Acceptability
The level of analysis for the measure is health plan and population/geographic area, however the care setting is primary care and reproductive health settings. The measure is a patient survey on contraception care counseling. The developer used data for testing from planned parenthood and the Washington Health Care Authority. Based on this information, the data used in testing should be from the care setting, which was defined as primary care and reproductive health settings. It is unclear why this measure is testing at the health plan level. Testing was provided from Planned Parenthood, however additional testing in other primary care settings is needed.
Use and Usability
I question the usability of this measure, based on the information provided in the submission, with the level of analysis being at the health plan and population/geographic area and the care setting being primary care and reproductive health settings. How would primary care clinicians specifically use this measure? Should this be a questionnaire that is embedded into other questionnaires already sent to patients who are seen in the primary care setting? The use of the measure in health plans is not clear, and my assumption would be that many providers would caution against the use of a PRE-PM measure for payment.
Summary
Criteria were not met in the submission, as mentioned above. While the topic of this measure is important overall, I do not feel it meets the needs of being a stand-alone measure.
I think the use and…
Importance
There is evidence provided supporting the importance of the topic and the rationale for wanting to implement such a survey.
Closing Care Gaps
There is some baseline data about care gaps, but it is not clear how surveying will necessarily close the gap without additional interventions by the clinic or health system. In addition, if the survey response is low, it is not clear how a biased sample will be that helpful.
Feasibility Assessment
The survey can be deployed in a number of different methods.
Scientific Acceptability
There is data provided supporting reliability.
Data is provided regarding validity.
Use and Usability
Given that there is no baseline data, it is not clear how the data can be used for public reporting. In addition, it has not been tested in different health systems. Given that there are already numerous surveys that a patient is sent following a visit, this survey may add to patient burden, and there is concern about response rates in general.
Summary
I think the use and usability is the biggest concern for this measure. Given low response rates for surveys in general, and the fact that patients are already asked to fill out many other surveys, data quality and representativeness will be a concern.
Summary Recommendation
Importance
This measure captures a highly valued outcome, respectful and person-centered contraceptive counseling, which aligns with national priorities around patient-centeredness. Evidence links positive counseling experiences with improved contraceptive continuation and reduced disengagement from care.
Closing Care Gaps
By surfacing systemic gaps in counseling quality, especially among racially and linguistically minoritized patients, the measure enables targeted improvement. The top-box scoring highlights suboptimal care and pushes systems toward universally high-quality interactions.
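For illustration only, a minimal sketch of how a top-box rate could be computed from four-item responses. It assumes each item is rated on a 1-5 scale with 5 as the highest rating; the actual scale, item wording, and scoring rules are defined by the PCCC-RS specification.

from typing import Sequence

TOP_SCORE = 5  # assumed maximum rating per item (hypothetical; see the PCCC-RS specification)

def is_top_box(responses: Sequence[int]) -> bool:
    # A respondent counts toward the numerator only when all four items
    # receive the highest possible rating.
    return len(responses) == 4 and all(r == TOP_SCORE for r in responses)

def top_box_rate(respondents: Sequence[Sequence[int]]) -> float:
    # Percent of respondents with complete four-item responses giving a top-box score.
    complete = [r for r in respondents if len(r) == 4]
    if not complete:
        return float("nan")
    return 100 * sum(is_top_box(r) for r in complete) / len(complete)

print(top_box_rate([[5, 5, 5, 5], [5, 4, 5, 5], [3, 5, 5, 5]]))  # 33.33...

In this toy example, only the first respondent rates all four items at the maximum, so the top-box rate is one in three.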
Feasibility Assessment
PCCC-RS is brief (four items), uses standard survey modes, and has been tested across multiple systems with minimal missing data. Implementation requires modest outreach infrastructure but no EHR integration.
Scientific Acceptability
Cronbach’s alpha exceeded 0.90 for state-level and 0.84 for health plan-level testing, indicating strong item consistency. Reliability at the entity level (ICC = 0.024) achieved adequate Spearman-Brown scores with a recommended sample size of 150. Testing aligned with PRE-PM standards and compared favorably with CAHPS benchmarks.
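For context, a minimal sketch of how the reported entity-level figures could relate, assuming the standard Spearman-Brown prophecy formula underlies the projection from the single-response ICC to entity-level reliability; the developer's exact computation may differ.

def spearman_brown(icc: float, n: int) -> float:
    # Project the reliability of an entity-level score averaged over n
    # respondents from the single-response intraclass correlation (ICC).
    return (n * icc) / (1 + (n - 1) * icc)

print(round(spearman_brown(icc=0.024, n=150), 2))  # 0.79

Under those assumptions, an ICC of 0.024 with 150 responses per entity projects to roughly 0.79, consistent with the "adequate" characterization above.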
The measure’s validity is supported through construct testing (r = 0.63–0.84) against related satisfaction measures and audio-coded counseling behaviors. A modified Delphi process and patient advisory input reinforced face validity.
Use and Usability
The measure supports public reporting, plan-level QI, and program accountability. Its six-month lookback makes it actionable at scale, while still grounded in patient recall. Advisory reviewers noted its usefulness in complementing existing contraceptive provision metrics to prevent coercive or biased care.
Summary
PCCC-RS is a well-validated, low-burden patient experience measure designed to assess the quality of contraceptive counseling. The measure demonstrates strong feasibility, reliability, and validity across health plan and regional levels. Its design and testing meet the PQM rubric criteria and support broad-scale use for quality improvement and public accountability.
Endorse
Importance
Agree with staff assessment.
Closing Care Gaps
Agree with staff assessment.
Feasibility Assessment
Agree with staff assessment.
Scientific Acceptability
Agree with staff assessment.
Agree with staff assessment.
Use and Usability
Agree with staff assessment.
Summary
I endorse this measure.
Summary Recommendation
Importance
There is a gap in optimal contraceptive counseling and this measure seeks to find a way to address this.
Closing Care Gaps
It is not very clear how this survey can be used to actually implement changes that address the gaps in care.
Feasibility Assessment
Appears to be an easily implemented short survey.
Scientific Acceptability
There is data provided to support reliability. There may be some recall bias, as the survey asks about care received in the past 6 months.
These data were collected at Planned Parenthood and at the plan level at the Washington Health Care Authority. There was no data collection from other primary care settings or from other reproductive health settings.
Use and Usability
There is no provider-specific information being collected. How exactly will these data be used to change patient outcomes? What baseline will results be compared to?
Summary
This measure does not currently meet criteria. While this is an important gap in care to address, it is unclear how this measure will be used to actually address the issues.
Summary Recommendation
Importance
This measure is supported by a comprehensive literature review, including systematic reviews of high evidence quality, demonstrating a clear net benefit of improved contraceptive counseling experiences for outcomes such as contraceptive continuation at six months and continued engagement with the reproductive health care system for pregnancy, birth, and other reproductive health needs.
The PCCC-RS is a balancing measure that provides the ability to monitor contraceptive counseling at region/state population and health plan levels, which is particularly critical given the use of contraceptive provision performance measures (CBE #2903 and #2904) at these levels, which has the potential to incentivize coercive contraceptive practices.
Closing Care Gaps
The developer provides a descriptive breakdown of scores by demographic subgroups within each Planned Parenthood Federation of America (PPFA) affiliate. Results among PPFA affiliates suggest that younger patients and those from minoritized racial and ethnic groups have lower odds of experiencing high-quality contraceptive care.
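As an illustration of the kind of subgroup analysis described above, a hypothetical sketch of a logistic regression estimating odds ratios of a top-box score by age group. It uses synthetic placeholder data and does not reproduce the developer's analysis or any PPFA results.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic placeholder data; a real analysis would use respondent-level
# PCCC-RS results with actual demographic fields.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age_group": rng.choice(["18-24", "25-34", "35-44"], size=n),
    "top_box": rng.integers(0, 2, size=n),
})

# Logistic regression of the top-box indicator on age group;
# exponentiated coefficients are odds ratios relative to the reference category.
model = smf.logit("top_box ~ C(age_group)", data=df).fit(disp=0)
print(np.exp(model.params))

With real data, additional subgroup indicators (e.g., race/ethnicity) and affiliate or plan identifiers could be added to the model formula.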
The developer successfully linked measure results to closing gaps in care.
Feasibility Assessment
Partner organizations’ iterative input on project feasibility and existing survey practices guided the development of a standard but adaptable workflow for survey implementation, which all participating entities used in their collection efforts. The resulting process consisted of administering the survey to patients who received contraceptive care in the last six months. Implementation costs, burdens, and barriers were identified in regular check-ins with these partners over their collection period.
However, there was no information on the amount of time and burden required to incorporate these workflows.
Scientific Acceptability
The data element reliability results and face validity results were strong.
The developers do not think risk adjustment is needed and provided a rationale that differences in scores by demographic groups would represent true differences in patient-centeredness.
Use and Usability
The measure is not currently in use, but the developer plans to submit it to a public reporting program.
Summary
Overall, the measure focus is important, but there are concerns with feasibility.
Clearer guidance on the actions that should follow measure results would help usability.
Importance
The importance of this measure is supported by the background information submitted by the developer.
Closing Care Gaps
Improvements are needed in contraceptive counseling.
Feasibility Assessment
No issues with feasibility.
Scientific Acceptability
Reliability met.
Validity met.
Use and Usability
More information is needed about how the use of survey results should be aligned with actions that will improve contraceptive counseling and care delivery.
Summary
Usability would be greatly enhanced if clearer care improvement actions that should follow measure results were provided for users.
Public Comments
Comment in support of endorsing the retrospective PCCC
Upstream appreciates the opportunity to comment on CBE #4825: The percent of contraceptive care patients giving “top box” scores on a PRE-PM focused on quality of contraceptive care (the Person-Centered Contraceptive Counseling [PCCC] measure), within a 6-month lookback period. We support the endorsement of this measure as it facilitates assessment of the patient experience of contraceptive counseling, which is essential information to incorporate into any analysis of contraceptive service provision.
Upstream is a nonprofit organization working to expand access to contraception by providing high-quality, patient-centered training and technical assistance to healthcare organizations. Upstream integrated the visit-specific PCCC into our patient survey instrument in 2019. Since that time, we have administered our virtual patient survey in over 100 healthcare facility settings. Leveraging the PCCC enables us to take a multi-dimensional approach to analyzing contraceptive service provision, and, most importantly, it provides a framework for understanding the patient experience related to contraceptive counseling and identifying areas of potential improvement.
We are fortunate to have the infrastructure to administer a virtual survey in real time with patients who visit our partnering health centers, but many healthcare organizations, health plans, or states do not have this capability. The retrospective PCCC could be used in these settings and would enable analyses among larger aggregated groups of patients. Patient experience on its own is incredibly valuable to measure and incorporate into care quality initiatives. In addition, this measure can be used in conjunction with the available claims-based (CBE #2902, #2903, #2904) or electronic (CBE #4655e, #3699e, and #3682e) clinical quality measures focused on contraceptive need screening and contraceptive service provision to provide a holistic view of patient access to contraceptive services.
Comment in support of PCCC-RS endorsement
Washington State Health Care Authority (HCA) greatly appreciates the opportunity to provide support and public comment for the Person-Centered Contraceptive Counseling (PCCC-RS) Retrospective Survey (CBE ID 4825).
HCA provides reproductive health services through a variety of federal/state programs and related authorities (e.g., the Apple Health State Plan, the Family Planning Only 1115 Waiver, etc.) and health care delivery models (e.g., Medicaid Managed Care (MMC), Fee-For-Service (FFS), or Primary Care Case Management (PCCM)). Medicaid programs, administered with flexibility at the state level, offer a range of strategies to improve access to contraception, including eligibility, coverage, payment, quality improvement, and quality assurance initiatives. These programmatic levers have their limitations and may propagate unintended consequences that undervalue patient voice and culturally appropriate and/or trauma-informed care. Recent contraceptive initiatives (e.g., postpartum LARC insertion and the ‘tiered effectiveness’ model of contraceptive counseling) are problematic or potentially unethical, as they prioritize contraceptive effectiveness over individual reproductive/sexual autonomy and reproductive goals.
Moreover, current federally mandated contraceptive metrics were designed as a proxy for measuring patient access to most or moderately effective prescription contraceptive methods (e.g., the HEDIS metrics CCW, Contraceptive Care for Women, and CCP, Contraceptive Care for Postpartum Women). These metrics prioritize certain contraceptive methods over individual patient preference and lived experience. In contrast, the proposed PCCC-RS (CBE ID 4825) is a measure of patient experience and is intended to capture whether contraceptive counseling, contraceptive method education, and decision making are person-centered, that is, focused on the patient’s own needs, values, and preferences.
Throughout 2024, HCA administered the PCCC-RS to clients enrolled in our Family Planning Only (FPO) waiver (Washington State Institutional Review Board Project #2024-006). This research pilot has allowed HCA to better understand the feasibility of a survey administration process and explore the application of the results at a state agency level, which we believe has unique impactful differences and implications than current federally mandated contraceptive metrics.
HCA was encouraged by our implementation and initial results in 2024 and will continue to collect data, establish a baseline, and assess trends over time. We will partner with key stakeholders/partners to share results and analyses on the aggregate and scale-item levels. Patient-centered care is an important domain of high-quality care and health plan organizations should incorporate patient experience in addition to available claims-based/electronic clinical quality measures.