The measure calculates the percentage of assisted living (AL) residents, defined as those living in the facility for two weeks or more, who are satisfied. This patient-reported outcome measure is based on the CoreQ: AL Resident Satisfaction questionnaire, a four-item instrument.
Measure Specs
- General Information
- Numerator
- Denominator
- Exclusions
- Measure Calculation
- Supplemental Attachment
- Point of Contact
General Information
Collecting satisfaction information from Assisted Living (AL) residents and family members is more important now than ever. We have seen a philosophical change in healthcare that now includes the patient and their preferences as an integral part of the system of care. The Institute of Medicine (IOM) endorses this change by placing the patient at the center of the care system (IOM, 2001). For this shift to person-centered care to succeed, we have to be able to measure patient satisfaction, for three reasons:
(1) Measuring satisfaction is necessary to understand patient preferences.
(2) Measuring and reporting satisfaction with care helps patients and their families choose and trust a health care facility.
(3) Satisfaction information can help facilities improve the quality of care they provide.
The implementation of person-centered care in long-term care has already begun, but there is still room for improvement. The Centers for Medicare and Medicaid Services (CMS) demonstrated interest in consumers’ perspective on quality of care by supporting the development of the Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey for patients in nursing facilities (Sangl et al., 2007). We have developed three skilled nursing facility (SNF) and two assisted living CoreQ measures, all five of which are endorsed by a consensus-based entity (the National Quality Forum, NQF, at the time of endorsement).
Further supporting person-centered care and resident satisfaction are ongoing organizational change initiatives. These include: the Center for Excellence in Assisted Living (CEAL), which has developed a measure of the person-centeredness of assisted living with the University of North Carolina at Chapel Hill; the Advancing Excellence in America’s Nursing Homes campaign (2006), which lists person-centered care as one of its goals; Action Pact, Inc., which provides workshops and consultations with long-term care facilities on how to be more person-centered through their physical environment and organizational structure; and the Eden Alternative, which uses education, consultation, and outreach to further person-centered care in long-term care facilities. All of these initiatives have identified the measurement of resident satisfaction as an essential part of making, evaluating, and sustaining effective clinical and organizational changes that ultimately result in a person-centered philosophy of care.
The importance of measuring resident satisfaction as part of quality improvement cannot be stressed enough. Quality improvement initiatives, such as total quality management (TQM) and continuous quality improvement (CQI), emphasize meeting or exceeding “customer” expectations. W. Edwards Deming, one of the first proponents of quality improvement, noted that “one of the five hallmarks of a quality organization is knowing your customer’s needs and expectations and working to meet or exceed them” (Deming, 1986). Measuring resident satisfaction can help organizations identify deficiencies that other quality metrics may struggle to detect, such as problems in communication between a patient and the provider.
As part of the U.S. Department of Commerce’s renowned Baldrige Criteria for organizational excellence, applicants are assessed on their ability to describe the links between their mission, key customers, and strategic position. Applicants are also required to show evidence of successful improvements resulting from their performance improvement system. An essential component of this process is the measurement of customer, or resident, satisfaction (Shook & Chenoweth, 2012).
The CoreQ: AL Resident Satisfaction questionnaire and measure can strategically help AL facilities achieve organizational excellence and provide high-quality care by serving as a tool that targets a unique and growing patient population. Moreover, improving care for AL residents is attainable. A review of the literature on satisfaction surveys in long-term care facilities (Castle, 2007) concluded that substantial improvements in resident satisfaction could be made in many facilities by improving care (i.e., changing either structural or process aspects of care). This conclusion was based on satisfaction scores ranging from 60 to 80% on average (with 100% as the maximum score).
It is worth noting that few other generalizations can be made, because existing instruments used to collect satisfaction information are not standardized (except CoreQ). Thus, benchmarking scores and comparison scores (i.e., best in class) are difficult to establish. The CoreQ: AL Resident Satisfaction Measure has considerable relevance here: benchmark and comparison scores are available for CoreQ, drawn from tens of thousands of returned surveys.
We developed three skilled nursing facility (SNF) based CoreQ measures: the CoreQ: Long-Stay Family Satisfaction Measure, the CoreQ: Long-Stay Resident Satisfaction Measure, and the CoreQ: Short-Stay Discharge Measure. All three received NQF endorsement in 2016. The assisted living CoreQ Resident and Family Satisfaction Measures then received NQF endorsement in 2019. Together, these five satisfaction measures enable providers, researchers, and regulators to measure satisfaction across the long-term care continuum with valid and reliable instruments.
The measure’s relevance is furthered by recent federal legislative actions. The Affordable Care Act of 2010 requires the Secretary of Health and Human Services (HHS) to implement a Quality Assurance & Performance Improvement (QAPI) program within nursing facilities. This means all nursing facilities have increased accountability for continuous quality improvement efforts. CMS’s “QAPI at a Glance” document references customer-satisfaction surveys and organizations using them to identify opportunities for improvement. Some AL communities have implemented QAPI in their organizations.
Lastly, in CMS’s National Quality Strategy (2024), one of the four key areas is advancing equity and engagement for all individuals. Specifically, CMS calls out expanding the use of person-reported outcome and experience measures as a key action. Similarly, in the most recent SNF payment rule (CMS, August 2024), CMS acknowledges an opportunity to add patient experience or satisfaction measures to the Quality Reporting Program (QRP), which spans post-acute and long-term care providers and was created by the IMPACT Act of 2014. While CMS does not provide direct oversight of assisted living, more states are covering assisted living as part of home- and community-based Medicaid waivers. As of 2020, 44% of assisted living communities were Medicaid certified (CDC, 2020). Thus, the principles of CMS’s National Quality Strategy apply, and the CoreQ: AL Resident Satisfaction measure can further CMS’s quality efforts.
Castle, N.G. (2007). A literature review of satisfaction instruments used in long-term care settings. Journal of Aging and Social Policy, 19(2), 9-42.
CDC (2020). National Post-Acute and Long-Term Care Study. https://www.cdc.gov/nchs/npals/webtables/overview.htm
CMS (2009). Skilled Nursing Facilities Non Swing Bed - Medicare National Summary. http://www.cms.hhs.gov/MedicareFeeforSvcPartsAB/Downloads/NationalSum2007.pdf
CMS, University of Minnesota, and Stratis Health. QAPI at a Glance: A step by step guide to implementing quality assurance and performance improvement (QAPI) in your nursing home. https://www.cms.gov/Medicare/Provider-Enrollment-and-Certification/QAPI/Downloads/QAPIAtaGlance.pdf
CMS (April 2024). Quality in Motion: Acting on CMS National Quality Strategy. https://www.cms.gov/files/document/quality-motion-cms-national-quality-strategy.pdf
CMS (August 6, 2024). Medicare Program; Prospective Payment System and Consolidated Billing for Skilled Nursing Facilities; Updates to the Quality Reporting Program and Value-Based Purchasing Program for Federal Fiscal Year 2025. https://www.federalregister.gov/d/2024-16907/p-588
Deming, W.E. (1986). Out of the Crisis. Cambridge, MA: Massachusetts Institute of Technology, Center for Advanced Engineering Study.
Institute of Medicine (2001). Improving the Quality of Long Term Care. Washington, DC: National Academy Press.
MedPAC (2015). Report to the Congress: Medicare Payment Policy. http://www.medpac.gov/documents/reports/mar2015_entirereport_revised.pdf?sfvrsn=0
Sangl, J., Bernard, S., Buchanan, J., Keller, S., Mitchell, N., Castle, N.G., Cosenza, C., Brown, J., Sekscenski, E., & Larwood, D. (2007). The development of a CAHPS instrument for nursing home residents. Journal of Aging and Social Policy, 19(2), 63-82.
Shook, J., & Chenoweth, J. (2012, October). 100 Top Hospitals CEO Insights: Adoption Rates of Select Baldrige Award Practices and Processes. Truven Health Analytics. http://www.nist.gov/baldrige/upload/100-Top-Hosp-CEO-Insights-RB-final.pdf
The collection instrument is the CoreQ: AL Resident Satisfaction Questionnaire; exclusions are identified from facility health information systems.
Numerator
The numerator is the number of residents in the facility who have an average satisfaction score of 3 or greater across the four questions on the CoreQ: AL Resident Satisfaction questionnaire.
A specific date is chosen. On that date, all residents in the facility are identified, and data are then collected from all residents meeting the eligibility criteria on that date. Residents are given a maximum two-month window to complete the survey. While the frequency with which the questionnaire is administered is left up to the provider, the CoreQ questionnaire should be administered at least once a year. Only surveys returned within two months of the resident initially receiving the survey are included in the calculation.
The numerator includes all AL residents who had an average response greater than or equal to 3 on the CoreQ: AL Resident Satisfaction Questionnaire, who do not meet any of the denominator exclusions, and who are not missing responses for two or more questions.
The calculation of an individual patient’s average satisfaction score is done in the following manner:
• Respondents within the appropriate time window and who do not meet the exclusions (See: S.8) are identified.
• A numeric score is associated with each response scale option on the CoreQ: AL Resident Satisfaction Questionnaire (that is, Poor=1, Average=2, Good=3, Very Good=4, and Excellent=5).
• The following formula is utilized to calculate the individual’s average satisfaction score. [Numeric Score Question 1 + Numeric Score Question 2 + Numeric Score Question 3 + Numeric Score Question 4]/4
• The number of respondents whose average satisfaction score is greater than or equal to 3 are summed together and function as the numerator.
For residents with one missing data point (out of the 4 items included in the questionnaire), imputation is used: the imputed value is the average of the other three available responses. Residents with more than one missing data point are not counted in the measure (i.e., no imputation is used for these residents, since their responses are excluded).
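To make the scoring and imputation rules concrete, here is a minimal sketch in Python. It is illustrative only; the data layout and the function name are assumptions, not part of the measure specification.

```python
# Sketch of the resident-level scoring rule described above.
# Assumed input: a list of four values, each an integer 1-5
# (Poor=1 ... Excellent=5) or None when the item was skipped.

def resident_average_score(responses):
    """Return the resident's average satisfaction score,
    or None when the survey is unusable (2+ items missing)."""
    if len(responses) != 4:
        raise ValueError("the CoreQ: AL questionnaire has exactly 4 items")
    answered = [r for r in responses if r is not None]
    if len(answered) < 3:        # two or more missing items: excluded
        return None
    if len(answered) == 3:       # exactly one missing item: impute the
        answered.append(sum(answered) / 3)  # average of the other three
    return sum(answered) / 4

print(resident_average_score([5, 4, 4, 3]))        # 4.0, counts in numerator
print(resident_average_score([5, None, 4, 3]))     # 4.0, one item imputed
print(resident_average_score([5, None, None, 3]))  # None, survey unusable
```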
Denominator
The denominator includes all residents who have been in the AL facility for two weeks or more, regardless of payer status, and who received the CoreQ: AL Resident Satisfaction Questionnaire.
Residents have up to 2 months to complete and return the survey. The length of stay is identified from AL facility records.
Exclusions
Exclusions made at the time of sample selection are the following: (1) residents who have poor cognition (described below in 1.15c); (2) residents receiving hospice; (3) residents with a court-appointed legal guardian; and (4) residents who have lived in the AL facility for less than two weeks. Additionally, once the survey is administered, the following exclusions are applied: (a) surveys received outside of the time window (more than two months after the administration date); (b) surveys that have more than one questionnaire item missing; and (c) surveys from residents who indicate that someone else answered the questions for them. (Note: this does not include cases where the resident merely had help, such as having the questions read aloud or their responses written down.)
Individuals are excluded based on information from facility records.
(1) Residents who have poor cognition: The Brief Interview for Mental Status (BIMS), a well-validated cognitive assessment tool, is used. BIMS score ranges are 0-7 (lowest), 8-12, and 13-15 (highest); residents with BIMS scores of 7 or less are excluded. Alternatively, a Mini-Mental State Exam (MMSE) score of 12 or lower may be used. (Note: we understand that some AL communities may not have information on cognitive function. We suggest administering the survey to all AL residents and assuming that those with cognitive impairment will not complete the survey, or will have someone else complete it on their behalf; in either case they will be excluded from the analysis. The main impact of including all residents with any level of cognitive impairment is a drop in the response rate, which for smaller communities can result in their not having a reportable measure (see the response rate exclusion discussed later) (Saliba et al., 2012).)
(2) Residents receiving or having received any hospice care. This is recorded in facility health information systems. This exclusion is consistent with other CMS CAHPS surveys.
(3) Residents with a court-appointed legal guardian for all decisions, identified from facility health information systems.
(4) Residents who have lived in the AL facility for less than two weeks, identified from facility health information systems.
(5) Residents who respond after the 2-month response period.
(6) Residents whose responses were completed by someone other than the resident, identified from an additional question on the CoreQ: AL Resident Satisfaction questionnaire. (A separate CoreQ: Family Satisfaction questionnaire has been developed for family members to respond to.)
(7) Residents without usable data (defined as missing data for 2 or more questions on the survey).
Saliba, D., Buchanan, J., Edelen, M.O., Streim, J., Ouslander, J., Berlowitz, D., & Chodosh, J. (2012). MDS 3.0: Brief Interview for Mental Status. Journal of the American Medical Directors Association, 13(7), 611-617.
Measure Calculation
1. Identify the residents that have been residing in the AL facility for two weeks or more.
2. Take the residents that have been residing in the AL facility for greater than or equal to two weeks and exclude the following:
- Residents who have poor cognition.
- Residents receiving or having received any hospice care; this is recorded in facility health information systems.
- Residents with a court-appointed legal guardian for all decisions, identified from facility health information systems.
3. Administer the CoreQ: AL Resident Satisfaction questionnaire to these individuals. The questionnaire should be administered to all residents in the facility remaining after the exclusions in step 2 above. Communicate to residents that surveys received up to two months from administration will be included. Providers should use follow-up to increase response rates.
4. Create a tracking sheet with the following columns:
- Date Administered
- Date Response Received
- Time to Receive Response ([Date Response Received – Date Administered])
5. Exclude any surveys received after 2 months from administration.
6. Exclude responses not completed by the intended recipient (e.g., questions answered by a friend or family member). (Note: this does not include cases where the resident merely had help, such as having the questions read aloud or their responses written down.)
7. Exclude responses that are missing data for 2 or more of the CoreQ questions.
8. All of the remaining surveys are totaled and become the denominator.
9. Combine the CoreQ: AL Resident Satisfaction questionnaire items to calculate a resident level score. Responses for each item should be given the following scores:
- Poor = 1,
- Average = 2,
- Good = 3,
- Very Good =4 and
- Excellent = 5.
10. Impute missing data if only one of the four questions is missing a response.
11. Calculate resident score from usable surveys.
- Resident score = (Score for Item 1 + Score for Item 2 + Score for Item 3 + Score for Item 4) / 4.
- For example, a resident rates their satisfaction on the four CoreQ questions as excellent = 5, very good = 4, very good = 4, and good = 3. The resident’s total score is 5 + 4 + 4 + 3 = 16. Dividing the total (16) by the number of questions (4) gives 4.0, so the resident’s average satisfaction rating is 4.0. Since this score is ≥3.0, the resident is counted in the numerator.
- Flag those residents with a score equal to or greater than 3.0. These residents will be included in the numerator.
12. Calculate the CoreQ: AL Resident Satisfaction Measure which represents the percent of residents with average scores of 3.0 or above. CoreQ: AL Resident Satisfaction Measure= ([number of respondents with an average score of ≥3.0] / [total number of respondents])*100.
13. No risk-adjustment is used.
No stratification is used.
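As an illustration of steps 8 through 12, here is a brief Python sketch. It assumes resident-level average scores have already been computed for the usable surveys (for example, with the resident_average_score sketch shown in the Numerator section); it is not part of the official specification.

```python
# Sketch of steps 8-12: aggregate usable resident scores into the
# facility-level measure. `usable_scores` is assumed to hold the average
# satisfaction score of every usable returned survey, with exclusions
# and the 2-month window already applied (steps 1-7).

def coreq_al_resident_measure(usable_scores):
    """Percent of respondents with an average score of 3.0 or above."""
    if not usable_scores:
        return None                                        # nothing to report
    numerator = sum(1 for s in usable_scores if s >= 3.0)  # satisfied residents
    denominator = len(usable_scores)                       # all usable surveys
    return 100.0 * numerator / denominator

# Example: 4 usable surveys, 3 with scores of 3.0 or above -> 75.0
print(coreq_al_resident_measure([4.0, 3.25, 2.5, 3.0]))
```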
1. Administer the CoreQ: AL Resident Satisfaction questionnaire to AL residents who have resided in the AL facility for greater than or equal to two weeks and who do not fall into one of the following exclusions:
- Residents who have poor cognition; recorded in the facility health information system.
- Residents receiving or having received any hospice. This is recorded in the facility health information system.
- Residents with a court-appointed legal guardian for all decisions, identified from the facility health information system.
2. Administer the CoreQ: AL Resident Satisfaction questionnaire to residents.
3. Instruct residents that they must respond to the survey within 2 months.
4. The response rate is calculated based on the number of usable surveys returned divided by the number of surveys administered.
- As stated in S.14, surveys with missing responses for more than 1 question, surveys received outside of the time window (more than two months after the administration date), and surveys completed by someone other than the intended resident are excluded.
- A minimum response rate of 30% needs to be achieved for results to be reported for an AL.
5. Regardless of response rate, facilities must also achieve a minimum of 20 usable questionnaires (i.e., the denominator). If, after 2 months, fewer than 20 usable questionnaires have been received, a facility-level satisfaction measure is not reported.
6. All questionnaires that are received (other than those with more than one missing value, those returned after 2 months, or those completed by a person other than the intended resident) must be used in the calculations.
Saliba, D., Buchanan, J., Edelen, M.O., Streim, J., Ouslander, J., Berlowitz, D, & Chodosh J. (2012). MDS 3.0: brief interview for mental status. Journal of the American Medical Directors Association, 13(7): 611-617.
A minimum of 20 usable surveys and an overall response rate of 30% are needed for the measure to be reported.
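The two reporting thresholds can be combined into a single check, sketched below in Python; the function and variable names are illustrative assumptions, not part of the specification.

```python
# Sketch of the reporting rules: a facility-level score is reported only
# when the response rate is at least 30% AND at least 20 usable
# questionnaires were returned.

def is_reportable(n_usable, n_administered):
    """True when the facility meets both reporting thresholds."""
    if n_administered == 0:
        return False
    response_rate = n_usable / n_administered  # usable returns / surveys sent
    return response_rate >= 0.30 and n_usable >= 20

print(is_reportable(25, 60))   # True: ~42% response rate, 25 usable surveys
print(is_reportable(18, 40))   # False: 45% response rate but only 18 usable
print(is_reportable(24, 100))  # False: 24 usable but a 24% response rate
```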
Supplemental Attachment
Point of Contact
None
Valerie Williams
2 Massachusetts Avenue NE, Unit 77880
Washington, DC 20013
United States
Nicholas Castle
West Virginia University
P.O. Box 9190, 64 Medical Center Drive
Morgantown, WV 26506
United States
Importance
Evidence
The evidence summary and supporting references for this measure are provided in the General Information section above.
Measure Impact
The consumer movement has fostered the notion that patient evaluations should be an integral component of health care. Patient satisfaction, which is one form of patient evaluation, became an essential outcome of health care widely advocated for use by researchers and policy makers. Managed care organizations, accreditation and certification agencies, and advocates of quality improvement initiatives, among others, now promote the use of satisfaction surveys. For example, satisfaction information is included in the Health Plan Employer Data Information Set (HEDIS), which is used as a report card for managed care organizations (NCQA, 2016).
Measuring and improving patient satisfaction is valuable to patients, because it is a way forward on improving the patient-provider relationship, which influences health care outcomes. A 2014 systematic review and meta-analysis of randomized controlled trials, in which the patient-provider relationship was systematically manipulated and tracked with health care outcomes, found a small but statistically significant positive effect of the patient-provider relationship on health care outcomes (Kelly et al., 2014). This finding aligns with other studies that show a link between patient satisfaction and the following health-related behaviors:
1. Keeping follow-up appointments (Hall, Milburn, Roter, & Daltroy, 1998);
2. Disenrollment from health plans (Allen & Rogers, 1997); and,
3. Litigation against providers (Penchansky & Macnee, 1994).
The positive effects of person-centered care and patient satisfaction extend to AL facilities as well. A 2013 systematic review of studies on the effect of person-centered initiatives in long-term care facilities, such as the Eden Alternative, found person-centered care to be associated with psychosocial benefits for residents and staff, notwithstanding variations and limitations in study designs (Brownie & Nancarrow, 2013).
From the AL facility and provider perspective, there are numerous ways to improve patient satisfaction. One study found that conversations with family members regarding end-of-life care options improve overall satisfaction with care and increase the use of advance directives (Reinhardt et al., 2014). Another found an association between improving symptom management for long-term care residents with dementia and higher satisfaction with care (Van Uden et al., 2013). Improvements in a long-term care food delivery system were also associated with higher overall satisfaction and improved resident health (Crogan et al., 2013). The advantage of the CoreQ: AL Resident Satisfaction questionnaire is that it is broad enough to capture dissatisfaction with the various services provided and to signal to providers where to drill down and discover ways of improving the patient experience at their facility.
Specific to the CoreQ: AL questionnaire, the importance of the satisfaction areas assessed was examined with focus groups of residents and family members. The respondents were residents (N=40) in five AL facilities in the Pittsburgh region. Importance was rated on a scale where 10 = most important and 1 = least important. That the final three questions included in the measure had average scores ranging from 9.50 to 9.69 clearly shows that respondents value the items used in the CoreQ: AL measure.
Allen, H.M., & Rogers, W.H. (1997). The Consumer Health Plan Value Survey: Round two. Health Affairs, 16(4), 156-166.
Brownie, S. & Nancarrow, S. (2013). Effects of person-centered care on residents and staff in aged-care facilities: a systematic review. Clinical Interventions In Aging. 8:1-10.
Crogan, N.L., Dupler, A.E., Short, R., & Heaton, G. (2013). Food choice can improve nursing home resident meal service satisfaction and nutritional status. Journal of Gerontological Nursing. 39(5):38-45.
Hall, J., Milburn, M., Roter, D., & Daltroy, L. (1998). Why are sicker patients less satisfied with their medical care? Tests of two explanatory models. Health Psychology, 17(1), 70-75.
Kelley, J.M., Kraft-Todd, G., Schapira, L., Kossowsky, J., & Riess, H. (2014). The influence of the patient-clinician relationship on healthcare outcomes: A systematic review and meta-analysis of randomized controlled trials. PLoS One, 9(4), e94207.
Li, Y., Cai, X., Ye, Z., Glance, L.G., Harrington, C., & Mukamel, D.B. (2013). Satisfaction with Massachusetts nursing home care was generally high during 2005-09, with some variability across facilities. Health Affairs. 32(8):1416-25.
Lin, J., Hsiao, C.T., Glen, R., Pai, J.Y., & Zeng, S.H. (2014). Perceived service quality, perceived value, overall satisfaction and happiness of outlook for long-term care institution residents. Health Expectations. 17(3):311-20.
National Committee for Quality Assurance (NCQA) (2016). HEDIS Measures. http://www.ncqa.org/HEDISQualityMeasurement/HEDISMeasures.aspx. Accessed March 2016.
Penchansky, R., & Macnee, C. (1994). Initiation of medical malpractice suits: A conceptualization and test. Medical Care, 32(8), 813-831.
Reinhardt, J.P., Chichin, E., Posner, L., & Kassabian, S. (2014). Vital conversations with family in the nursing home: preparation for end-stage dementia care. Journal Of Social Work In End-Of-Life & Palliative Care. 10(2):112-26.
Van Uden, N., Van den Block, L., van der Steen, J.T., Onwuteaka-Philipsen, B.D., Vandervoort, A., Vander Stichele, R., & Deliens, L. (2013). Quality of dying of nursing home residents with dementia as judged by relatives. International Psychogeriatrics. 25(10):1697-707.
Performance Gap
The data were collected in 2023 and 2024; 511 facilities participated, with 17,482 surveys collected. The facilities were from across the US, and participation was voluntary. The scores and facilities used for the data below were calculated after the previously described resident exclusions were applied. In addition, scores were used only from facilities with 20 or more responses and a response rate of 30% or more.
| | Overall | Minimum | Decile_1 | Decile_2 | Decile_3 | Decile_4 | Decile_5 | Decile_6 | Decile_7 | Decile_8 | Decile_9 | Decile_10 | Maximum |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Mean Performance Score | 77.52 | 20 | 55 | 65 | 75 | 80 | 84 | 85 | 90 | 95 | 99 | 100 | 100 |
| N of Entities | 511 | 2 | 52 | 59 | 55 | 118 | 39 | 46 | 80 | 43 | 38 | 35 | 35 |
| N of Persons / Encounters / Episodes | 17,482 | 51 | 2,058 | 2,071 | 1,812 | 2,287 | 1,292 | 1,715 | 2,622 | 1,340 | 1,164 | 1,121 | 1,121 |
Equity
For all of the CoreQ surveys, we are examining scores for white and black residents. In nursing homes, overall scores for black residents are lower than those for white residents. However, we know that black residents are disproportionately cared for in lower-quality facilities, which may influence the overall scores. We are continuing to examine these data. In the AL data we received, very few (<2%) respondents were black. Thus, we are continuing to collect data from AL communities, seeking to oversample communities with more black residents.
Feasibility
All of the data elements used in data collection are used in normal facility operations. As part of the data collection for this maintenance application, instructions were sent to AL communities detailing the process of collecting the CoreQ surveys from residents. With the exception of cognitive status, all facilities had the needed information readily available.
In the data collected from the 511 recently participating facilities, missing data was rare. Of the 17,482 surveys received, imputation for one of the four question responses was used in 391 cases (2.2%). In addition, surveys that could not be used (i.e., those with 2 or more missing responses) accounted for 1.8% of returns (N=322).
Facilities have no data entry burden; they do, however, have a data collection burden. In work we have done with CMS for a different CoreQ survey (the NH discharge survey), the cost burden for the facility was calculated to be $2.80 per respondent. That calculation was based on a survey requiring more than 20 data elements, whereas only 4 are needed here, so the cost will likely be less than $2.80.
No barriers were encountered with the measure specifications. The measure calculation was sometimes confused with an average score; the CoreQ measure is a percentage of satisfied respondents, not an average. This is explained in the reports produced and in the technical manual.
All of the patient surveys are anonymous. In addition, scores are calculated only when 20 or more surveys are returned. Thus, patient confidentiality is protected.
No negative consequences to individuals or populations were identified during testing, and no unintended negative consequences have been reported since the implementation of the CoreQ: AL Resident Satisfaction questionnaire or the measure calculated from it. This is consistent with satisfaction surveys in nursing facilities generally. Many other satisfaction surveys are used in AL facilities with no reported unintended consequences to patients or their families.
There are no potentially serious physical, psychological, social, legal, or other risks for patients. However, in some cases the satisfaction questionnaire can highlight poor care for some dissatisfied patients, and this may make them further dissatisfied.
This is a maintenance application. As detailed above we have continued to collect CoreQ data to examine any changes in scores and implementation issues. No adjustment to the measure has occurred.
Proprietary Information
N/A
Scientific Acceptability
Testing Data
This is a maintenance application. The data used for NQF approval was collected in 2018 and the reliability, validity, and exclusions were reported. As detailed above we have continued to collect CoreQ data to examine any changes in scores and implementation issues. This data was collected in 2023 and 2024.
The 2018 testing and analysis included the following data sources (Table A below):
- Reliability and validity testing of the Pilot CoreQ: AL Resident Satisfaction questionnaire was examined using responses from 411 residents from a national sample of facilities.
- Validity testing of the Pilot CoreQ: AL Resident Satisfaction questionnaire was examined using responses from 100 residents from the Pittsburgh area.
- CoreQ: AL Resident Satisfaction measure was examined using 321 facilities and included responses from 12,553 residents. These facilities were located across multiple states.
- Resident-level sociodemographic (SDS) variables were examined using a sample of 3000 residents from a national sample of AL facilities. This included 205 facilities.
- In addition, the CoreQ: AL Resident Satisfaction measure was examined along with other outcome measures using a national sample of 483 facilities (with 29,799 residents).
More information is located in Table A: Information on Data Sources Utilized in Analyses in the 7.1 Supplement.
This is a maintenance application. The data used for NQF approval was collected in 2018, and the reliability, validity, and exclusions were reported at that time. As detailed below, several different sources of data were used for reliability and validity testing.
Resident Level of Analysis
Data was used from the CoreQ: AL Resident Satisfaction questionnaire. The questionnaire was administered to all residents (with the exclusions described in the Specification section). The testing and analysis included:
- The Pilot CoreQ: AL Resident Satisfaction questionnaire was examined using responses from 411 residents from a national sample of facilities.
- Validity testing of the Pilot CoreQ: AL Resident Satisfaction questionnaire was examined using responses from 100 residents from the Pittsburgh area.
- CoreQ: AL Resident Satisfaction measure was examined using 321 facilities and included responses from 12,553 residents. These facilities were located across multiple states.
- In addition, resident-level sociodemographic (SDS) variables were examined using a sample of 3000 residents from a national sample of AL facilities. This included 205 facilities.
[Note: Data source #5 above was used for facility level analyses, and is not included in the resident level of analysis]
The descriptive characteristics of the residents are given in the following table that includes information from all the data used (the education level and race information comes only from the sample described above with 3000 respondents, as this data was not collected for the other samples).
More information is located in Table B: Descriptive Characteristics of Residents Included in the Analysis (all samples pooled), which is attached in the 7.1 Supplement.
The analysis drew on five data sources; all measured entities were assisted living communities. Reliability and validity testing of the Pilot CoreQ: AL Resident Satisfaction questionnaire was examined using responses from 411 residents from a national sample of facilities. Validity testing of the Pilot CoreQ: AL Resident Satisfaction questionnaire was examined using responses from 100 residents from the Pittsburgh area. The CoreQ: AL Resident Satisfaction measure was examined using 321 facilities, with responses from 12,553 residents; these facilities were located across multiple states. Resident-level sociodemographic (SDS) variables were examined using a sample of 3,000 residents from a national sample of AL facilities, covering 205 facilities. In addition, the CoreQ: AL Resident Satisfaction measure was examined along with other outcome measures using a national sample of 483 facilities (with 29,799 residents).
The descriptive characteristics of the residents are given in the following table that includes information from all the data used (the education level and race information is derived from the sample described above with 3000 respondents, as this data was not collected for the other samples).
More information is located in the attachment in 7.1 Supplement.
Reliability
We measured reliability at (1) the data element level; (2) the person/questionnaire level; and (3) the measure (i.e., facility) level. More detail on each analysis follows.
(1) Data Element Level. To determine whether the CoreQ: AL Resident Satisfaction questionnaire data elements were repeatable (i.e., producing the same results a high proportion of the time when assessed in the same population in the same time period), we re-administered the questionnaire to residents 1 month after the submission of their first survey. The Pilot CoreQ: AL Resident Satisfaction questionnaire had responses from 100 residents; we re-administered the survey to all 100 residents (98 answered the repeat survey). The re-administered sample was a convenience sample, as respondents were residents of the Pittsburgh area (the location of the team testing the questionnaire). To measure agreement, we first calculated the distribution of responses by question in the original round of surveys and again in the follow-up surveys (they should be distributed similarly); second, we calculated the correlations between the original and follow-up responses by question (they should be highly correlated).
(2) Person/Questionnaire Level. Having tested whether the data elements matched between the pilot responses and the re-administered responses, we then examined whether the person-level results matched between the Pilot CoreQ: AL Resident Satisfaction questionnaire responses and their corresponding re-administered responses. In particular, we calculated the percent of the time that there was agreement between whether the pilot response was poor, average, good, very good, or excellent and whether the re-administered response was poor, average, good, very good, or excellent.
(3) Measure (Facility) Level. We measured the stability of the facility-level measure when the facility’s score is calculated using multiple “draws” from the same population. This measures how stable the facility’s score would be if the underlying residents are drawn from the same population but are subject to the kind of natural sampling variation that occurs over time. We did this with a bootstrap of 10,000 repetitions of the facility score calculation, and we present the percent of facility resamples where the facility score is within 1 percentage point, 3 percentage points, 5 percentage points, and 10 percentage points of the original score calculated on the Pilot CoreQ: AL Resident Satisfaction questionnaire sample. We also conducted a two-level signal-to-noise analysis, which identifies two sources of variability: that between ratees (facilities) and that for each ratee (respondents). No imputed values were used in the analysis, and only AL facilities with 20 or more responses were included.
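For readers who wish to reproduce the stability analysis, the following Python sketch illustrates the bootstrap approach. The 10,000-repetition design and the 1/3/5/10 percentage-point bands come from the description above; the data layout, seed, and function name are illustrative assumptions.

```python
# Sketch of the facility-level bootstrap: resample the facility's resident
# scores with replacement, recompute the measure each time, and tabulate
# how often the resampled score falls near the original score.
import random

def bootstrap_stability(scores, reps=10_000, seed=0):
    """scores: usable resident average scores for one facility."""
    rng = random.Random(seed)
    original = 100.0 * sum(s >= 3.0 for s in scores) / len(scores)
    bands = {1: 0, 3: 0, 5: 0, 10: 0}      # percentage-point tolerances
    for _ in range(reps):
        resample = [rng.choice(scores) for _ in scores]
        score = 100.0 * sum(s >= 3.0 for s in resample) / len(resample)
        for band in bands:
            if abs(score - original) <= band:
                bands[band] += 1
    return {band: count / reps for band, count in bands.items()}

# Example with 30 synthetic resident scores (a facility must have >= 20):
scores = [4.0, 3.5, 2.75, 5.0, 3.0] * 6
print(bootstrap_stability(scores))  # share of resamples within each band
```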
Data Element Level. Table 2a2.3.a shows the four CoreQ: AL Resident Satisfaction Questionnaire items and the responses per item for both the pilot survey of 100 residents and the re-administered survey of 98 residents. The response distributions in the pilot survey do not differ statistically significantly from those in the re-administered survey. This shows that the data elements were highly repeatable and produced the same results a high proportion of the time when assessing the same population in the same time period.
Table 2a2.3.b shows the average of the percent agreement from the first survey score to the second survey score for each item in the CoreQ: AL Resident Satisfaction questionnaire. This shows very high levels of agreement.
- Person/Questionnaire Level. Having tested whether the data elements matched between the pilot responses and the re-administered responses, we then examined whether the person-level results matched between the Pilot CoreQ: AL Resident Satisfaction Questionnaire responses and their corresponding re-administered responses. In particular, we calculated the percent of the time that there was agreement between whether the pilot response was poor, average, good, very good, or excellent and whether the re-administered response was poor, average, good, very good, or excellent. Table 2a2.3.c shows the CoreQ: AL Resident Satisfaction Questionnaire items and the agreement in responses per item for the pilot survey of 100 residents compared with the re-administered survey of 98 residents. The person-level responses in the pilot survey do not differ statistically significantly from those in the re-administered survey. This shows that a high percent of the time there was agreement between the pilot response category and the re-administered response category.
- Measure (Facility) Level. After the 10,000-repetition bootstrap was performed, 21% of bootstrap repetition scores were within 1 percentage point of the score under the original pilot sample, 33% were within 3 percentage points, 65% were within 5 percentage points, and 95% were within 10 percentage points. For the two-level signal-to-noise analysis for AL residents, R=0.84 (this result is the mean), indicating that 84% of the variance in facility scores can be attributed to true differences between facilities, with the remaining 16% due to noise and differences among respondents. This result exceeds what is generally considered a good reliability coefficient of 0.8 (Campbell et al., 2010).
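The two-level signal-to-noise statistic can be read as the share of score variance attributable to true between-facility differences. The sketch below estimates variance components with a one-way ANOVA method-of-moments approach and evaluates reliability at the average facility size; this is an illustration under stated assumptions, and the published analysis may have used a different estimator.

```python
# Hedged sketch of a two-level signal-to-noise reliability estimate:
# between-facility variance divided by (between-facility variance plus
# within-facility variance scaled by respondents per facility).
import statistics

def signal_to_noise(facility_scores):
    """facility_scores: dict mapping facility id -> list of resident scores."""
    groups = list(facility_scores.values())
    k = len(groups)                                   # number of facilities
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g)
                    for g in groups)
    ms_between = ss_between / (k - 1)                 # between-facility MS
    ms_within = ss_within / (n_total - k)             # within-facility MS
    n_bar = n_total / k                               # average facility size
    var_between = max((ms_between - ms_within) / n_bar, 0.0)
    return var_between / (var_between + ms_within / n_bar)
```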
In summary, the measure displays a high degree of element-level, questionnaire-level, and measure (facility)-level reliability. First, the CoreQ: AL Resident Satisfaction questionnaire data elements were highly repeatable, with pilot and re-administered responses agreeing between 95% and 100% of the time, depending on the question. That is, the questionnaire produced the same results a high proportion of the time when assessed in the same population in the same time period. Second, the questionnaire-level scores were also highly repeatable, with pilot and re-administered responses agreeing 98% of the time. Third, a facility drawing residents from the same underlying population only varied modestly: the 10,000-repetition bootstrap results showed that the CoreQ: AL Resident Satisfaction measure scores from the same facility are very stable.
4.2.3a- Table 2
This information cannot be provided because this was not conducted in the initial testing.
Campbell, J.A., Narayanan, A., Burford, B., & Greco, M.J. (2010). Validation of a multi-source feedback tool for use in general practice. Education in Primary Care, 21, 165-179.
| | Overall | Minimum | Decile_1 | Decile_2 | Decile_3 | Decile_4 | Decile_5 | Decile_6 | Decile_7 | Decile_8 | Decile_9 | Decile_10 | Maximum |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Reliability | 0.84 | | | | | | | | | | | | |
| Mean Performance Score | | | | | | | | | | | | | |
| N of Entities | | | | | | | | | | | | | |
| N of Persons / Encounters / Episodes | 411 | | | | | | | | | | | | |
Validity
In the development of the CoreQ: AL Resident Satisfaction questionnaire, four sources of data were used to perform three levels of validity testing; each is described further below. The first source of data (convenience sampling) was used in developing and choosing the format to be utilized in the CoreQ: AL Resident Satisfaction questionnaire (i.e., the response scale). The second source of data was pilot data collected from 411 residents (described below); this data was used in choosing the items for the CoreQ: AL Resident Satisfaction Questionnaire. The third source of data, collected from 321 facilities (n=12,553), was used to examine the validity of the CoreQ: AL Resident Satisfaction Measure (i.e., facility and summary score validity). An additional source of data (collected from 483 facilities, described in Section 1.5) was used to examine the correlations between the CoreQ: AL Resident Satisfaction measure scores and other quality metrics from the facilities.
Thus, the following sections describe this validity testing:
1. Validity testing of the questionnaire format used in the CoreQ: AL Resident Satisfaction Questionnaire;
2. Testing the items for the CoreQ: AL Resident Satisfaction Questionnaire;
3. Determining whether a subset of items could reliably be used to produce an overall indicator of satisfaction (the CoreQ: AL Resident Satisfaction Measure);
4. Validity testing for the CoreQ: AL Resident Satisfaction measure.
In summary, the overall intent of these analyses was to determine if a subset of items could reliably be used to produce an overall indicator of satisfaction for AL residents.
1. Validity Testing for the Questionnaire Format used in the CoreQ: AL Resident Satisfaction Questionnaire
A. The face validity of the domains used in the CoreQ: AL Resident Satisfaction questionnaire was evaluated via a literature review. The literature review was conducted to examine important areas of satisfaction for long-term care residents. The research team examined 12 commonly used satisfaction surveys and reports to determine the most valued satisfaction domains. These surveys were identified by completing internet searches in PubMed and Google. Key terms that were searched included “resident satisfaction, long-term care satisfaction, assisted living satisfaction, and elderly satisfaction”.
B. The face validity of the domains was also examined using residents. The overall ranking used was 1=Most important and 22=Least important. The respondents were residents (N=40) in five AL facilities in the Pittsburgh region.
C. The face validity of the Pilot CoreQ: AL Resident Satisfaction questionnaire response scale was also examined. The respondents were residents (N=40) in five AL facilities in the Pittsburgh region. The percent of respondents who stated they “fully understood” how the response scale worked, who could complete the scale, and who understood the scale during cognitive testing was recorded.
D. The Flesch-Kincaid reading-level scale (Streiner & Norman, 1995) was used to assess whether respondents could readily understand the questions being asked.
2. Testing the Items for the CoreQ: AL Resident Satisfaction Questionnaire
The analyses above were performed to provide validity information on the format of the CoreQ: AL Resident Satisfaction questionnaire (i.e., domains and format). A second series of validity testing was used to identify the items that should be included in the CoreQ: AL Resident Satisfaction Questionnaire. This analysis was important, as all items in a satisfaction measure should have adequate psychometric properties (such as low floor or ceiling effects). For this testing, a 20-item pilot version of the CoreQ: AL Resident Satisfaction questionnaire was administered (N=411 residents). The testing consisted of:
A. Examining the Pilot CoreQ: AL Resident Satisfaction Questionnaire items’ performance with respect to the distribution of the response scale and with respect to missing responses.
B. The intent of the pilot instrument was to have items that represented the most important areas of satisfaction (as identified above) and to be parsimonious. Additional analyses were used to eliminate items in the pilot instrument. More specifically, analyses such as exploratory factor analysis (EFA) were used to further refine the pilot instrument. This was an iterative process that included using Eigenvalues from the principal factors (unrotated) and correlation analysis of the individual items.
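As a rough illustration of the eigenvalue screening described in item B, the sketch below computes eigenvalues from the inter-item correlation matrix. Note that true principal-factor EFA adjusts the diagonal of the correlation matrix for communalities; this simplified version uses the raw correlation matrix, so it approximates rather than reproduces the published analysis.

```python
# Sketch: eigenvalues of the inter-item correlation matrix. A first
# eigenvalue that dwarfs the second (e.g., 10.93 vs. 0.710 in the pilot
# data) supports treating the items as one "satisfaction" construct.
import numpy as np

def item_eigenvalues(responses):
    """responses: (n_residents, n_items) array of 1-5 item scores."""
    corr = np.corrcoef(responses, rowvar=False)  # inter-item correlations
    return np.linalg.eigvalsh(corr)[::-1]        # largest eigenvalue first
```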
3. Determine if a Sub-Set of Items Could Reliably be used to Produce an Overall Indicator of Satisfaction (The CoreQ: AL Resident Satisfaction measure).
The CoreQ: AL Resident Satisfaction questionnaire is meant to represent overall satisfaction with as few items as possible. The testing given below describes how this was achieved.
A. To support the construct validity (i.e., that the CoreQ items measure a single concept of “satisfaction”), we performed a correlation analysis using all items in the instrument.
B. In addition, a factor analysis was conducted using all items in the instrument. Starting from the global item Q1 (“How satisfied are you with the facility?”), the Cronbach’s alpha of adding the “best” additional item was explored.
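Cronbach’s alpha itself is straightforward to compute. Below is a minimal Python sketch assuming item responses are held in a residents-by-items array; it is illustrative and not the original analysis code.

```python
# Sketch of Cronbach's alpha: internal consistency of a set of items,
# with 0.7 or higher conventionally treated as acceptable (as noted above).
import numpy as np

def cronbach_alpha(items):
    """items: (n_residents, n_items) array of 1-5 item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```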
4. Validity Testing for the Core Q: AL Resident Measure.
The overall intent of the analyses described above was to identify if a sub-set of items could reliably be used to produce an overall indicator of satisfaction, the CoreQ: AL Resident Satisfaction questionnaire. Further testing was conducted to determine if the 4 items in the CoreQ: AL Resident Satisfaction questionnaire were a reliable indicator of satisfaction.
A. To determine whether the 4 items in the CoreQ: AL Resident Satisfaction questionnaire were a reliable indicator of satisfaction, the correlation between these four items in the CoreQ: AL Resident Satisfaction Measure and all of the items on the pilot CoreQ instrument was examined.
B. We performed additional validity testing of the facility-level CoreQ: AL Resident measure by measuring the correlations between the CoreQ: AL Resident Satisfaction measure scores and other quality metrics from the facilities. If the CoreQ AL Resident scores correlate negatively with the measures that decrease as they get better, and positively with the measures that increase as they get better, then this supports the validity of the CoreQ AL Resident measure.
Secondary data from AL settings are rare. As part of our validity testing, staff stability and turnover information was collected; these had a high correlation (>0.4) with the CoreQ score.
Reference: Streiner, D.L., & Norman, G.R. (1995). Health measurement scales: A practical guide to their development and use (2nd ed.). New York: Oxford University Press.
Validity Testing for the Questionnaire Format used in the CoreQ: AL Resident Satisfaction Questionnaire
A. The face validity of the domains used in the CoreQ: AL Resident Satisfaction Questionnaire was evaluated via a literature review (described in 2b2.2). Specifically, the research team examined the surveys and reports to identify the different domains that were included, scoring each domain by counting how many instruments included it. Table 2b1.3.a gives the domains that were found throughout the search and their respective scores. For example, the domain of food was used in 11 of the 12 surveys; an interpretation of this finding is that items addressing food are extremely important in satisfaction surveys in AL. These domains were used in developing the pilot CoreQ: AL Resident Satisfaction questionnaire items.
B. The face validity of the domains was also examined with residents (described above). The abbreviated table below (Table 2b1.3.b) shows the rank of importance for each group of domains, where 1 = most important and 22 = least important. The rankings of the 4 areas used in the CoreQ: AL Resident Satisfaction questionnaire are shown in Table 2b1.3.b.
C. The face validity of the pilot CoreQ: AL Resident Satisfaction questionnaire response scale was also examined (described above). Table 2b1.3.c gives the percent of respondents who stated they fully understood how the response scale worked, could complete the scale, and, in cognitive testing, understood the scale.
D. The CoreQ: AL Resident Satisfaction Questionnaire was purposefully written using simple language. No a priori goal for reading level was set; however, a Flesch-Kincaid grade level of six or lower was achieved for all questions.
2. Testing the Items for the CoreQ: AL Resident Satisfaction Questionnaire
A. The pilot CoreQ: AL Resident Satisfaction questionnaire items all performed well with respect to the distribution of the response scale and with respect to missing responses.
B. Using all items in the instrument (excluding the global item Q1, “How would you rate the facility?”), exploratory factor analysis (EFA) was used to evaluate the construct validity of the measure. The Eigenvalues of the first and second principal factors (unrotated) were 10.93 and 0.710, respectively. Sensitivity analyses using principal factors with rotation provided highly similar findings.
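For illustration, the following is a minimal sketch of this eigenvalue check (Python/NumPy) using synthetic stand-in data, not the actual pilot responses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the pilot data: one latent "satisfaction" factor
# driving 19 items (global item Q1 excluded), on a 5-point response scale.
n_respondents, n_items = 500, 19
latent = rng.normal(size=(n_respondents, 1))
loadings = rng.uniform(0.6, 0.9, size=(1, n_items))
noise = rng.normal(scale=0.5, size=(n_respondents, n_items))
responses = np.clip(np.round(3 + latent @ loadings + noise), 1, 5)

# Eigenvalues of the item correlation matrix: a first eigenvalue that
# dwarfs the second (the submission reports 10.93 vs. 0.710) is the
# signature of a single common factor.
corr = np.corrcoef(responses, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
print(np.round(eigenvalues[:2], 3))
```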
3. Determine if a Sub-Set of Items Could Reliably be Used to Produce an Overall Indicator of Satisfaction (The CoreQ: AL Resident Measure).
A. To support the construct validity (i.e., that the CoreQ items measure a single concept of “satisfaction”), we performed a correlation analysis using all items in the instrument. The analysis identifies the pairs of CoreQ items with the highest correlations; the highest are shown in Table 2b1.3.d. Items with the highest correlations potentially provide similar satisfaction information and were therefore candidates for elimination from the instrument. Note that although the table provides 7 sets of correlations, the analysis examined all possible correlations between items.
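A redundancy screen of this kind can be sketched as follows (illustrative only; `responses` is assumed to be an (n_respondents, n_items) array such as the synthetic one above):

```python
import numpy as np

def top_correlated_pairs(responses: np.ndarray, k: int = 7):
    """Return the k item pairs with the highest pairwise correlations.

    Highly correlated pairs likely carry near-duplicate satisfaction
    information, making one member of each pair a candidate for removal.
    """
    corr = np.corrcoef(responses, rowvar=False)
    n_items = corr.shape[0]
    pairs = [(corr[i, j], i, j)
             for i in range(n_items) for j in range(i + 1, n_items)]
    return sorted(pairs, reverse=True)[:k]

# Example (using the synthetic `responses` array from the sketch above):
# for r, i, j in top_correlated_pairs(responses):
#     print(f"items {i} and {j}: r = {r:.2f}")
```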
B. In addition, a factor analysis was conducted using all items in the instrument. Starting from the global item Q1 (“How satisfied are you with the facility?”), the Cronbach’s alpha obtained by adding the “best” additional item is shown in Table 2b1.3.e. Cronbach’s alpha measures the internal consistency of the items entered into the analysis; a value of 0.7 or higher is generally considered acceptable. The additional item is “best” in the sense that it is most highly correlated with the existing item(s), and therefore provides little additional information about the same construct; this analysis was also used to eliminate items. Note that, as above, the table provides a limited set of correlations; the analysis examined all possible correlations between items.
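For reference, Cronbach’s alpha can be computed directly from a response matrix; the following is a minimal sketch, not the developer’s code:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix.

    alpha = k / (k - 1) * (1 - sum of item variances / variance of the
    total score); values of 0.7 or higher are conventionally read as
    acceptable internal consistency.
    """
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Example: cronbach_alpha(responses) on the synthetic data above.
```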
Thus, using the correlation information and the factor analysis, the 4 items comprising the CoreQ: AL Resident Satisfaction questionnaire were identified.
4. Validity Testing for the CoreQ: AL Resident Measure
The overall intent of the analyses described above was to identify if a sub-set of items could reliably be used to produce an overall indicator of satisfaction, the CoreQ: AL Resident Satisfaction Questionnaire.
A. The items were all scored according to the rules identified elsewhere. The same scoring was used to create the 4-item CoreQ: AL Resident Satisfaction Questionnaire summary score and the satisfaction score from the pilot CoreQ: AL Resident Satisfaction Questionnaire. The correlation between the two scores was 0.94. That is, the correlation between the final CoreQ: AL Resident Satisfaction Measure and the score based on all 20 items of the pilot instrument indicates that the satisfaction information is approximately the same whether the 4-item version or the 20-item pilot instrument is used.
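A minimal sketch of this comparison follows, using synthetic facility data and an assumed illustrative scoring rule (percent of respondents whose mean item rating meets a threshold); the actual scoring rules are those defined in the measure specification:

```python
import numpy as np

def coreq_summary_score(responses: np.ndarray, threshold: float = 3.0) -> float:
    """Facility score: percent of respondents whose mean item rating meets
    the satisfaction threshold (the threshold here is an assumption made
    for illustration only)."""
    return 100.0 * (responses.mean(axis=1) >= threshold).mean()

rng = np.random.default_rng(1)
scores_4_item, scores_20_item = [], []
for _ in range(50):  # 50 hypothetical facilities, 40 respondents each
    pilot = np.clip(np.round(rng.normal(3.8, 0.9, size=(40, 20))), 1, 5)
    scores_20_item.append(coreq_summary_score(pilot))
    scores_4_item.append(coreq_summary_score(pilot[:, :4]))  # short form

# A high facility-level correlation between the two versions would
# indicate the short form retains the information in the full item set.
r = np.corrcoef(scores_4_item, scores_20_item)[0, 1]
print(f"facility-level correlation between versions: {r:.2f}")
```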
B. We performed additional validity testing of the facility-level CoreQ: AL Resident Satisfaction Measure by measuring the correlations between the CoreQ: AL Resident Satisfaction measure scores and several other quality metrics from facilities (see Table 2b1.3.f). We hypothesized that the CoreQ scores would correlate with the other quality indicators in the direction consistent with better quality.
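The direction check can be sketched as follows; the indicator names and hypothesized signs below are illustrative, not the exact contents of Table 2b1.3.f:

```python
import numpy as np

# Hypothesized direction of each quality metric relative to satisfaction:
# -1 where lower values indicate better quality (e.g., hospitalizations or
# staff turnover), +1 where higher values do (e.g., occupancy).
HYPOTHESIZED_SIGNS = {
    "hospitalization_rate": -1,
    "all_staff_turnover": -1,
    "occupancy": +1,
}

def direction_consistent(coreq_scores, metric_values, hypothesized_sign):
    """True when the observed correlation between facility CoreQ scores
    and a quality metric has the hypothesized sign."""
    r = np.corrcoef(coreq_scores, metric_values)[0, 1]
    return np.sign(r) == hypothesized_sign

# Example with synthetic facility-level data:
rng = np.random.default_rng(3)
coreq = rng.uniform(60, 100, size=100)
turnover = 120 - coreq + rng.normal(scale=10, size=100)  # anti-correlated
print(direction_consistent(coreq, turnover, HYPOTHESIZED_SIGNS["all_staff_turnover"]))
```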
1. Validity Testing for the Questionnaire Format used in the CoreQ: AL Resident Satisfaction Questionnaire
A. The literature review shows that the domains used in the pilot CoreQ: AL Resident Satisfaction questionnaire items have a high degree of both face validity and content validity.
B. Residents’ overall rankings of the general “domain” areas used indicate a high degree of both face validity and content validity.
C. The results show that 100% of residents were able to complete the response format used. This testing indicates a high degree of both face validity and content validity.
D. The Flesch-Kincaid grade level achieved for all questions indicates that respondents have a high degree of understanding of the items.
2. Testing the Items for the CoreQ: AL Resident Satisfaction Questionnaire
A. The percent of missing responses for the items is very low. The distribution of the summary score is wide. This is important for quality improvement purposes, as AL facilities can use benchmarks.
B. EFA shows that one factor explains the common variance of the items. A single factor can be interpreted as the only “concept” being measured by those variables. This means that the instrument measures the global concept of satisfaction and not multiple areas of satisfaction. This supports the validity of the CoreQ instrument as measuring a single concept of “customer satisfaction”. This testing indicates a high degree of criterion validity.
3. Determine if a Sub-Set of Items Could Reliably be Used to Produce an Overall Indicator of Satisfaction (The CoreQ: AL Resident Measure).
A. A high degree of correlation was identified between the pilot CoreQ: AL Resident Questionnaire (20 items) and the 4 items representing the CoreQ: AL Resident Satisfaction Questionnaire. This testing indicates a high degree of criterion validity.
B. EFA shows that one factor explains the common variance of the items. A single factor can be interpreted as the only “concept” being measured by those variables. This means that the instrument measures the global concept of satisfaction and not multiple areas of satisfaction. This supports the validity of the CoreQ instrument as measuring a single concept of “customer satisfaction”. This testing indicates a high degree of criterion validity.
4. Validity Testing for the CoreQ: AL Resident Measure.
A. The correlation of the 4-item CoreQ: AL Resident Satisfaction measure summary score (identified elsewhere in this document) with the overall satisfaction score (scored using all data and the same scoring metric) gave a value of 0.96. That is, the correlation between the actual CoreQ: AL Resident Satisfaction Measure and the score based on all 20 items used in the pilot instrument indicates that the satisfaction information is approximately the same whether the 4 items or the 20-item pilot set is used. This indicates that the CoreQ: AL Resident Satisfaction instrument summary score adequately represents the overall satisfaction of the facility. This testing indicates a high degree of criterion validity.
B. Relationship with Quality Indicators
The 9 quality indicators examined had a moderate level of correlation with the CoreQ: AL Resident Satisfaction measure, with correlations ranging from 0.02 to 0.21. The CoreQ: AL Resident Satisfaction measure is associated with 7 of the 9 quality indicators in the direction hypothesized (that is, higher CoreQ scores are associated with better quality indicator scores). This testing indicates a moderate degree of construct validity and convergent validity.
As noted by Mor and associates (2003, p.41) when addressing quality of long-term care facilities, “there is only a low level of correlation among the various measures of quality.” Castle and Ferguson (2010) likewise show that correlations among quality indicators in long-term care facilities are consistently moderate. Thus, it is not surprising that “very high” levels of correlation were not identified. As described in the literature, some correlation was identified in the expected direction, which supports the validity of the CoreQ: AL Resident Satisfaction Measure.
Risk Adjustment
No research (to date) has risk adjusted or stratified satisfaction information from AL facilities. Risk adjustment was examined as part of the federal initiative to develop a CAHPS® Nursing Home Survey measuring nursing home residents’ experience (hereafter referred to as NHCAHPS) (RTI International, 2003). No empirical or theoretical basis for risk-adjusted or stratified reporting of satisfaction information was found, as the evidence showed no clear relationship between resident characteristics and satisfaction scores. We note that this testing was conducted in nursing facilities, not AL; it is cited here because very little information exists on satisfaction testing in AL facilities.
Education may influence responses to the questions asked. That is, respondents with lower education levels may not appropriately interpret the items. To address this, our items were written and tested to very low Flesch-Kincaid levels. In testing, no differences in average item scores were identified based on education level at the .05 level (Table 2b3.4b.c). A t-test analysis was used to compare CoreQ mean scores by race (Table 2b3.4b.d); this analysis demonstrated that the CoreQ: AL Resident Satisfaction measure does not differ significantly by race. Based on these results, neither the education-level makeup nor the racial makeup of the respondents appears to be related to this measure. We included these background characteristics for two reasons: first, to examine whether any responses differed based on these factors (in no case did they); second, to examine the representativeness of the samples (the samples examined were representative of national AL figures).
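The group comparison described here can be sketched with a standard two-sample t-test (Python/SciPy); the groups and values below are hypothetical stand-ins, not the submission’s data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical respondent-level CoreQ summary scores split by a background
# characteristic (education level or race); sizes and values are illustrative.
group_a = rng.normal(85, 10, size=120)
group_b = rng.normal(85, 10, size=90)

# Welch's t-test; a non-significant result at the .05 level would mirror
# the submission's finding of no difference by education level or race.
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```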
Multiple studies in the past twenty years have examined racial disparities in the care of nursing facility residents and have consistently found poorer care in facilities with high minority populations (Fennell et al., 2000; Mor et al., 2004; Smith et al., 2007). No equivalent work in AL facilities exists; therefore, the nursing facility work is referenced here.
Work on racial disparities in quality of care between elderly white and black nursing facility residents has shown clearly that nursing homes remain relatively segregated and that nursing home care can be described as a tiered system in which Blacks are concentrated in marginal-quality homes (Li, Ye, Glance & Temkin-Greener, 2014; Fennell, Feng, Clark & Mor, 2010; Li, Yin, Cai, Temkin-Greener & Mukamel, 2011; Chisholm, Weech-Maldonado, Laberge, Lin & Hyer, 2013; Mor et al., 2004; Smith et al., 2007). Such homes tend to have serious deficiencies in staffing ratios and performance, and are more financially vulnerable (Smith et al., 2007; Chisholm et al., 2013). Based on a review of the nursing facility disparities literature, Konetzka and Werner concluded that disparities in care are likely related to this racial and socioeconomic segregation rather than to within-provider discrimination (Konetzka & Werner, 2009). This conclusion is supported, for example, by Gruneir and colleagues, who found that as the proportion of black residents in a nursing home increased, the risk of hospitalization among all residents, regardless of race, also increased (Gruneir et al., 2008). Thus, adjusting for racial status has the unintended effect of adjusting for poor-quality providers rather than for differences due to racial status or within-provider discrimination.
Satisfaction scores also likely decline as the proportion of black residents increases, indicating that the best measure of racial disparities in satisfaction is one that measures scores at the facility level. That is, ethnic and socioeconomic differences are related to inter-facility differences in care, not intra-facility differences. Therefore, the literature suggests that racial status should not be risk adjusted; otherwise one is adjusting for the poor quality of the SNFs rather than for differences due to racial status. We believe the same is true for AL facilities.
Chisholm L, Weech-Maldonado R, Laberge A, Lin FC, Hyer K. (2013). Nursing home quality and financial performance: does the racial composition of residents matter? Health Serv Res, 48(6 Pt 1), 2060-2080.
Fennell ML, Feng Z, Clark MA, Mor V. (2010). Elderly Hispanics more likely to reside in poor-quality nursing homes. Health Aff (Millwood), 29(1), 65-73.
Gruneir A, Miller SC, Feng Z, Intrator O, Mor V. (2008). Relationship between state Medicaid policies, nursing home racial composition, and the risk of hospitalization for black and white residents. Health Serv Res, 43(3), 869-881.
Konetzka RT, Werner RM. (2009). Disparities in long-term care: building equity into market-based reforms. Med Care Res Rev, 66(5), 491-521.
Li Y, Ye Z, Glance LG, Temkin-Greener H. (2014). Trends in family ratings of experience with care and racial disparities among Maryland nursing homes. Med Care, 52(7), 641-648.
Li Y, Yin J, Cai X, Temkin-Greener H, Mukamel DB. (2011). Association of race and sites of care with pressure ulcers in high-risk nursing home residents. JAMA, 306(2), 179-186.
Mor V, Zinn J, Angelelli J, Teno JM, Miller SC. (2004). Driven to tiers: socioeconomic and racial disparities in the quality of nursing home care. Milbank Q, 82(2), 227-256.
RTI International. (2003). RTI International Annual Report. Research Triangle Park, NC: RTI’s Office of Communications, Information and Marketing.
Use & Usability
Use
For the facility-level uses, the level of analysis is the facility, and the care settings are skilled nursing and assisted living facilities. For the state-level uses, the level of analysis is the state, and the care setting is assisted living facilities.
Usability
Improving performance relies on the testing of change and benchmarking. Frequently collecting data is a necessary step to enhance and maximize quality improvement. Data collected during tests provides critical insight that is needed to determine the best path forward. Benchmarking is a process used to measure the quality and performance of your organization. Benchmarking plays a significant role in identifying patterns, providing context, and then guiding decision-making processes.
The CoreQ Resident Satisfaction measure allows assisted living facilities to measure the impact of tests of change and benchmark their performance relative to other facilities. Specifically, facilities can increase the number of staff and/or improve staff training and measure the impact using CoreQ. Similarly, reductions in adverse events, such as falls and hospitalizations, increase residents’ ratings of the care received and increase satisfaction. Finally, facilities can understand and address the needs and wants of residents, such as certain activities or food, to increase residents’ willingness to recommend the facility and improve CoreQ performance.
The actions needed to improve performance are not difficult once a process or plan for improvement is developed (e.g., Quality Assurance/Performance Improvement (QAPI)). Measured entities can overcome difficulties by monitoring data and results. Developing a feedback and monitoring system to sustain continuous improvement helps providers preserve the advances of the quality improvement effort.
The CoreQ measure for assisted living residents has elevated the resident and family voice and helped guide consumer choice by giving potential residents another way to review the quality of a care facility. Specifically, the CoreQ measure has been independently tested as a valid and reliable measure of customer satisfaction. The CoreQ is a short survey of three to four questions, which reduces response burden on residents and allows organizations to benchmark their results with consistent questions and a consistent response scale. Satisfaction vendors and providers have particularly appreciated how easy it is to integrate the CoreQ questions into their satisfaction surveys. They believe the short length relative to other survey tools, like HCAHPS, helps increase and maintain high response rates.
AHCA/NCAL developed LTC Trend Tracker, a web-based tool that enables long term and post-acute care providers, including assisted living, to access key information that can help their organization succeed. The CoreQ report and upload feature within LTC Trend Tracker includes an API (application programming interface) for vendors performing the survey on behalf of ALs to upload data, so that the aggregate CoreQ results will be available to providers. Given that LTC Trend Tracker is the leading method for NCAL AL members to profile their quality and other data, the incorporation of CoreQ into LTC Trend Tracker means it will immediately become the de facto standard for customer satisfaction surveys for the AL industry. AHCA/NCAL continues to work with customer satisfaction vendors to promote CoreQ and receives requests for vendors to be added to the list of those incorporating CoreQ. Currently, there are over 40 vendors across the nation who can administer the CoreQ survey.
We are also working with states that require satisfaction measurement to incorporate CoreQ into their process. AHCA/NCAL has a presence in each state, and our state affiliates continue to promote the use of the CoreQ.
Feedback is continuously obtained through meetings with facility operators and vendors serving on AHCA/NCAL’s Customer Experience Committee and the CoreQ Vendors’ Workgroup. The purpose of the Customer Experience Committee is to champion the importance of meeting customer expectations now and in the future. This includes defining quality from the consumer’s perspective. Key areas of focus include collecting, analyzing, and using data to drive performance improvement, and the application of successful practices. The CoreQ Vendors’ Workgroup was created to help improve CoreQ usage and discuss ways to best support the CoreQ Vendors’ who administer the surveys.
Among providers and vendors, we receive feedback during committee and workgroup meetings. For feedback on LTC Trend Tracker, we scope out the cost and feasibility of suggested enhancements. For example, we added a more graphical user interface option for the API, in addition to the original command line interface that was more technical, based on feedback from vendors.
For some of the feedback we receive, we use it as an opportunity to educate about best practices in survey collection and administration. For example, some vendors and providers inquire about administering CoreQ over the phone or other mixed modes of collection. In this instance, we caution vendors and providers about possible response or interviewer bias and recommend using written surveys as the primary method because it has been tested and shown to be reliable and valid.
LTC Trend Tracker is a web-based tool that enables long term and post-acute care providers, including assisted living, to access key information that can help their organization succeed. AL facilities report CoreQ performance results in LTC Trend Tracker for benchmarking and state comparisons. AHCA/NCAL monitored the impact of the COVID-19 pandemic on satisfaction trends among AL residents in the nation. The data shows:
- In 2020Q1, satisfaction rates were 86.3%, representing 255 AL facilities.
- In 2021Q1, satisfaction rates decreased to 80.3%, representing 140 AL facilities. By the end of 2021, satisfaction rates had dropped to 76.4%, representing 227 AL facilities.
- In 2024Q3, satisfaction rates increased to 81.0%, representing 200 AL facilities.
Monitoring satisfaction rates during the pandemic and after helped facilities/operators benchmark and trend their COVID-19 related performance.
No negative consequences to individuals or populations were identified during testing, and no unintended negative consequences to individuals or populations have been reported since the implementation of the CoreQ: AL Resident Satisfaction questionnaire or the measure calculated from it. This is consistent with satisfaction surveys in general in nursing facilities. Many other satisfaction surveys are used in AL facilities with no reported unintended consequences to patients or their families.
There are no potentially serious physical, psychological, social, legal, or other risks for patients. However, in some cases the satisfaction questionnaire can highlight poor care for some dissatisfied patients, and this may make them further dissatisfied.
Comments
Staff Preliminary Assessment
CBE #3420 Staff Assessment
Importance
Strengths:
- Logic Model: A clear logic model is provided, depicting the relationships between inputs (i.e., domains assessed by CoreQ: Assisted Living (AL) Resident Satisfaction Questionnaire), activities (i.e., example processes that could improve satisfaction with care like staff training), and desired outcomes (e.g., ratings of care received). This model demonstrates how the measure's implementation will lead to resident satisfaction.
- Patient Input: Description of patient input supports the conclusion that the CoreQ: AL Resident Satisfaction Questionnaire (which underlies the measure) is meaningful to patients with at least moderate certainty. Patient input was obtained through focus groups conducted with 40 residents of 5 AL facilities in the Pittsburgh area.
Limitations:
- Evidence and Literature Review: The measure was previously endorsed in 2019. Nearly all literature cited is from before 2018 and may not reflect recent advances in this area.
- Performance Gap: Data from 511 facilities collected in 2023 and 2024 show a decile range from 55% to 100%. While performance differences by decile are compelling, summing the number of entities across deciles results in 565 rather than 511. In addition, the n of entities in each decile does not appear to reflect expected decile sizes, which should be approximately 10% of the total n of entities. Finally, the performance scores and the number of entities by decile are nearly identical to those reported for measure #3422. Thus, it is difficult to determine from this table whether a gap in care remains.
Rationale:
- Summary: This previously endorsed measure meets many criteria for 'Met' due to its clear business case, documented performance gap, and well-articulated logic model. Measuring resident satisfaction is essential to helping AL facilities understand patient preferences, supporting patients and their families in choosing facilities, and allowing AL facilities to monitor and improve the quality of care they provide. However, the Recommendation Group should consider whether more recent evidence should be incorporated into the literature review. Further, the performance gap data submitted for this measure are identical to those submitted for measure #3422 and have other limitations as well. The committee may wish to seek clarification on the issues identified.
Feasibility Acceptance
Strengths:
- Data Collection Strategy: The measure developer asserts all data used to calculate the measure are routinely generated and used during care. To support measure maintenance, AL communities completing the measure were given instructions to calculate the measure. Facilities had all information available except for cognitive status.
- Licensing and Fees: There are no fees, licensing, or other requirements to use any aspect of the measure (e.g., value/code set, risk model, programming code, algorithm).
Limitations:
- Data Collection Strategy: To support measure maintenance, AL communities completing the measure were given instructions to calculate the measure. Facilities had all needed information available except for cognitive status, which is necessary to determine which residents' surveys should be excluded from the measure. Incorporating this exclusion may create burden for AL facilities. It is unclear whether all data used to calculate the measure are in an electronic format, and if not, whether there is a near-term plan to support routine and electronic data capture.
Rationale:
- This previously endorsed measure meets most criteria for 'Met' due to its established data collection strategy and lack of licensing and fees, ensuring practical implementation within the healthcare system. However, it is unclear if all data used to calculate the measure is in an electronic format, and if not, if there is a near-term plan to support routine and electronic data capture.
Scientific Acceptability
Strengths:
- Data Sources and Dates: Data used for testing were sourced from responses to the CoreQ questionnaire created by the developer during 2023 and 2024. The five entities included in the analysis were assisted living communities from across the country.
- A minimum sample size of 20 and an overall response rate of 30% are needed for the measure.
- Patient or Encounter Level Reliability: The developer conducted inter-abstractor reliability testing at the person- or encounter-level for all critical data elements. The developer reported 97-98% (0.97-0.98) agreement for each question in the survey and 95-100% (0.95-1.00) agreement for each response within each question, which meets the expected threshold of 0.4.
Limitations:
- Data Sources and Dates: Reliability testing was conducted at only five facilities.
- Accountable Entity Level Reliability: The developer appears to have conducted a bootstrap version of reliability at the accountable entity level. Only the mean signal-to-noise reliability (0.84) is given, so there is insufficient evidence to know whether >70% of entities have reliability >0.60. The method description and interpretation of the reliability results do not appear to match the approved accountable entity-level reliability methods needed to evaluate a maintenance measure.
Rationale:
- The current accountable entity-level reliability metrics do not meet the established thresholds, indicating potential issues with the consistency and accuracy of the results across different settings and populations. However, the identified limitations are deemed addressable, as the developer may consider increasing the sample size to meet the requirements of the selected statistical methods. Addressing these issues has the potential to enhance reliability.
Strengths:
- The developer provides an extensive discussion of person- or episode-level validity, particularly in the context of instrument development. Overall, the validity of the instrument domains was supported by literature, resident assessments, the infrequency of missing data, exploratory factor analysis, and the correlation between the 4-item version and the 20-item version. To substantiate the validity claim, namely a causal association between the facility response [known and effective] to the measure and the measure focus, the developer provided association studies. Association studies included the importance table (Table 1), which demonstrated a correlation between the facility and the measure focus, and associations (correlations) with related process and outcome measures (Hospitalization, Rehospitalization, Off-label use of Antipsychotic drugs, LPN Turnover, Aide Turnover, Administration Turnover, DON Turnover, All Staff Turnover, and Occupancy). The correlations ranged from 0.02 to 0.21, with 7 of the 9 quality indicators in the direction hypothesized.
Limitations:
- Causal claims based on association (correlation) studies alone are prone to bias (i.e., confounding due to a common cause cannot be ruled out). Additional support from mechanism studies that confirm the existence of a suitable (plausible) mechanism capable of accounting for the observed correlation would strengthen the causal claim. Otherwise, statements about the relative magnitude of the observed correlations, and whether those magnitudes are greater or lesser than what one might anticipate, are difficult to evaluate. Face validity of the performance score was not systematically assessed. The data are from 2018. The Importance Table offers mixed support to the validity claim, with a low level of variation among entities.
- Measure is not risk adjusted. Rationale for not performing risk adjustment is based on dated literature (2003-2014) and should be reassessed.
Rationale:
- The data should be updated to reflect more recent performance. Going forward, additional studies that either rule-out potential confounding or describe features of potential mechanisms will strengthen causal claims.
- Measure is not risk adjusted. Rationale for not performing risk adjustment is based on dated literature (2003-2014) and should be reassessed.
Equity
Strengths:
- The developer analyzed performance scores for white and black residents in nursing homes, highlighting racial disparities.
- The developer acknowledged that black residents often receive care in lower-quality facilities, which impacts performance scores.
Limitations:
- Interpretation of Results: The measure developer reports they are currently comparing CoreQ scores for White residents to CoreQ scores for Black residents. While scores for Black residents are lower than scores for White residents, Black residents are disproportionately cared for in lower quality facilities, which may influence scores. Additionally, only 2% of respondents were Black. Therefore, additional efforts are needed to determine if the measure can identify differences in care for certain patient populations.
Rationale:
- While the measure attempts to assess equity in health care delivery and outcomes, additional work is needed to ensure the measure provides valid comparisons between Black and White residents who responded to the CoreQ:AL Resident Satisfaction Questionnaire. This limits the ability to provide a comprehensive understanding of the differences in performance across different populations.
Use and Usability
Strengths:
- Current Use: The measure is currently used in the National Quality Award Program, the LTC Trend Tracker, Residential Care Quality Metrics Program/Oregon Department of Human Services, and the Assisted Living Report Card/MN Department of Health Aging and Adult Services Division (AASD).
- Actions for Improvement: The developer provides a summary of how accountable entities can use the measure results to improve performance. They note facilities can improve their scores by increasing the number of staff, improving staff training, reducing adverse events like falls and hospitalizations, and addressing the needs and wants of residents like providing activities or enhancing food quality.
- Feedback Mechanism: The measure developer continuously obtains feedback on the measure through meetings with facility operators and vendors on AHCA/NCAL’s Customer Experience Committee and the CoreQ Vendors’ Workgroup. AHCA/NCAL developed LTC Trend Tracker, a web-based tool that enables long term and post-acute care providers, including assisted living, to access key information that can help their organization succeed. They have made improvements to LTC Trend Tracker based on user feedback.
- Performance Trends: The developer reports changes in performance during the COVID-19 pandemic. In Q1 2020, satisfaction rates averaged 86.3% across 255 AL facilities. In Q4 2021, this had dropped to 76.4% across 227 AL facilities, consistent with challenges related to the COVID-19 pandemic. In Q3 2024, satisfaction rates averaged 81.0% across 200 AL facilities, indicating the measure's ability to identify differences in performance over time. These data are identical to data submitted to support the usability of measure #3422; however, the developer confirmed that the performance data presented are correct for measure #3420.
- Findings Identified: The developer reports no unexpected findings.
Limitations:
- None.
Rationale:
- For maintenance, the measure is actively used in at least one accountability application, with a clear feedback approach that allows for continuous updates based on stakeholder feedback. The developer reports no unexpected findings. Usability data demonstrate the measure is sensitive to changes in the care environment.
Committee Independent Review
3420 Summary
Importance
Agree with staff preliminary assessment - literature review needs to be updated. Items based on early focus groups with a small number of residents (n=40) - would be important to determine if resident priorities remain the same or have changed since the focus groups were conducted.
Feasibility Acceptance
Agree with staff preliminary assessment - overall feasibility strong but important to evaluate availability of data in electronic format.
Scientific Acceptability
Agree with staff preliminary assessment
Agree with staff preliminary assessment. Justification for not risk adjusting needs to be re-evaluated.
Equity
Important gap in previous evaluations of current measure.
Use and Usability
Agree with staff preliminary assessment
Summary
I support the measure with conditions. This measure is one of a small set of PRO-PM measures for assisted living and fills an important gap. The measure developers should be asked to complete the addressable issues, including updating the literature review, evaluating readiness for electronic data capture, and further evaluating reliability and validity to assure that the measure meets established thresholds. The equity issues detailed in the advisory committee comments and in the staff preliminary assessment need to be addressed in substantive detail.
Not supported
Importance
- Focus groups at five facilities in Pittsburgh for the original submission (2018) and none since; not that informative. The CoreQ measures use a poor-to-excellent Likert scale. Nothing person-centered, specific, or actionable. What is poor, what is excellent, and for what person or staff member?
Feasibility Acceptance
Without cognitive status, we wouldn’t know who filled it out and who should be excluded
Scientific Acceptability
Testing done pre-2018. I’m confused about resident-level data and facility-level data. Somewhere I read about 64% and 100% response rates. How can that be?
Testing done pre-2018. I’m confused about resident-level data and facility-level data. Somewhere I read about 64% and 100% response rates. How can that be?
Equity
I can’t tell what diversity is being measured; for race, little variation is shown. Could consider income level or dual Medicare/Medicaid eligibility.
Use and Usability
As a Quality Improvement professional, I couldn’t use this data to monitor improvement. There’s no meat. Could use text responses examined with Large Language Models.
Summary
The background data is pre-2018. The Likert scale has no definition. Couldn't use this to improve anything.
Needed measure
Importance
Noting the staff question of whether a gap in care remains; this will be good to document, albeit I'm sure there is one.
Feasibility Acceptance
Noted staff highlight of electronic data collection; these days, let's consider this a must.
Scientific Acceptability
Update to current data
Update to current data
Equity
Agree, need to simply note this
Use and Usability
Adequate
Summary
Update to current data and input techniques
CBE 3420 CoreQ: AL Resident Satisfaction
Importance
As a patient partner, I find this measure to be very important to AL patients. The developer defined this well:
- Measuring satisfaction is valuable to patients and improves the patient-provider relationship.
- Satisfaction information can help facilities improve the quality of care they provide.
I also agree with the Staff Assessment that the references should be updated (and also the logic model) to reflect more current data.
What was difficult for a patient to understand is how this survey would help patients and their families choose a health care facility with the data provided.
Feasibility Acceptance
I agree with the Staff Assessment that this previously endorsed measure meets most criteria for 'Met' due to its established data collection strategy and lack of licensing and fees, ensuring practical implementation within the healthcare system.
As a patient partner, it wasn’t clear if the AL communities were using the AHCA/NCAL-developed LTC Trend Tracker to input their data, or whether the developers collect their data from this tool or from multiple tools. It was mentioned in the Advisory Committee discussion that the data haven’t changed much and that there are many repositories of data.
Scientific Acceptability
As a patient partner, I am not an expert on Scientific Acceptability Reliability. As such, I will agree with the Staff Assessment that the identified limitations are deemed addressable, as the developer may consider increasing the sample size to meet the requirements of the selected statistical methods.
As a patient partner, I am not an expert on Scientific Acceptability Validity. As such, I will agree with the Staff Assessment that the data should be updated to reflect more recent performance. Rationale for not performing risk adjustment is based on dated literature (2003-2014) and should be reassessed.
Equity
As a patient partner, I noted that the majority of patients in the AL community are females aged 85 or older who are non-Hispanic white (per AHCA/NCAL).
The developer notes that in the AL data they received, very few (<2%) respondents were black. Because of this, they have been reaching out to other facilities with minority populations these last 2 years.
The developer did highlight racial disparities in nursing homes between white and black residents noting that overall scores for black residents are lower than those for white residents. The developer also mentioned adding newer dimensions to the survey (such as insurance since most stays are private pay) that may capture why there is a gap in minority respondents.
Use and Usability
I agree with the Staff Assessment.
Summary
I support this measure with conditions noted in the Staff Assessment.
Measure 3420
Importance
Agree with staff recommendations
Feasibility Acceptance
Agree with staff summary
Scientific Acceptability
Why is a response rate of 30% needed if the minimum sample size is met? Important data may be lost due to this exclusion, and given survey fatigue in our society, I would like to see this threshold reconsidered.
Agree with staff recommendation
Equity
More data are needed from more facilities
Use and Usability
Agree with staff summary
Summary
This measure needs more data
Mostly Met Criteria
Importance
Resident satisfaction in assisted living facilities is undoubtedly an important part of care in this difficult setting. Although the literature cited is in need of update, there is no reason to believe that the importance of this measure has changed recently or will change in the future.
Feasibility Acceptance
This is a straightforward measure that is relatively easy to implement and for which data are relatively easy to collect and track.
Scientific Acceptability
To some extent, it depends on how these data are used. If facilities will be graded based on their scores on this metric, it is very unclear to what extent a lower or higher score reflects the quality of the facility versus factors relating to the residents that are beyond the facility's control. Risk adjustment based on the co-morbidities of the residents is an interesting idea, although I am unsure if it would truly address this issue.
Additionally, I have concerns with the denominator exclusion for surveys returned outside of the 2 month time frame. This potentially can allow for the facility to have some control over who does and does not return the survey in a timely fashion.
See above. The measure is scientifically valid, but it is not entirely clear to me how to interpret a higher or lower score vis-à-vis what it means about the facility's quality.
Equity
The facility can only take care of their patient population. The fact that this survey is given to all residents of a facility to me means that they are being graded the same on how they care for everyone who is there.
Use and Usability
This survey is useable and appears to very much be in use.
Summary
This measure mostly met criteria for maintenance in my estimation, with a few areas that need to be addressed as detailed above.
Summary
Importance
The measure aligns with the healthcare shift toward person-centered care, emphasizing patient and family satisfaction as integral to quality improvement. It provides standardized tools to benchmark and compare satisfaction scores across assisted living (AL) facilities, addressing the variability in current satisfaction instruments.
Feasibility Acceptance
The administrative burden for data collection is high due to staffing shortages in ALs. The minimum sample size of 20 and a 30% response rate exclude most AL facilities, which typically have fewer than 10 residents in the U.S., making the measure impractical for most providers.
Scientific Acceptability
The absence of risk adjustment makes it difficult to account for differences in resident populations (e.g., socioeconomic factors, racial composition), which may influence satisfaction scores independently of facility quality. The lack of stratification prevents targeted identification of disparities within and across facilities. The measure relies on data collected as early as 2018 for testing validity and reliability. While updated data from 2023–2024 is mentioned, comprehensive analyses using current data are limited.
Agree with staff
Equity
The sample data lacks representation of non-white residents, who may form the majority population in AL facilities depending on geography. Efforts to oversample diverse populations and stratify results by race and ethnicity are ongoing but insufficient. The sample’s lack of diversity raises concerns about the measure’s generalizability and ability to identify equity gaps.
Use and Usability
While the measure provides actionable insights for quality improvement, the sample size requirements and lack of risk adjustment make it inaccessible to smaller ALs. Broader adoption and practical usability require adjustments to thresholds and better support for facilities.
Summary
The measure’s importance is well-established, but improvements in feasibility, scientific acceptability, equity, and usability are needed to ensure broader adoption and meaningful impact. Lowering sample size thresholds, updating validity testing, incorporating equity-focused adjustments, and simplifying administration could address these gaps.
#3420
Importance
Agree with staff comments, especially about performance gap data; I also did not understand these data
Feasibility Acceptance
Some degree of burden to the facility for data collection; how submitted/returned data are managed is not clear
Scientific Acceptability
Agree with staff comments
Agree with staff comments
Equity
The developers note that disparities currently exist in nursing homes between black and white residents, and they state that “We are continuing to examine this data.”
Use and Usability
Currently for public reporting, regulatory and accreditation programs and internal and external QI.
Usability data is the same here as for the Family Satisfaction metric
Summary
no additional comments
needs improvement
Importance
Most data related to CoreQ are old in terms of evidence, even considering this recent review: https://pmc.ncbi.nlm.nih.gov/articles/PMC7855470/
This does not negate the fact that satisfaction is important in healthcare, though it cannot be treated like satisfaction with any other product. Until there is a better overall assessment of satisfaction in healthcare, this seems the next best option.
Feasibility Acceptance
Administrative burden is high, and having a third party collect the data is expensive. The cutoff excludes many ALs if they have fewer residents.
Scientific Acceptability
Evidence is old.
Exclusion criteria make it very difficult to validate. Though satisfaction is different from quality of care, they are related, and this scale might not reflect that relation well.
Equity
More data are needed.
Interoperability support by CMS becomes essential when we suggest/endorse these measures in places where staff need to be more focused on patients.
Use and Usability
This is used currently, with some changes in trend during COVID suggesting its sensitivity to patient and family needs.
That being said, newer tools that draw on qualitative data should be considered (such as LLMs, but this needs to be supported by CMS/government and not placed as a burden on the facilities).
Summary
The scale may need to be changed based on newer data for patient experience.
Electronic collection needs to be integrated, but with support (no burden on facilities).
Important and meaningful measure
Importance
Measure 3420 is an important measure to monitor ALF resident satisfaction. As a maintenance measure, additional evidence of performance gap for this measure since the time of initial approval is needed.
Feasibility Acceptance
Measure 3420 meets feasibility as an initial measure but additional input and effort from measure developer is needed as a maintenance measure.
Scientific Acceptability
Measure 3420 has not fully met criteria for reliability for a maintenance measure to address potential issues with the consistency and accuracy of the results across different settings and populations.
Measure 3420 has not fully met criteria for validity for a maintenance measure to address potential issues with content validity and risk adjustment.
Equity
Developers of Measure 3420 have opportunity to address additional equity elements that reflect social determinants of health.
Use and Usability
Measure 3420 has met maintenance requirements for use and usability.
Summary
Measure 3420 helps potential audiences and consumers of ALF care to learn more about performance of ALF related to resident satisfaction but issues mentioned above for all criteria as a maintenance measure need to be addressed.
3420
Importance
Agree with limitations/concerns identified in staff assessment re: performance gap data and dated evidence.
Other specific concerns: The preferences of 40 residents in five assisted living facilities in Pittsburgh, determined some time before 2018, formed the basis for a resident satisfaction measure used in AL facilities nationwide. The evidence submission refers to both patient-centered care and patient satisfaction. However, the measure and related tool focus on satisfaction, not on patient-centeredness, which eliminates much of the evidence cited. A more representative focus group, conducted post-COVID, might identify different priorities associated with “satisfaction.” A 2021 study testing a different AL resident satisfaction tool noted that the CoreQ tool and its 4 items do not include aspects of satisfaction such as meaningful relationships, social activities, a home-like physical environment, adequate access to health care, and positive interactions with care staff (Holmes et al., 2021).
Feasibility Acceptance
Concerns about the cost burden to the facility. The tool is available without charge, but what are the costs of data collection? Of data analysis?
Developer acknowledges data collection burden. These are paper surveys.
Developer estimates the cost to collect data is $2.80 per survey/resident. Does this include abstraction? Does it include the cost to access the online platform and reporting features?
Scientific Acceptability
Concur with staff assessments re: limitations of reliability testing.
Concur with staff assessment re: limitations of validity testing, everything submitted appears to be based on the original work to develop the survey and obtain initial endorsement. I am also confused by the mention of correlations with related (but not cited) process and outcome measures. Those related measures seem more relevant to SNF metrics than AL metrics. In my state, by statutory definition, AL care does not include nursing care, so there would be no measures related to DON, RN or LPN turnover relevant to AL.
Risk Adjustment: Rationale for risk adjustment is questionable and dated.
Equity
Agree with staff assessment. Not clear why this section starts with reference to nursing facilities when the measure pertains to AL. Developer asserts “However, we know that black residents are disproportionately cared for in lower quality facilities,” apparently referring to nursing facilities and providing no evidence for this assertion or its relevance to AL. Would like to see a much more thoughtful and setting-specific consideration of equity and exploration of potential disparities. The statement that only 2% of AL residents are black also links back to concerns regarding the original development of the survey on which the measure is based (preferences of 40 residents in 5 AL facilities in one city).
Use and Usability
What payment program uses this measure? The entire usability section reads like a pitch to the AL industry to use the CoreQ or to join NCAL to access TrendTracker. Other than the suggestion that facilities improve their meals, it’s not clear how any of the suggested QAPI activities might improve the satisfaction scores on the other three survey items.
Summary
I do not support maintaining endorsement of this measure at this time. The dated evidence and the developer's own data showing a decrease in scores during COVID suggest that the current measure may not be a valid measure of resident satisfaction nor a useful measure to support quality improvement. These issues are all addressable, but it seems the work should start with an updated literature review and validity testing.
AL ratings needs more...
Importance
The data used to support the importance criteria is old as mentioned in the staff recommendations.
I would also like the rationale to address why imputation was used in tabulation and to understand the number of questions in the questionnaire. Were residents given only 4 questions?
Feasibility Acceptance
no additional comments.
Scientific Acceptability
The staff report suggests that relying on 2018 data is insufficient. I agree.
I am confused about the text describing how the readministration of questionnaires relates to the validity rating.
Validity is somewhat lacking given the window that residents have to respond. I also wonder about the survey instrument itself. Is the design appropriate? If it is, how long does it take, practically speaking, for a resident to complete it?
Equity
The summary of equity did not make its case. I am also concerned about the sample size in validity. Was equity considered in sample selection?
Use and Usability
no additional comments.
Summary
no additional comments.
Public Comments
Overall support
Importance performance gap: Overall scores look better than anticipated.
Feasibility: However the survey is administered, it is important to differentiate it in some way from facility-generated surveys. Hopefully this is part of the protocol.
Equity: The sponsors are aware of this significant concern. It is not misguided.