The measure calculates the percentage of Assisted Living (AL) residents (those living in the facility for two weeks or more) who are satisfied. This patient-reported outcome measure is based on the CoreQ: AL Resident Satisfaction questionnaire, a four-item questionnaire.
-
-
1.5 Measure Type
1.6 Composite Measure: No
1.7 Electronic Clinical Quality Measure (eCQM)
1.8 Level Of Analysis
1.9 Care Setting
1.9b Specify Other Care Setting: Assisted Living Facility
1.10 Measure Rationale
Collecting satisfaction information from Assisted Living (AL) residents and family members is more important now than ever. We have seen a philosophical change in healthcare that now includes the patient and their preferences as an integral part of the system of care. The Institute of Medicine (IOM) endorses this change by putting the patient as central to the care system (IOM, 2001). For this philosophical change to person-centered care to succeed, we have to be able to measure patient satisfaction for these three reasons:
(1) Measuring satisfaction is necessary to understand patient preferences.
(2) Measuring and reporting satisfaction with care helps patients and their families choose and trust a health care facility.
(3) Satisfaction information can help facilities improve the quality of care they provide.
The implementation of person-centered care in long-term care has already begun, but there is still room for improvement. The Centers for Medicare and Medicaid Services (CMS) demonstrated interest in consumers’ perspective on quality of care by supporting the development of the Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey for patients in nursing facilities (Sangl et al., 2007). We have developed three skilled nursing facility (SNF) and two assisted living CoreQ measures, and all five are endorsed by a consensus-based entity (NQF at the time).
Further supporting person-centered care and resident satisfaction are ongoing organizational change initiatives. These include: the Center for Excellence in Assisted Living (CEAL) which has developed a measure of person-centeredness of assisted living with the University of North Carolina at Chapel Hill; the Advancing Excellence in America’s Nursing Homes campaign (2006), which lists person-centered care as one of its goals; Action Pact, Inc., which provides workshops and consultations with long-term care facilities on how to be more person-centered through their physical environment and organizational structure; and Eden Alternative, which uses education, consultation, and outreach to further person-centered care in long-term care facilities. All these initiatives have identified the measurement of resident satisfaction as an essential part in making, evaluating, and sustaining effective clinical and organizational changes that ultimately result in a person-centered philosophy of care.
The importance of measuring resident satisfaction as part of quality improvement cannot be stressed enough. Quality improvement initiatives, such as total quality management (TQM) and continuous quality improvement (CQI), emphasize meeting or exceeding “customer” expectations. William Deming, one of the first proponents of quality improvement, noted that “one of the five hallmarks of a quality organization is knowing your customer’s needs and expectations and working to meet or exceed them” (Deming, 1986). Measuring resident satisfaction can help organizations identify deficiencies that other quality metrics may struggle to identify, such as communication between a patient and the provider.
As part of the US Department of Commerce’s renowned Baldrige Criteria for organizational excellence, applicants are assessed on their ability to describe the links between their mission, key customers, and strategic position. Applicants are also required to show evidence of successful improvements resulting from their performance improvement system. An essential component of this process is the measurement of customer (i.e., resident) satisfaction (Shook & Chenoweth, 2012).
The CoreQ: AL Resident Satisfaction questionnaire and measure can strategically help AL facilities achieve organizational excellence and provide high-quality care by serving as a tool that targets a unique and growing patient population. Moreover, improving the care for AL residents is attainable. A review of the literature on satisfaction surveys in long-term care facilities (Castle, 2007) concluded that substantial improvements in resident satisfaction could be made in many facilities by improving care (i.e., changing either structural or process aspects of care). This was based on satisfaction scores averaging 60 to 80% (with 100% as the maximum score).
It is worth noting that few other generalizations can be made, because existing instruments used to collect satisfaction information are not standardized (except CoreQ). Thus, benchmarking scores and comparison scores (i.e., best in class) are difficult to establish. The CoreQ: AL Resident Satisfaction Measure therefore has considerable relevance: benchmark and comparison scores are available for CoreQ, drawn from tens of thousands of returned surveys.
We developed three skilled nursing facility (SNF) based CoreQ measures: the CoreQ: Long-Stay Family Satisfaction Measure, the CoreQ: Long-Stay Resident Satisfaction Measure, and the CoreQ: Short-Stay Discharge Measure. All three received NQF endorsement in 2016. The assisted living CoreQ Resident and Family Satisfaction Measures then received NQF endorsement in 2019. Together, these five satisfaction measures enable providers, researchers, and regulators to measure satisfaction across the long-term care continuum with valid and reliable instruments.
The measure’s relevance is furthered by recent federal legislative actions. The Affordable Care Act of 2010 requires the Secretary of Health and Human Services (HHS) to implement a Quality Assurance & Performance Improvement (QAPI) program within nursing facilities. This means all nursing facilities have increased accountability for continuous quality improvement efforts. CMS’s “QAPI at a Glance” document references customer-satisfaction surveys and organizations using them to identify opportunities for improvement. Some AL communities have implemented QAPI in their organizations.
Lastly, in CMS’s National Quality Strategy (2024), one of the four key areas is advancing equity and engagement for all individuals. Specifically, CMS calls out expanding the use of person-reported outcome and experience measures as a key action. Similarly, in the most recent SNF payment rule (CMS, August 2024), CMS acknowledges an opportunity to add patient experience or satisfaction measures to the Quality Reporting Program (QRP), which spans post-acute and long-term care providers and was created by the IMPACT Act of 2014. While CMS does not provide direct oversight of assisted living, more states are covering assisted living as part of home- and community-based Medicaid waivers. As of 2020, 44% of assisted living communities were Medicaid certified (CDC, 2020). Thus, the principles of CMS’s Quality Strategy apply, and the CoreQ: AL resident measure can further CMS’s quality efforts.
Castle, N.G. (2007). A literature review of satisfaction instruments used in long-term care settings. Journal of Aging and Social Policy, 19(2), 9-42.
CDC (2020). National Post-Acute and Long-Term Care Study. https://www.cdc.gov/nchs/npals/webtables/overview.htm
CMS (2009). Skilled Nursing Facilities Non Swing Bed - Medicare National Summary. http://www.cms.hhs.gov/MedicareFeeforSvcPartsAB/Downloads/NationalSum2007.pdf
CMS, University of Minnesota, and Stratis Health. QAPI at a Glance: A step by step guide to implementing quality assurance and performance improvement (QAPI) in your nursing home. https://www.cms.gov/Medicare/Provider-Enrollment-and-Certification/QAPI/Downloads/QAPIAtaGlance.pdf.
CMS (April 2024). Quality in Motion: Acting on CMS National Quality Strategy. https://www.cms.gov/files/document/quality-motion-cms-national-quality-strategy.pdf
CMS (August 6, 2024). Medicare Program; Prospective Payment System and Consolidated Billing for Skilled Nursing Facilities; Updates to the Quality Reporting Program and Value-Based Purchasing Program for Federal Fiscal Year 2025. https://www.federalregister.gov/d/2024-16907/p-588
1.11 Measure Webpage
1.20 Testing Data Sources
1.25 Data Sources: The collection instrument is the CoreQ: AL Resident Satisfaction Questionnaire; exclusions are identified from facility health information systems.
-
1.14 Numerator
The numerator is the number of residents in the facility who have an average satisfaction score of ≥3 across the four questions on the CoreQ: AL Resident Satisfaction questionnaire.
1.14a Numerator Details
A specific date is chosen. On that date, all residents in the facility are identified, and data are then collected from all residents meeting the eligibility criteria on that date. Residents are given a maximum two-month window to complete the survey. While the frequency with which the questionnaires are administered is left up to the provider, the CoreQ questionnaire should be administered at least once a year. Only surveys returned within two months of the resident initially receiving the survey are included in the calculation.
The numerator includes all AL residents who had an average response of 3 or greater on the CoreQ: AL Resident Satisfaction Questionnaire, who do not meet any of the denominator exclusions, and who are not missing responses for 2 or more questions.
The calculation of an individual patient’s average satisfaction score is done in the following manner:
• Respondents within the appropriate time window and who do not meet the exclusions (See: S.8) are identified.
• A numeric score is associated with each response scale option on the CoreQ: AL Resident Satisfaction Questionnaire (that is, Poor=1, Average=2, Good=3, Very Good=4, and Excellent=5).
• The following formula is utilized to calculate the individual’s average satisfaction score. [Numeric Score Question 1 + Numeric Score Question 2 + Numeric Score Question 3 + Numeric Score Question 4]/4
• The number of respondents whose average satisfaction score is greater than or equal to 3 are summed together and function as the numerator.
For residents with one missing data point (of the 4 items in the questionnaire), imputation is used: the missing value is replaced with the average of the three available responses. Residents with more than one missing data point are not counted in the measure (i.e., no imputation is used for these residents, since their responses are excluded).
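The per-resident scoring and imputation rules above can be sketched as follows. This is an illustrative sketch only; the function name and the use of None to mark a missing item are our own conventions, not part of the measure specification.

```python
# Illustrative sketch of the per-resident scoring and imputation rules;
# the function name and None-for-missing convention are assumptions.

def average_satisfaction_score(responses):
    """Average score across the four CoreQ items (Poor=1 ... Excellent=5).

    A single missing item (None) is imputed with the mean of the other
    three; surveys missing two or more items return None (unusable).
    """
    answered = [r for r in responses if r is not None]
    if len(responses) != 4 or len(answered) < 3:
        return None  # excluded: two or more items missing
    if len(answered) == 3:
        answered.append(sum(answered) / 3)  # impute the single missing item
    return sum(answered) / 4

# Excellent, Very Good, Very Good, Good -> (5 + 4 + 4 + 3) / 4 = 4.0
print(average_satisfaction_score([5, 4, 4, 3]))  # 4.0
```

A survey of [5, None, 4, 3] also scores 4.0, since the missing item is imputed with the mean (4.0) of the other three responses.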
-
1.15 Denominator
The denominator includes all residents who have been in the AL facility for two weeks or more, regardless of payer status, and who received the CoreQ: AL Resident Satisfaction Questionnaire.
1.15a Denominator Details
Residents have up to two months to complete and return the survey. Length of stay is identified from AL facility records.
1.15d Age Group: Older Adults (65 years and older)
-
1.15b Denominator Exclusions
Exclusions made at the time of sample selection are the following: (1) residents who have poor cognition (described below in 1.15c); (2) residents receiving hospice; (3) residents with a legal court-appointed guardian; and (4) residents who have lived in the AL facility for less than two weeks. Additionally, once the survey is administered, the following exclusions are applied: (a) surveys received outside of the time window (more than two months after the administration date); (b) surveys that have more than one questionnaire item missing; and (c) surveys from residents who indicate that someone else answered the questions for them. (Note: this does not include cases where the resident merely had help, such as reading the questions or writing down their responses.)
1.15c Denominator Exclusions Details
Individuals are excluded based on information from facility records.
(1) Residents who have poor cognition: The Brief Interview for Mental Status (BIMS), a well-validated dementia assessment tool, is used. BIMS score ranges are 0-7 (lowest), 8-12, and 13-15 (highest). Residents with BIMS scores of 7 or less are excluded, as are residents with a Mini-Mental State Exam (MMSE) score of 12 or lower. (Note: we understand that some AL communities may not have information on cognitive function. We suggest administering the survey to all AL residents and assuming that those with cognitive impairment will either not complete the survey or will have someone else complete it on their behalf; in either case they will be excluded from the analysis. The main impact of including all residents with any level of cognitive impairment is a drop in the response rate, which for smaller communities can result in their not having a reportable measure (see the response rate exclusion discussed later) (Saliba et al., 2012).)
(2) Residents receiving or having received any hospice. This is recorded in facility health information systems. This exclusion is consistent with other CMS CAHPS surveys.
(3) Residents with court appointed legal guardian for all decisions will be identified from facility health information systems.
(4) Residents who have lived in the AL facility for less than two weeks will be identified from facility health information systems.
(5) Residents who respond after the two-month response period.
(6) Residents whose responses were completed by someone other than the resident will be excluded; these are identified from an additional question on the CoreQ: AL Resident Satisfaction questionnaire. (We have developed a separate CoreQ: AL Family Satisfaction questionnaire for family members to respond to.)
(7) Residents without usable data (defined as missing data for 2 or more questions on the survey).
Saliba, D., Buchanan, J., Edelen, M.O., Streim, J., Ouslander, J., Berlowitz, D., & Chodosh, J. (2012). MDS 3.0: brief interview for mental status. Journal of the American Medical Directors Association, 13(7), 611-617.
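The sampling-stage exclusions above can be expressed as a simple eligibility filter. The sketch below is illustrative only; the record field names are assumptions, not a prescribed data model, and (per the note in 1.15c) a resident with no cognition data on file is surveyed rather than excluded.

```python
# Illustrative sketch of the sampling-stage exclusions (1)-(4);
# the dictionary field names are assumptions, not part of the measure.

def eligible_for_survey(resident):
    """Return True if a resident should receive the questionnaire."""
    cognitively_able = (resident.get("bims") is None   # no cognition data: survey anyway
                        or resident["bims"] > 7)       # exclusion (1): BIMS <= 7
    return (cognitively_able
            and not resident["on_hospice"]                 # exclusion (2)
            and not resident["court_appointed_guardian"]   # exclusion (3)
            and resident["length_of_stay_days"] >= 14)     # exclusion (4)

# A 30-day resident with BIMS 13, no hospice, no guardian is surveyed
print(eligible_for_survey({"bims": 13, "on_hospice": False,
                           "court_appointed_guardian": False,
                           "length_of_stay_days": 30}))  # True
```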
-
1.13a Data dictionary not attached: Yes
1.16 Type of Score
1.17 Measure Score Interpretation: Better quality = Higher score
1.18 Calculation of Measure Score
1. Identify the residents that have been residing in the AL facility for two weeks or more.
2. From the residents who have been residing in the AL facility for two weeks or more, exclude the following:
- Residents who have poor cognition.
- Residents receiving or having received any hospice care. This is recorded in facility health information systems.
- Residents with a court-appointed legal guardian for all decisions, identified from facility health information systems.
3. Administer the CoreQ: AL Resident Satisfaction questionnaire to these individuals. The questionnaire should be administered to all residents in the facility remaining after the exclusions in step 2 above. Communicate to residents that surveys received up to two months after administration will be included. Providers should use follow-up to increase response rates.
4. Create a tracking sheet with the following columns:
- Date Administered
- Date Response Received
- Time to Receive Response (Date Response Received − Date Administered)
5. Exclude any surveys received after 2 months from administration.
6. Exclude responses not completed by the intended recipient (e.g., questions answered by a friend or family member). (Note: this does not include cases where the resident merely had help, such as reading the questions or writing down their responses.)
7. Exclude responses that are missing data for 2 or more of the CoreQ questions.
8. All of the remaining surveys are totaled and become the denominator.
9. Combine the CoreQ: AL Resident Satisfaction questionnaire items to calculate a resident level score. Responses for each item should be given the following scores:
- Poor = 1,
- Average = 2,
- Good = 3,
- Very Good =4 and
- Excellent = 5.
10. Impute missing data if only one of the four questions is missing data.
11. Calculate resident score from usable surveys.
- Patient score= (Score for Item 1 + Score for Item 2 + Score for Item 3 + Score for Item 4) / 4.
- For example, a resident rates their satisfaction on the four CoreQ questions as excellent = 5, very good = 4, very good = 4, and good = 3. The resident’s total score will be 5 + 4 + 4 + 3, for a total of 16. The total score (16) is then divided by the number of questions (4), which equals 4.0. Thus, the resident’s average satisfaction rating is 4.0. Since the resident’s score is ≥3.0, this resident will be counted in the numerator.
- Flag those residents with a score equal to or greater than 3.0. These residents will be included in the numerator.
12. Calculate the CoreQ: AL Resident Satisfaction Measure which represents the percent of residents with average scores of 3.0 or above. CoreQ: AL Resident Satisfaction Measure= ([number of respondents with an average score of ≥3.0] / [total number of respondents])*100.
13. No risk-adjustment is used.
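Steps 8-12 above reduce to a short calculation. The sketch below is illustrative; the function name is our own, and it assumes each usable survey has already been reduced to a per-resident average score via steps 9-11:

```python
# Illustrative sketch of steps 8-12; assumes one average score per
# usable survey (the length of the list is the denominator, step 8).

def coreq_al_measure_score(resident_scores):
    """Percent of usable respondents whose average score is >= 3.0."""
    satisfied = sum(1 for score in resident_scores if score >= 3.0)  # numerator
    return 100.0 * satisfied / len(resident_scores)

# Four usable surveys, three scoring at or above 3.0 -> 75.0
print(coreq_al_measure_score([4.0, 2.5, 3.0, 3.75]))  # 75.0
```

No risk adjustment or stratification is applied, so the reported facility score is exactly this percentage.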
1.19 Measure Stratification DetailsNo stratification is used.
1.21b Attach Data Collection Tool(s)
1.21a Data Source URL(s) (if applicable)
1.22 Are proxy responses allowed? No
1.23 Survey Respondent
1.24 Data Collection and Response Rate
1. Administer the CoreQ: AL Resident Satisfaction questionnaire to AL residents who have resided in the AL facility for two weeks or more and who do not fall into one of the following exclusions:
- Residents who have poor cognition, as recorded in the facility health information system.
- Residents receiving or having received any hospice care, as recorded in the facility health information system.
- Residents with a court-appointed legal guardian for all decisions, identified from the facility health information system.
2. Administer the CoreQ: AL Resident Satisfaction questionnaire to residents.
3. Instruct residents that they must respond to the survey within 2 months.
4. The response rate is calculated based on the number of usable surveys returned divided by the number of surveys administered.
- As stated in S.14, surveys with missing responses for more than 1 question, surveys received outside of the time window (more than two months after the administration date), and surveys completed by someone other than the intended resident are excluded.
- A minimum response rate of 30% must be achieved for results to be reported for an AL facility.
5. Regardless of response rate, facilities must also achieve a minimum of 20 usable questionnaires (i.e., the denominator). If, after 2 months, fewer than 20 usable questionnaires are received, a facility-level satisfaction measure is not reported.
6. All questionnaires received must be used in the calculations, other than those with more than one missing value, those returned after 2 months, and those completed by a person other than the intended resident.
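The two reporting thresholds above (30% response rate and 20 usable questionnaires) can be checked together; this is an illustrative sketch with our own function name, not part of the specification:

```python
# Illustrative sketch of the reporting thresholds; names are assumptions.

def is_reportable(surveys_administered, usable_surveys):
    """A facility-level score is reported only if the response rate is
    at least 30% AND at least 20 usable questionnaires were returned."""
    response_rate = usable_surveys / surveys_administered
    return response_rate >= 0.30 and usable_surveys >= 20

print(is_reportable(60, 25))  # True: ~41.7% response rate, 25 usable
print(is_reportable(60, 15))  # False: only 15 usable questionnaires
```

Note that a large facility can fail on response rate alone: 20 usable surveys out of 100 administered meets the count threshold but not the 30% rate.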
Saliba, D., Buchanan, J., Edelen, M.O., Streim, J., Ouslander, J., Berlowitz, D, & Chodosh J. (2012). MDS 3.0: brief interview for mental status. Journal of the American Medical Directors Association, 13(7): 611-617.
1.26 Minimum Sample Size
A minimum sample size of 20 usable surveys and an overall response rate of 30% are needed for the measure.
-
7.1 Supplemental Attachment
-
Steward: American Health Care Association/National Center for Assisted Living
Steward Organization POC Email
Steward Organization URL
Steward Organization Copyright
None
Measure Developer Secondary Point Of Contact: Nicholas Castle
West Virginia University
P.O. Box 9190, 64 Medical Center Drive
Morgantown, WV 26506
United States
Measure Developer Secondary Point Of Contact Email
-
-
-
2.1 Attach Logic Model
2.2 Evidence of Measure Importance
Collecting satisfaction information from Assisted Living (AL) residents and family members is more important now than ever. We have seen a philosophical change in healthcare that now includes the patient and their preferences as an integral part of the system of care. The Institute of Medicine (IOM) endorses this change by putting the patient as central to the care system (IOM, 2001). For this philosophical change to person-centered care to succeed, we have to be able to measure patient satisfaction for these three reasons:
(1) Measuring satisfaction is necessary to understand patient preferences.
(2) Measuring and reporting satisfaction with care helps patients and their families choose and trust a health care facility.
(3) Satisfaction information can help facilities improve the quality of care they provide.
The implementation of person-centered care in long-term care has already begun, but there is still room for improvement. The Centers for Medicare and Medicaid Services (CMS) demonstrated interest in consumers’ perspective on quality of care by supporting the development of the Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey for patients in nursing facilities (Sangl et al., 2007). We have developed three skilled nursing facility (SNF) and two assisted living CoreQ measures, and all five are endorsed by a consensus-based entity (NQF at the time).
Further supporting person-centered care and resident satisfaction are ongoing organizational change initiatives. These include: the Center for Excellence in Assisted Living (CEAL) which has developed a measure of person-centeredness of assisted living with the University of North Carolina at Chapel Hill; the Advancing Excellence in America’s Nursing Homes campaign (2006), which lists person-centered care as one of its goals; Action Pact, Inc., which provides workshops and consultations with long-term care facilities on how to be more person-centered through their physical environment and organizational structure; and Eden Alternative, which uses education, consultation, and outreach to further person-centered care in long-term care facilities. All these initiatives have identified the measurement of resident satisfaction as an essential part in making, evaluating, and sustaining effective clinical and organizational changes that ultimately result in a person-centered philosophy of care.
The importance of measuring resident satisfaction as part of quality improvement cannot be stressed enough. Quality improvement initiatives, such as total quality management (TQM) and continuous quality improvement (CQI), emphasize meeting or exceeding “customer” expectations. William Deming, one of the first proponents of quality improvement, noted that “one of the five hallmarks of a quality organization is knowing your customer’s needs and expectations and working to meet or exceed them” (Deming, 1986). Measuring resident satisfaction can help organizations identify deficiencies that other quality metrics may struggle to identify, such as communication between a patient and the provider.
As part of the US Department of Commerce’s renowned Baldrige Criteria for organizational excellence, applicants are assessed on their ability to describe the links between their mission, key customers, and strategic position. Applicants are also required to show evidence of successful improvements resulting from their performance improvement system. An essential component of this process is the measurement of customer (i.e., resident) satisfaction (Shook & Chenoweth, 2012).
The CoreQ: AL Resident Satisfaction questionnaire and measure can strategically help AL facilities achieve organizational excellence and provide high-quality care by serving as a tool that targets a unique and growing patient population. Moreover, improving the care for AL residents is attainable. A review of the literature on satisfaction surveys in long-term care facilities (Castle, 2007) concluded that substantial improvements in resident satisfaction could be made in many facilities by improving care (i.e., changing either structural or process aspects of care). This was based on satisfaction scores averaging 60 to 80% (with 100% as the maximum score).
It is worth noting that few other generalizations can be made, because existing instruments used to collect satisfaction information are not standardized (except CoreQ). Thus, benchmarking scores and comparison scores (i.e., best in class) are difficult to establish. The CoreQ: AL Resident Satisfaction Measure therefore has considerable relevance: benchmark and comparison scores are available for CoreQ, drawn from tens of thousands of returned surveys.
We developed three skilled nursing facility (SNF) based CoreQ measures: the CoreQ: Long-Stay Family Satisfaction Measure, the CoreQ: Long-Stay Resident Satisfaction Measure, and the CoreQ: Short-Stay Discharge Measure. All three received NQF endorsement in 2016. The assisted living CoreQ Resident and Family Satisfaction Measures then received NQF endorsement in 2019. Together, these five satisfaction measures enable providers, researchers, and regulators to measure satisfaction across the long-term care continuum with valid and reliable instruments.
The measure’s relevance is furthered by recent federal legislative actions. The Affordable Care Act of 2010 requires the Secretary of Health and Human Services (HHS) to implement a Quality Assurance & Performance Improvement (QAPI) program within nursing facilities. This means all nursing facilities have increased accountability for continuous quality improvement efforts. CMS’s “QAPI at a Glance” document references customer-satisfaction surveys and organizations using them to identify opportunities for improvement. Some AL communities have implemented QAPI in their organizations.
Lastly, in CMS’s National Quality Strategy (2024), one of the four key areas is advancing equity and engagement for all individuals. Specifically, CMS calls out expanding the use of person-reported outcome and experience measures as a key action. Similarly, in the most recent SNF payment rule (CMS, August 2024), CMS acknowledges an opportunity to add patient experience or satisfaction measures to the Quality Reporting Program (QRP), which spans post-acute and long-term care providers and was created by the IMPACT Act of 2014. While CMS does not provide direct oversight of assisted living, more states are covering assisted living as part of home- and community-based Medicaid waivers. As of 2020, 44% of assisted living communities were Medicaid certified (CDC, 2020). Thus, the principles of CMS’s Quality Strategy apply, and the CoreQ: AL resident measure can further CMS’s quality efforts.
Castle, N.G. (2007). A literature review of satisfaction instruments used in long-term care settings. Journal of Aging and Social Policy, 19(2), 9-42.
CDC (2020). National Post-Acute and Long-Term Care Study. https://www.cdc.gov/nchs/npals/webtables/overview.htm
CMS (2009). Skilled Nursing Facilities Non Swing Bed - Medicare National Summary. http://www.cms.hhs.gov/MedicareFeeforSvcPartsAB/Downloads/NationalSum2007.pdf
CMS, University of Minnesota, and Stratis Health. QAPI at a Glance: A step by step guide to implementing quality assurance and performance improvement (QAPI) in your nursing home. https://www.cms.gov/Medicare/Provider-Enrollment-and-Certification/QAPI/Downloads/QAPIAtaGlance.pdf.
CMS (April 2024). Quality in Motion: Acting on CMS National Quality Strategy. https://www.cms.gov/files/document/quality-motion-cms-national-quality-strategy.pdf
CMS (August 6, 2024). Medicare Program; Prospective Payment System and Consolidated Billing for Skilled Nursing Facilities; Updates to the Quality Reporting Program and Value-Based Purchasing Program for Federal Fiscal Year 2025. https://www.federalregister.gov/d/2024-16907/p-588
Deming, W.E. (1986). Out of the crisis. Cambridge, MA. Massachusetts Institute of Technology, Center for Advanced Engineering Study.
Institute of Medicine (2001). Improving the Quality of Long-Term Care. Washington, D.C.: National Academy Press.
MedPAC. (2015). Report to the Congress: Medicare Payment Policy. http://www.medpac.gov/documents/reports/mar2015_entirereport_revised.pdf?sfvrsn=0.
Sangl, J., Bernard, S., Buchanan, J., Keller, S., Mitchell, N., Castle, N.G., Cosenza, C., Brown, J., Sekscenski, E., and Larwood, D. (2007). The development of a CAHPS instrument for nursing home residents. Journal of Aging and Social Policy, 19(2), 63-82.
Shook, J., & Chenoweth, J. (2012, October). 100 Top Hospitals CEO Insights: Adoption Rates of Select Baldrige Award Practices and Processes. Truven Health Analytics. http://www.nist.gov/baldrige/upload/100-Top-Hosp-CEO-Insights-RB-final.pdf.
-
2.6 Meaningfulness to Target Population
The consumer movement has fostered the notion that patient evaluations should be an integral component of health care. Patient satisfaction, which is one form of patient evaluation, became an essential outcome of health care widely advocated for use by researchers and policy makers. Managed care organizations, accreditation and certification agencies, and advocates of quality improvement initiatives, among others, now promote the use of satisfaction surveys. For example, satisfaction information is included in the Health Plan Employer Data Information Set (HEDIS), which is used as a report card for managed care organizations (NCQA, 2016).
Measuring and improving patient satisfaction is valuable to patients, because it is a way forward on improving the patient-provider relationship, which influences health care outcomes. A 2014 systematic review and meta-analysis of randomized controlled trials, in which the patient-provider relationship was systematically manipulated and tracked with health care outcomes, found a small but statistically significant positive effect of the patient-provider relationship on health care outcomes (Kelly et al., 2014). This finding aligns with other studies that show a link between patient satisfaction and the following health-related behaviors:
1. Keeping follow-up appointments (Hall, Milburn, Roter, & Daltroy, 1998);
2. Disenrollment from health plans (Allen & Rogers, 1997); and,
3. Litigation against providers (Penchansky & Macnee, 1994).
AL facilities are not excluded from the positive effects of person-centered care and patient satisfaction. A 2013 systematic review of studies on the effects of person-centered initiatives in long-term care facilities, such as the Eden Alternative, found person-centered care to be associated with psychosocial benefits to residents and staff, notwithstanding variations and limitations in study designs (Brownie & Nancarrow, 2013).
From the AL facility and provider perspective, there are numerous ways to improve patient satisfaction. One study found conversations regarding end-of-life care options with family members improve overall satisfaction with care and increase use of advance directives (Reinhardt et al., 2014). Another found an association between improving symptom management of long-term care residents with dementia and higher satisfaction with care (Van Uden et al., 2013). Improvements in a long-term care food delivery system also were associated with higher overall satisfaction and improved resident health (Crogan et al., 2013). The advantage of the CoreQ: AL Resident Satisfaction questionnaire is it is broad enough to capture dissatisfaction on various provided services and signal to providers to drill down and discover ways of improving the patient experience at their facility.
Specific to the CoreQ: AL questionnaire, the importance of the satisfaction areas assessed was examined with focus groups of residents and family members. The respondents were residents (N=40) in five AL facilities in the Pittsburgh region. The ranking scale used was 10 = most important and 1 = least important. That the final three questions included in the measure had average scores ranging from 9.50 to 9.69 clearly shows that the respondents value the items used in the CoreQ: AL measure.
Allen, H.M., & Rogers, W.H. (1997). The Consumer Health Plan Value Survey: Round Two. Health Affairs, 16(4), 156-166.
Brownie, S. & Nancarrow, S. (2013). Effects of person-centered care on residents and staff in aged-care facilities: a systematic review. Clinical Interventions In Aging. 8:1-10.
Crogan, N.L., Dupler, A.E., Short, R., & Heaton, G. (2013). Food choice can improve nursing home resident meal service satisfaction and nutritional status. Journal of Gerontological Nursing. 39(5):38-45.
Hall J, Milburn M, Roter D, Daltroy L (1998). Why are sicker patients less satisfied with their medical care? Tests of two explanatory models. Health Psychol. 17(1):70–75
Kelley J.M., Kraft-Todd G, Schapira L, Kossowsky J, & Riess H. (2014). The influence of the patient-clinician relationship on healthcare outcomes: a systematic review and meta analysis of randomized controlled trials. PLoS One. 9(4): e94207.
Li, Y., Cai, X., Ye, Z., Glance, L.G., Harrington, C., & Mukamel, D.B. (2013). Satisfaction with Massachusetts nursing home care was generally high during 2005-09, with some variability across facilities. Health Affairs. 32(8):1416-25.
Lin, J., Hsiao, C.T., Glen, R., Pai, J.Y., & Zeng, S.H. (2014). Perceived service quality, perceived value, overall satisfaction and happiness of outlook for long-term care institution residents. Health Expectations. 17(3):311-20.
National Committee for Quality Assurance (NCQA) (2016). HEDIS Measures. http://www.ncqa.org/HEDISQualityMeasurement/HEDISMeasures.aspx. Accessed March 2016.
Penchansky and Macnee, (1994). Initiation of medical malpractice suits: a conceptualization and test. Medical Care. 32(8): pp. 813–831
Reinhardt, J.P., Chichin, E., Posner, L., & Kassabian, S. (2014). Vital conversations with family in the nursing home: preparation for end-stage dementia care. Journal Of Social Work In End-Of-Life & Palliative Care. 10(2):112-26.
Van Uden, N., Van den Block, L., van der Steen, J.T., Onwuteaka-Philipsen, B.D., Vandervoort, A., Vander Stichele, R., & Deliens, L. (2013). Quality of dying of nursing home residents with dementia as judged by relatives. International Psychogeriatrics. 25(10):1697-707.
-
2.4 Performance Gap
The data were collected in 2023 and 2024 from 511 participating facilities across the US, which returned 17,482 surveys. Participation was voluntary. The scores and facilities used for the data below were all calculated after the previously described resident exclusions were applied. In addition, scores were used only from facilities with 20 or more responses and a response rate of 30% or more.
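The facility inclusion rules above (at least 20 responses and a response rate of at least 30%) can be sketched as a simple filter. The dictionary fields and example data below are illustrative assumptions, not the actual submission format:

```python
# Illustrative filter for facility inclusion (field names are assumptions).
def eligible(facility):
    """A facility's score is reported only with >= 20 responses
    and a response rate of >= 30%."""
    responses = facility["responses"]
    surveyed = facility["surveyed"]
    rate = responses / surveyed if surveyed else 0.0
    return responses >= 20 and rate >= 0.30

facilities = [
    {"id": "A", "responses": 25, "surveyed": 60},   # 41.7% response rate -> included
    {"id": "B", "responses": 19, "surveyed": 40},   # too few responses -> excluded
    {"id": "C", "responses": 30, "surveyed": 120},  # 25% response rate -> excluded
]
included = [f["id"] for f in facilities if eligible(f)]
print(included)  # ['A']
```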
Table 1. Performance Scores by Decile

Group       Mean Performance Score   N of Entities   N of Persons / Encounters / Episodes
Overall     79                       511             17,482
Minimum     20                       2               51
Decile 1    55                       52              2,058
Decile 2    65                       59              2,071
Decile 3    75                       55              1,812
Decile 4    80                       118             2,287
Decile 5    84                       39              1,292
Decile 6    85                       46              1,715
Decile 7    90                       80              2,622
Decile 8    95                       43              1,340
Decile 9    99                       38              1,164
Decile 10   100                      35              1,121
Maximum     100                      35              1,121
-
-
-
3.1 Feasibility Assessment
All of the data elements used in data collection are part of normal facility operations. As part of the data collection for this maintenance application, instructions were sent to AL communities detailing the process of collecting the CoreQ surveys from residents. With the exception of cognitive status, all facilities had the needed information readily available.
In the data collected from the 511 recently participating facilities, missing data were rare. Of the 17,482 surveys received, imputation for one of the four question responses was used in 391 cases (i.e., 2.2%). In addition, surveys not used (i.e., those with two or more missing responses) accounted for 1.8% of returns (N=322).
Facilities have no data entry burden; they do, however, have a data collection burden. In work we have done with CMS for a different CoreQ survey (the NH discharge survey), the cost burden for the facility was calculated to be $2.80 per respondent. That calculation was based on requiring more than 20 data elements, whereas only four are needed here, so the cost will likely be less than $2.80 per respondent.
No barriers were encountered with the measure specifications. The measure calculation was sometimes confused with an average score; the CoreQ measure is not an average. This is explained in the reports produced and in the technical manual.
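As a hedged illustration of the distinction above, the sketch below contrasts a simple mean of item responses with a percent-satisfied calculation. The satisfaction rule used here (a respondent counts as satisfied when their average across the four items reaches an assumed threshold of 3.0 on a 1–5 scale) is an assumption for illustration only; the scoring rules in the technical manual are authoritative.

```python
# Contrast an average score with a percent-satisfied score.
# ASSUMPTION: a respondent is "satisfied" if their mean across the
# four items is >= 3.0 on the 1-5 scale; the technical manual's
# official scoring rules govern in practice.
responses = [
    [5, 5, 4, 5],  # mean 4.75 -> satisfied
    [2, 2, 3, 2],  # mean 2.25 -> not satisfied
    [3, 3, 3, 3],  # mean 3.00 -> satisfied
]
item_mean = sum(sum(r) for r in responses) / (len(responses) * 4)
satisfied = sum(1 for r in responses if sum(r) / len(r) >= 3.0)
pct_satisfied = 100.0 * satisfied / len(responses)
print(round(item_mean, 2))      # 3.33 -- a plain average
print(round(pct_satisfied, 1))  # 66.7 -- the measure-style percent satisfied
```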
All of the patient surveys are anonymous. In addition, scores are only calculated with 20 or more survey returns. Thus, patient confidentiality is protected.
No negative consequences to individuals or populations were identified during testing, and no unintended negative consequences have been reported since implementation of the CoreQ: AL Resident Satisfaction questionnaire or the measure calculated from it. This is consistent with satisfaction surveys in nursing facilities generally. Many other satisfaction surveys are used in AL facilities with no reported unintended consequences to patients or their families.
There are no potentially serious physical, psychological, social, legal, or other risks for patients. However, in some cases the satisfaction questionnaire can highlight poor care for some dissatisfied patients, and this may make them further dissatisfied.
3.3 Feasibility Informed Final Measure
This is a maintenance application. As detailed above, we have continued to collect CoreQ data to examine any changes in scores and implementation issues. No adjustment to the measure has occurred.
-
3.4a Fees, Licensing, or Other Requirements
N/A
3.4 Proprietary Information
Proprietary measure or components (e.g., risk model, codes), without fees
-
-
-
4.1.3 Characteristics of Measured Entities
The analysis included five measured entities, all of which were assisted living communities. Reliability and validity testing of the Pilot CoreQ: AL Resident Satisfaction questionnaire used responses from 411 residents from a national sample of facilities. Validity testing of the Pilot questionnaire also used responses from 100 residents from the Pittsburgh area. The CoreQ: AL Resident Satisfaction measure was examined using 321 facilities, located across multiple states, with responses from 12,553 residents. Resident-level sociodemographic (SDS) variables were examined using a sample of 3,000 residents from 205 facilities in a national sample of AL facilities. In addition, the CoreQ: AL Resident Satisfaction measure was examined along with other outcome measures using a national sample of 483 facilities (with 29,799 residents).
4.1.1 Data Used for Testing
This is a maintenance application. The data used for NQF approval were collected in 2018, and the reliability, validity, and exclusions were reported at that time. As detailed above, we have continued to collect CoreQ data to examine any changes in scores and implementation issues; these data were collected in 2023 and 2024.
The 2018 testing and analysis included four data sources (Table A below):
- Reliability and validity testing of the Pilot CoreQ: AL Resident Satisfaction questionnaire was examined using responses from 411 residents from a national sample of facilities.
- Validity testing of the Pilot CoreQ: AL Resident Satisfaction questionnaire was examined using responses from 100 residents from the Pittsburgh area.
- CoreQ: AL Resident Satisfaction measure was examined using 321 facilities and included responses from 12,553 residents. These facilities were located across multiple states.
- Resident-level sociodemographic (SDS) variables were examined using a sample of 3000 residents from a national sample of AL facilities. This included 205 facilities.
- In addition, the CoreQ: AL Resident Satisfaction measure was examined along with other outcome measures using a national sample of 483 facilities (with 29,799 residents).
More information is located in Table A: Information on Data Sources Utilized in Analyses in the 7.1 Supplement.
4.1.4 Characteristics of Units of the Eligible Population
The descriptive characteristics of the residents are given in the following table, which includes information from all the data used (the education level and race information is derived from the sample of 3,000 respondents described above, as these data were not collected for the other samples).
More information is located in the attachment in 7.1 Supplement.
4.1.2 Differences in Data
This is a maintenance application. The data used for NQF approval were collected in 2018, and the reliability, validity, and exclusions were reported at that time. As detailed below, several different sources of data were used for reliability and validity testing.
Resident Level of Analysis
Data were used from the CoreQ: AL Resident Satisfaction questionnaire, which was administered to all residents (with the exclusions described in the Specification section). The testing and analysis included:
- The Pilot CoreQ: AL Resident Satisfaction questionnaire was examined using responses from 411 residents from a national sample of facilities.
- Validity testing of the Pilot CoreQ: AL Resident Satisfaction questionnaire was examined using responses from 100 residents from the Pittsburgh area.
- CoreQ: AL Resident Satisfaction measure was examined using 321 facilities and included responses from 12,553 residents. These facilities were located across multiple states.
- In addition, resident-level sociodemographic (SDS) variables were examined using a sample of 3000 residents from a national sample of AL facilities. This included 205 facilities.
[Note: Data source #5 above was used for facility level analyses, and is not included in the resident level of analysis]
The descriptive characteristics of the residents are given in the following table that includes information from all the data used (the education level and race information comes only from the sample described above with 3000 respondents, as this data was not collected for the other samples).
More information is located in Table B: Descriptive Characteristics of Residents Included in the Analysis (all samples pooled), which is attached in the 7.1 Supplement.
-
4.2.1 Level(s) of Reliability Testing Conducted
4.2.2 Method(s) of Reliability Testing
We measured reliability at three levels: (1) the data element level; (2) the person/questionnaire level; and (3) the measure (i.e., facility) level. More detail on each analysis follows.
(1) Data Element Level. To determine whether the CoreQ: AL Resident Satisfaction questionnaire data elements were repeatable (i.e., producing the same results a high proportion of the time when assessed in the same population in the same time period), we re-administered the questionnaire to residents one month after the submission of their first survey. The Pilot CoreQ: AL Resident Satisfaction questionnaire had responses from 100 residents; we re-administered the survey to all 100 residents (98 answered the repeat survey). The re-administered sample was a convenience sample, as these residents were from the Pittsburgh area (the location of the team testing the questionnaire). To measure agreement, we first calculated the distribution of responses by question in the original round of surveys and again in the follow-up surveys (they should be distributed similarly); second, we calculated the correlations between the original and follow-up responses by question (they should be highly correlated).
(2) Person/Questionnaire Level. Having tested whether the data elements matched between the pilot responses and the re-administered responses, we then examined whether the person-level results matched between the Pilot CoreQ: AL Resident Satisfaction questionnaire responses and their corresponding re-administered responses. In particular, we calculated the percent of time that there was agreement between whether the pilot response was poor, average, good, very good, or excellent and whether the re-administered response was poor, average, good, very good, or excellent.
(3) Measure (Facility) Level. We measured the stability of the facility-level measure when the facility's score is calculated using multiple "draws" from the same population. This measures how stable the facility's score would be if the underlying residents come from the same population but are subject to the kind of natural sample variation that occurs over time. We did this with a bootstrap of 10,000 repetitions of the facility score calculation, and we present the percent of facility resamples in which the facility score is within 1, 3, 5, and 10 percentage points of the original score calculated on the Pilot CoreQ: AL Resident Satisfaction questionnaire sample. We also conducted a two-level signal-to-noise analysis, which identifies two sources of variability: that between ratees (facilities) and that for each ratee (respondents). No imputed values were used in the analysis, and only AL facilities with 20 or more responses were included.
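The bootstrap procedure described above can be sketched as follows. The satisfied/not-satisfied flags and the percent-satisfied scoring function are simplifying assumptions for illustration, not the official scoring algorithm:

```python
import random

def facility_score(sample):
    """Percent of sampled residents counted as satisfied (0/1 flags)."""
    return 100.0 * sum(sample) / len(sample)

def bootstrap_stability(flags, reps=10_000, seed=0):
    """Resample residents with replacement and report the share of
    resampled facility scores within 1, 3, 5, and 10 percentage
    points of the original score."""
    rng = random.Random(seed)
    original = facility_score(flags)
    n = len(flags)
    within = {1: 0, 3: 0, 5: 0, 10: 0}
    for _ in range(reps):
        draw = [rng.choice(flags) for _ in range(n)]
        diff = abs(facility_score(draw) - original)
        for band in within:
            if diff <= band:
                within[band] += 1
    return {band: count / reps for band, count in within.items()}

# Hypothetical facility: 40 respondents, 85% satisfied.
flags = [1] * 34 + [0] * 6
print(bootstrap_stability(flags, reps=1000))
```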
4.2.3 Reliability Testing Results
Data Element Level. Table 2a2.3.a shows the four CoreQ: AL Resident Satisfaction questionnaire items and the responses per item for both the pilot survey of 100 residents and the re-administered survey of 98 residents. The responses in the pilot survey are not statistically significantly different from those in the re-administered survey. This shows that the data elements were highly repeatable and produced the same results a high proportion of the time when assessing the same population in the same time period.
Table 2a2.3.b shows the average of the percent agreement from the first survey score to the second survey score for each item in the CoreQ: AL Resident Satisfaction questionnaire. This shows very high levels of agreement.
Person/Questionnaire Level. Having tested whether the data elements matched between the pilot responses and the re-administered responses, we then examined whether the person-level results matched between the Pilot CoreQ: AL Resident Satisfaction questionnaire responses and their corresponding re-administered responses. In particular, we calculated the percent of time that there was agreement between whether the pilot response was poor, average, good, very good, or excellent and whether the re-administered response was poor, average, good, very good, or excellent. Table 2a2.3.c shows the CoreQ: AL Resident Satisfaction questionnaire items and the agreement in responses per item for the pilot survey of 100 residents compared with the re-administered survey of 98 residents. The person-level responses in the pilot survey are not statistically significantly different from those in the re-administered survey, showing agreement between the two administrations a high percent of the time.
Measure (Facility) Level. After performing the 10,000-repetition bootstrap, 21% of bootstrap repetition scores were within 1 percentage point of the score under the original pilot sample, 33% were within 3 percentage points, 65% were within 5 percentage points, and 95% were within 10 percentage points. For the two-level signal-to-noise analysis for AL residents, the mean reliability was R=0.84, indicating that 84% of the variance in facility scores reflects true differences between facilities, with the remaining 16% due to noise and differences among respondents. This result exceeds 0.8, which is generally considered a good reliability coefficient (Campbell et al., 2010).
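The two-level signal-to-noise reliability reported above has the standard form R = var_between / (var_between + var_within / n): the share of total variance attributable to true between-facility differences, with the respondent-level noise term shrinking as the number of responses per facility grows. A minimal sketch (the variance components below are illustrative numbers, not values from this testing):

```python
def snr_reliability(var_between, var_within, n_respondents):
    """Two-level signal-to-noise reliability: true between-facility
    variance over total observed variance, where within-facility
    (respondent) variance is averaged over n respondents."""
    noise = var_within / n_respondents
    return var_between / (var_between + noise)

# Illustrative (not measured) variance components yielding R = 0.84:
print(round(snr_reliability(var_between=52.5, var_within=400.0, n_respondents=40), 2))  # 0.84
```

Note that with these illustrative components, a facility with fewer respondents would have a lower reliability, which is why a minimum of 20 responses is required.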
In summary, the measure displays a high degree of element-level, questionnaire-level, and measure (facility)-level reliability. First, the CoreQ: AL Resident Satisfaction questionnaire data elements were highly repeatable, with pilot and re-administered responses agreeing between 95% and 100% of the time, depending on the question; that is, the questionnaire produced the same results a high proportion of the time when assessed in the same population in the same time period. Second, the questionnaire-level scores were also highly repeatable, with pilot and re-administered responses agreeing 98% of the time. Third, scores for a facility drawing residents from the same underlying population varied only modestly: the 10,000-repetition bootstrap results showed that CoreQ: AL Resident Satisfaction measure scores from the same facility are very stable.
4.2.3a Table 2
This information cannot be provided because this was not conducted in the initial testing.
Campbell, J.A., Narayanan, A., Burford, B., & Greco, M.J. (2010). Validation of a multi-source feedback tool for use in general practice. Education for Primary Care, 21, 165–179.
4.2.3a Attach Additional Reliability Testing Results
Table 2. Accountable Entity-Level Reliability Testing Results by Denominator-Target Population Size
Overall reliability: 0.84. N of Persons / Encounters / Episodes: 411. Decile-level reliability, mean performance scores, and entity counts were not calculated for this testing.
4.2.4 Interpretation of Reliability Results
In summary, the measure displays a high degree of element-level, questionnaire-level, and measure (facility)-level reliability. First, the CoreQ: AL Resident Satisfaction questionnaire data elements were highly repeatable, with pilot and re-administered responses agreeing between 95% and 100% of the time, depending on the question; that is, the questionnaire produced the same results a high proportion of the time when assessed in the same population in the same time period. Second, the questionnaire-level scores were also highly repeatable, with pilot and re-administered responses agreeing 98% of the time. Third, scores for a facility drawing residents from the same underlying population varied only modestly: the 10,000-repetition bootstrap results showed that CoreQ: AL Resident Satisfaction measure scores from the same facility are very stable.
-
4.3.1 Level(s) of Validity Testing Conducted4.3.2 Type of accountable entity-level validity testing conducted4.3.3 Method(s) of Validity Testing
In the development of the CoreQ: AL Resident Satisfaction questionnaire, four sources of data were used to perform three levels of validity testing, each described further below. The first source of data (a convenience sample) was used in developing and choosing the format used in the CoreQ: AL Resident Satisfaction questionnaire (i.e., the response scale). The second source was pilot data collected from 411 residents (described below); these data were used in choosing the items for the CoreQ: AL Resident Satisfaction questionnaire. The third source (collected from 321 facilities; n=12,553) was used to examine the validity of the CoreQ: AL Resident Satisfaction measure (i.e., facility and summary score validity). An additional source of data (collected from 483 facilities, described in Section 1.5) was used to examine the correlations between the CoreQ: AL Resident Satisfaction measure scores and other quality metrics from the facilities.
Thus, the following sections describe this validity testing:
1. Validity testing of the questionnaire format used in the CoreQ: AL Resident Satisfaction Questionnaire;
2. Testing the items for the CoreQ: AL Resident Satisfaction Questionnaire;
3. Determining whether a sub-set of items could reliably be used to produce an overall indicator of satisfaction (the CoreQ: AL Resident Satisfaction measure);
4. Validity testing for the CoreQ: AL Resident Satisfaction measure.
In summary, the overall intent of these analyses was to determine if a subset of items could reliably be used to produce an overall indicator of satisfaction for AL residents.
1. Validity Testing for the Questionnaire Format used in the CoreQ: AL Resident Satisfaction Questionnaire
A. The face validity of the domains used in the CoreQ: AL Resident Satisfaction questionnaire was evaluated via a literature review. The literature review was conducted to examine important areas of satisfaction for long-term care residents. The research team examined 12 commonly used satisfaction surveys and reports to determine the most valued satisfaction domains. These surveys were identified by completing internet searches in PubMed and Google. Key terms that were searched included “resident satisfaction, long-term care satisfaction, assisted living satisfaction, and elderly satisfaction”.
B. The face validity of the domains was also examined with residents. The respondents were residents (N=40) in five AL facilities in the Pittsburgh region, who ranked the domains from 1=Most important to 22=Least important.
C. The face validity of the Pilot CoreQ: AL Resident Satisfaction questionnaire response scale was also examined. The respondents were residents (N=40) in five AL facilities in the Pittsburgh region. We recorded the percent of respondents who stated they "fully understood" how the response scale worked, who could complete the scale, and who demonstrated understanding of the scale in cognitive testing.
D. The Flesch-Kincaid readability scale (Streiner & Norman, 1995) was used to determine whether respondents could correctly understand the questions being asked.
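The Flesch-Kincaid grade-level formula is 0.39 x (words per sentence) + 11.8 x (syllables per word) - 15.59. A rough sketch is below; the syllable counter is a crude vowel-group heuristic of our own (an assumption), so results will differ slightly from published readability tools.

```python
import re

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59.
    The syllable counter is a rough vowel-group heuristic, used
    here only for illustration."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word):
        groups = re.findall(r"[aeiouy]+", word.lower())
        count = len(groups)
        if word.lower().endswith("e") and count > 1:
            count -= 1  # drop a typically silent trailing "e"
        return max(1, count)

    total_syllables = sum(syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (total_syllables / len(words)) - 15.59

# Short, plain sentences score at low grade levels:
print(round(flesch_kincaid_grade("The cat sat on the mat."), 1))
```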
2. Testing the Items for the CoreQ: AL Resident Satisfaction Questionnaire
The analyses above were performed to provide validity information on the format of the CoreQ: AL Resident Satisfaction questionnaire (i.e., domains and format). The second series of validity testing was used to identify the items that should be included in the CoreQ: AL Resident Satisfaction questionnaire. This analysis was important, as all items in a satisfaction measure should have adequate psychometric properties (such as low floor or ceiling effects). For this testing, a pilot version of the CoreQ: AL Resident Satisfaction questionnaire consisting of 20 items was administered (N=411 residents). The testing consisted of:
A. The Pilot CoreQ: AL Resident Satisfaction questionnaire items' performance with respect to the distribution of the response scale and with respect to missing responses.
B. The intent of the pilot instrument was to have items that represented the most important areas of satisfaction (as identified above) and to be parsimonious. Additional analyses were used to eliminate items in the pilot instrument. More specifically, analyses such as exploratory factor analysis (EFA) were used to further refine the pilot instrument. This was an iterative process that included using Eigenvalues from the principal factors (unrotated) and correlation analysis of the individual items.
3. Determine if a Sub-Set of Items Could Reliably be used to Produce an Overall Indicator of Satisfaction (The CoreQ: AL Resident Satisfaction measure).
The CoreQ: AL Resident Satisfaction questionnaire is meant to represent overall satisfaction with as few items as possible. The testing given below describes how this was achieved.
A. To support the construct validity (i.e. that the CoreQ items measured a single concept of “satisfaction”) we performed a correlation analysis using all items in the instrument.
B. In addition, a factor analysis was conducted using all items in the instrument. Using the global item Q1 ("How satisfied are you with the facility?"), the Cronbach's alpha from adding the "best" additional item was explored.
4. Validity Testing for the Core Q: AL Resident Measure.
The overall intent of the analyses described above was to identify if a sub-set of items could reliably be used to produce an overall indicator of satisfaction, the CoreQ: AL Resident Satisfaction questionnaire. Further testing was conducted to determine if the 4 items in the CoreQ: AL Resident Satisfaction questionnaire were a reliable indicator of satisfaction.
A. To determine if the 4 items in the CoreQ: AL Resident Satisfaction questionnaire were a reliable indicator of satisfaction, the correlation between these four items in the CoreQ: AL Resident Satisfaction Measure and all of the items on the pilot CoreQ instrument was conducted.
B. We performed additional validity testing of the facility-level CoreQ: AL Resident measure by measuring the correlations between the CoreQ: AL Resident Satisfaction measure scores and other quality metrics from the facilities. If the CoreQ AL Resident scores correlate negatively with the measures that decrease as they get better, and positively with the measures that increase as they get better, then this supports the validity of the CoreQ AL Resident measure.
Secondary data from AL settings are rare. As part of our validity testing, staff stability and turnover information was collected; these had a high correlation (>0.4) with the CoreQ score.
Reference: Streiner, D.L., & Norman, G.R. (1995). Health measurement scales: A practical guide to their development and use (2nd ed.). New York: Oxford University Press.
4.3.4 Validity Testing Results
Validity Testing for the Questionnaire Format used in the CoreQ: AL Resident Satisfaction Questionnaire
A. The face validity of the domains used in the CoreQ: AL Resident Satisfaction questionnaire was evaluated via a literature review (described in 2b2.2). Specifically, the research team examined the surveys and reports to identify the different domains that were included, scoring each domain by counting how many instruments included it. Table 2b1.3.a gives the domains found throughout the search and their respective scores. For example, the domain of food was used in 11 of the 12 surveys; an interpretation of this finding is that items addressing food are extremely important in satisfaction surveys in AL. These domains were used in developing the pilot CoreQ: AL Resident Satisfaction questionnaire items.
B. The face validity of the domains was also examined using residents (described above). The abbreviated table (Table 2b1.3.b) shows the rank of importance for each group of domains, using a ranking of 1=Most important to 22=Least important. The rankings of the four areas used in the CoreQ: AL Resident Satisfaction questionnaire are shown in Table 2b1.3.b.
C. The face validity of the pilot CoreQ: AL Resident Satisfaction questionnaire response scale was also examined (described above). Table 2b1.3.c gives the percent of respondents who stated they fully understood how the response scale worked, could complete the scale, and in cognitive testing understood the scale.
D. The CoreQ: AL Resident Satisfaction questionnaire was purposefully written using simple language. No a priori goal for reading level was set; however, a Flesch-Kincaid grade score of six or lower was achieved for all questions.
Testing the Items for the CoreQ: AL Resident Satisfaction Questionnaire
A. The pilot CoreQ: AL Resident Satisfaction questionnaire items all performed well with respect to the distribution of the response scale and with respect to missing responses.
B. Using all items in the instrument (excluding the global item Q1, "How would you rate the facility?"), exploratory factor analysis (EFA) was used to evaluate the construct validity of the measure. The Eigenvalues for principal factors 1 and 2 (unrotated) were 10.93 and 0.710, respectively. Sensitivity analyses using principal factors with rotation provided highly similar findings.
Determine if a Sub-Set of Items could Reliably be used to Produce an Overall Indicator of Satisfaction (The Core Q: AL Resident Measure).
A. To support construct validity (i.e., the idea that the CoreQ items measured a single concept of "satisfaction"), we performed a correlation analysis using all items in the instrument. The analysis identifies the pairs of CoreQ items with the highest correlations, shown in Table 2b1.3.d. Items with the highest correlations are potentially providing similar satisfaction information and could therefore be eliminated from the instrument. Note that while the table provides seven sets of correlations, the analysis examined all possible correlations between items.
B. In addition, a factor analysis was conducted using all items in the instrument. Using the global item Q1 ("How satisfied are you with the facility?"), the Cronbach's alpha from adding the "best" additional item is shown in Table 2b1.3.e. Cronbach's alpha measures the internal consistency of the values entered into the factor analysis, where a value of 0.7 or higher is generally considered acceptably high. The additional item is considered "best" in the sense that it is most highly correlated with the existing items, and therefore provides little additional information about the same construct; this analysis was thus also used to eliminate items. Note that while the table provides a limited set of correlations, the analysis examined all possible correlations between items.
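Cronbach's alpha as described above can be computed directly from item-response data: alpha = k/(k-1) x (1 - sum of item variances / variance of total scores). A minimal sketch with hypothetical responses (not CoreQ data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-response columns:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(totals)),
    using sample variances (n-1 denominator)."""
    k = len(items)
    n = len(items[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col[i] for col in items) for i in range(n)]
    item_var = sum(variance(col) for col in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical responses: one row per item, one column per respondent.
q1 = [5, 4, 3, 5, 2, 4]
q2 = [5, 4, 3, 4, 2, 5]
q3 = [4, 4, 2, 5, 3, 4]
alpha = cronbach_alpha([q1, q2, q3])
print(round(alpha, 2))  # high internal consistency for these correlated items
```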
Thus, using the correlation information and factor analysis 4 items representing the CoreQ: AL Resident Satisfaction questionnaire were identified.
Validity testing for the Core Q: AL Resident Measure
The overall intent of the analyses described above was to identify if a sub-set of items could reliably be used to produce an overall indicator of satisfaction, the CoreQ: AL Resident Satisfaction Questionnaire.
A. The items were all scored according to the rules identified elsewhere. The same scoring was used in creating the four-item CoreQ: AL Resident Satisfaction questionnaire summary score and the satisfaction score using the Pilot CoreQ: AL Resident Satisfaction questionnaire. The correlation between the two was 0.94. That is, the correlation between the final CoreQ: AL Resident Satisfaction measure and all 20 items used in the pilot instrument indicates that the satisfaction information is approximately the same whether the 4-item instrument or the 20-item pilot instrument is used.
B. We performed additional validity testing of the facility-level CoreQ: AL Resident Satisfaction measure by measuring the correlations between the CoreQ: AL Resident Satisfaction measure scores and several other quality metrics from facilities (see Table 2b1.3.f). We hypothesized that for each facility in the sample there would be a positive correlation with the other quality indicators.
4.3.4a Attach Additional Validity Testing Results
4.3.5 Interpretation of Validity Results
1. Validity Testing for the Questionnaire Format used in the CoreQ: AL Resident Satisfaction Questionnaire
A. The literature review shows that domains used in the Pilot CoreQ: AL Resident Satisfaction questionnaire items have a high degree of both face validity and content validity.
B. Residents' overall rankings of the general domain areas used indicate a high degree of both face validity and content validity.
C. The results show that 100% of residents are able to complete the response format used. This testing indicates a high degree of both face validity and content validity.
D. The Flesch-Kincaid grade scores achieved for all questions indicate that respondents have a high degree of understanding of the items.
2. Testing the Items for the CoreQ: AL Resident Satisfaction Questionnaire
A. The percent of missing responses for the items is very low. The distribution of the summary score is wide. This is important for quality improvement purposes, as AL facilities can use benchmarks.
B. EFA shows that one factor explains the common variance of the items. A single factor can be interpreted as the only “concept” being measured by those variables. This means that the instrument measures the global concept of satisfaction and not multiple areas of satisfaction. This supports the validity of the CoreQ instrument as measuring a single concept of “customer satisfaction”. This testing indicates a high degree of criterion validity.
3. Determine if a Sub-Set of Items Could Reliably be Used to Produce an Overall Indicator of Satisfaction (The Core Q: AL Resident Measure).
A. Using the correlation information, the 4 items representing the CoreQ: AL Resident Satisfaction Questionnaire were found to correlate highly with the 20-item Pilot CoreQ: AL Resident Questionnaire. This testing indicates a high degree of criterion validity.
B. As with the 20 pilot items, EFA on the four selected items shows that a single factor explains their common variance, confirming that the shortened instrument still measures the single global concept of “customer satisfaction” rather than multiple areas of satisfaction. This testing indicates a high degree of criterion validity.
4. Validity Testing for the CoreQ: AL Resident Satisfaction Measure.
A. The correlation of the 4-item CoreQ: AL Resident Satisfaction measure summary score (identified elsewhere in this document) with the overall satisfaction score (scored using all data and the same scoring metric) was 0.96. That is, the correlation between the actual CoreQ: AL Resident Satisfaction Measure and all 20 items used in the Pilot instrument indicates that the satisfaction information would be approximately the same whether the 4 items or the 20 Pilot items were used. This indicates that the CoreQ: AL Resident Satisfaction instrument summary score adequately represents the overall satisfaction of the facility. This testing indicates a high degree of criterion validity.
B. Relationship with Quality Indicators
The 9 quality indicators examined had a moderate level of correlation with the CoreQ: AL Resident Satisfaction measure, with correlations ranging from 0.02 to 0.21. The CoreQ: AL Resident Satisfaction measure is associated with 7 of the 9 quality indicators in the hypothesized direction (that is, higher CoreQ scores are associated with better quality indicator scores). This testing indicates a moderate degree of construct validity and convergent validity.
As noted by Mor and associates (2003, p. 41) when addressing quality of long-term care facilities, “there is only a low level of correlation among the various measures of quality.” Castle and Ferguson (2010) likewise show that correlations among quality indicators in long-term care facilities are consistently moderate. Thus, it is not surprising that “very high” correlations were not identified. As described in the literature, correlation was identified in the expected direction, which supports the validity of the CoreQ: AL Resident Satisfaction Measure.
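The subset-versus-full-instrument check reported above (the 0.96 correlation between the 4-item summary and the 20-item pilot summary) can be sketched as follows. The data are simulated, and the choice of which 4 of the 20 items form the subset is a placeholder; this only illustrates the Pearson-correlation comparison, not the actual CoreQ analysis.

```python
import random
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

random.seed(0)
respondents = []
for _ in range(200):
    latent = random.gauss(3.5, 0.8)  # shared satisfaction level per respondent
    # 20 pilot items on a 1-5 scale, each the latent level plus noise
    items = [min(5, max(1, round(latent + random.gauss(0, 0.7)))) for _ in range(20)]
    respondents.append(items)

pilot_summary = [mean(r) for r in respondents]       # 20-item pilot summary
subset_summary = [mean(r[:4]) for r in respondents]  # 4-item subset (hypothetical pick)

r = pearson(subset_summary, pilot_summary)
print(round(r, 2))
```

A high correlation here means the short form carries nearly the same information as the full pilot instrument, which is the rationale for reducing 20 items to 4.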
-
4.4.1 Methods used to address risk factors
4.4.1b If an outcome or resource use measure is not risk adjusted or stratified
No research to date has risk adjusted or stratified satisfaction information from AL facilities. Testing on this was conducted as part of the federal initiative to develop a CAHPS® Nursing Home Survey to measure nursing home residents’ experience (hereafter referred to as NHCAHPS) (RTI International, 2003). No empirical or theoretical risk-adjusted or stratified reporting of satisfaction information was recommended, as the evidence showed no clear relationship between resident characteristics and satisfaction scores. We note that this testing was in nursing facilities, not AL; however, it is cited here because very little information exists on satisfaction testing in AL facilities.
Education may influence responses to the questions asked; respondents with lower education levels may not appropriately interpret the items. To address this, our items were written and tested to very low Flesch-Kincaid levels. In testing, no significant differences in average item scores were identified based on education level (p < .05) (Table 2b3.4b.c). A t-test analysis was used to compare CoreQ mean scores by race (Table 2b3.4b.d); it demonstrated that the CoreQ: AL Resident Satisfaction measure does not differ significantly by race. Based on these results, neither the education level nor the racial makeup of the respondents appears to be related to this measure. We included these background characteristics for two reasons: first, to examine whether any responses differed based on these factors (in no case did they); second, to examine the representativeness of the samples (the samples examined were representative of national AL figures).
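A two-group comparison of the kind described above can be sketched in a few lines using Welch's t statistic. The facility scores below are invented for illustration, and the ~2.0 cutoff is only a rough large-sample approximation to the p < .05 two-tailed critical value, not the exact test used in the cited analysis.

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = variance(a), variance(b)  # sample variances (n-1 denominator)
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / ((va / na + vb / nb) ** 0.5)

# hypothetical CoreQ summary scores for two respondent groups
group1 = [85, 90, 78, 88, 92, 81, 86, 89]
group2 = [84, 87, 80, 90, 85, 83, 88, 86]

t = welch_t(group1, group2)
# |t| well below ~2 suggests no evidence of a group difference in this toy data
print(abs(t) < 2.0)
```

In practice a proper test would also compute the Welch-Satterthwaite degrees of freedom and an exact p-value; the statistic alone is shown here to keep the sketch self-contained.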
Multiple studies in the past twenty years have examined racial disparities in the care of nursing facility residents and have consistently found poorer care in facilities with high minority populations (Fennell et al., 2000; Mor et al., 2004; Smith et al., 2007). No equivalent work in AL facilities exists; therefore, the nursing facility work is referenced here.
Work on racial disparities in quality of care between elderly white and black residents within nursing facilities has clearly shown that nursing homes remain relatively segregated and that nursing home care can be described as a tiered system in which Blacks are concentrated in marginal-quality homes (Li, Ye, Glance & Temkin-Greener, 2014; Fennell, Feng, Clark & Mor, 2010; Li, Yin, Cai, Temkin-Greener, & Mukamel, 2011; Chisholm, Weech-Maldonado, Laberge, Lin, & Hyer, 2013; Mor et al., 2004; Smith et al., 2007). Such homes tend to have serious deficiencies in staffing ratios and performance, and are more financially vulnerable (Smith et al., 2007; Chisholm et al., 2013). Based on a review of the nursing facility disparities literature, Konetzka and Werner concluded that disparities in care are likely related to this racial and socioeconomic segregation rather than to within-provider discrimination (Konetzka & Werner, 2009). This conclusion is supported, for example, by Gruneir and colleagues, who found that as the proportion of black residents in a nursing home increased, the risk of hospitalization among all residents, regardless of race, also increased (Gruneir et al., 2008). Thus, adjusting for racial status would have the unintended effect of adjusting for poor-quality providers rather than for differences due to racial status or within-provider discrimination.
Satisfaction scores also likely decline as the proportion of black residents increases, indicating that the best measure of racial disparities in satisfaction is one that measures scores at the facility level. That is, ethnic and socioeconomic status differences are related to inter-facility, not intra-facility, differences in care. Therefore, the literature suggests that racial status should not be risk adjusted; otherwise one is adjusting for the poor quality of the SNFs rather than for differences due to racial status. We believe the same is true for AL facilities.
Chisholm L, Weech-Maldonado R, Laberge A, Lin FC, Hyer K. (2013). Nursing home quality and financial performance: does the racial composition of residents matter? Health Serv Res;48(6 Pt 1):2060–2080.
Fennell ML, Feng Z, Clark MA, Mor V. (2010). Elderly Hispanics more likely to reside in poor-quality nursing homes. Health Aff (Millwood);29(1):65–73.
RTI International. (2003). RTI International Annual Report. Research Triangle Park, NC: RTI’s Office of Communications, Information and Marketing.
Gruneir, A., Miller, S. C., Feng, Z., Intrator, O., & Mor, V. (2008). Relationship between state Medicaid policies, nursing home racial composition, and the risk of hospitalization for black and white residents. Health Services Research, 43(3), 869-881.
Konetzka RT, Werner RM. Disparities in long-term care: building equity into market-based reforms. Med Care Res Rev. 2009 Oct;66(5):491-521. doi: 10.1177/1077558709331813. Epub 2009 Feb 18. PMID: 19228634.
Li Y, Ye Z, Glance LG, Temkin-Greener H. Trends in family ratings of experience with care and racial disparities among Maryland nursing homes. Med Care. 2014 Jul;52(7):641-8. doi: 10.1097/MLR.0000000000000152. PMID: 24926712; PMCID: PMC4058647.
Li Y, Yin J, Cai X, Temkin-Greener J, Mukamel DB. Association of race and sites of care with pressure ulcers in high-risk nursing home residents. JAMA. 2011 Jul 13;306(2):179-86. doi: 10.1001/jama.2011.942. PMID: 21750295; PMCID: PMC4108174.
Mor V, Zinn J, Angelelli J, Teno JM, Miller SC. Driven to tiers: socioeconomic and racial disparities in the quality of nursing home care. Milbank Q. 2004;82(2):227-56. doi: 10.1111/j.0887-378X.2004.00309.x. PMID: 15225329; PMCID: PMC2690171.
Risk adjustment approach: Off
Conceptual model for risk adjustment: Off
-
-
-
5.1 Contributions Towards Advancing Health Equity
For all of the CoreQ surveys we are examining scores for white and black residents. In nursing homes, overall scores for black residents are lower than those for white residents. However, we know that black residents are disproportionately cared for in lower-quality facilities, which may influence the overall scores. We are continuing to examine these data. In AL, very few (<2%) of the respondents in the data we received were black, so we are continuing to collect data from AL communities and trying to over-sample communities with more black residents.
-
-
-
6.1.1 Current Status: Yes
6.1.3 Current Use(s)
6.1.4 Program Details
National Quality Award Program, https://www.ahcancal.org/Quality/National-Quality-Award-Program/Pages/default.aspx?utm_source=ahcancal_homepage&utm_medium=main_rotator&utm_campaign=QAITA. The AHCA/NCAL National Quality Awards Program is a progressive program based on the Baldrige Criteria for Performance Excellence. The geographic area is the nation; the program is used across the nation, and over 1,700 entities have received an award. The level of analysis is the facility level. The care settings are skilled nursing and assisted living facilities.
LTC Trend Tracker, https://www.ahcancal.org/Data-and-Research/LTC-Trend-Tracker/Pages/default.aspx. The program allows skilled nursing and assisted living organizations to benchmark their metrics against those of their peers and examine ongoing quality. About 15,266 skilled nursing facilities and 9,280 assisted living facilities across the United States utilize LTC Trend Tracker. The level of analysis is the facility level. The care settings are skilled nursing and assisted living facilities.
Residential Care Quality Metrics Program/Oregon Department of Human Services, https://www.oregon.gov/odhs/licensing/community-based-care/pages/quality-metrics.aspx#requirements. The purpose is to improve the quality of service and give consumers and facilities a means of comparison. Oregon; there are 577 accountable entities who serve about 30,145 residents. State-level analysis; the care setting is assisted living facilities.
Assisted Living Report Card/MN Department of Health Aging and Adult Services Division (AASD), https://mn.gov/dhs/partners-and-providers/news-initiatives-reports-workgroups/aging/assisted-living-report-card/assisted-living-reports.jsp. Once the report card is fully implemented by the DHS Aging and Adult Services Division (AASD) along with the Minnesota Board on Aging (MBA), results will be reported. Minnesota; there are 156 accountable entities who serve about 5,164 residents. State-level analysis; the care setting is assisted living facilities.
-
6.2.1 Actions of Measured Entities to Improve Performance
Improving performance relies on the testing of change and benchmarking. Frequently collecting data is a necessary step to enhance and maximize quality improvement. Data collected during tests provides critical insight that is needed to determine the best path forward. Benchmarking is a process used to measure the quality and performance of your organization. Benchmarking plays a significant role in identifying patterns, providing context, and then guiding decision-making processes.
The CoreQ Resident Satisfaction measure allows assisted living facilities to measure the impact of tests of change and benchmark their performance relative to other facilities. Specifically, facilities can increase the number of staff and/or improve staff training and measure the impact using CoreQ. Similarly, reducing adverse events such as falls and hospitalizations increases residents’ ratings of the care received and overall satisfaction. Finally, facilities can understand and address the needs and wants of residents, such as certain activities or food, to increase their willingness to recommend the facility and improve CoreQ performance.
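Because the measure is the percentage of satisfied respondents, a facility-level score of the kind facilities benchmark can be sketched as follows. The 1-5 response scale and the average-response threshold used here are assumptions for illustration, not the published CoreQ scoring specification.

```python
def coreq_score(responses, threshold=3.0):
    """Percent of respondents whose average item response meets the threshold.

    responses: one list of item ratings (assumed 1-5 scale) per respondent.
    """
    satisfied = sum(1 for items in responses
                    if sum(items) / len(items) >= threshold)
    return 100.0 * satisfied / len(responses)

# hypothetical survey returns from one facility (4 items per respondent)
sample = [
    [4, 5, 4, 4],  # mean 4.25 -> satisfied
    [3, 3, 2, 3],  # mean 2.75 -> not satisfied
    [5, 5, 5, 4],  # mean 4.75 -> satisfied
    [2, 1, 2, 2],  # mean 1.75 -> not satisfied
]
print(coreq_score(sample))  # -> 50.0
```

A facility could recompute this score after each test of change and compare it against peer benchmarks to judge whether an intervention moved satisfaction.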
The actions needed to improve performance are not difficult once a process or plan for improvement is developed (e.g., Quality Assurance/Performance Improvement (QAPI)). Measured entities can overcome difficulties by monitoring data and results; developing a feedback and monitoring system to sustain continuous improvement helps providers preserve the advances of the quality improvement effort.
6.2.2 Feedback on Measure Performance
The CoreQ measure for assisted living residents has elevated the resident and family voice and helped guide consumer choice, giving potential residents another way to review the quality of a care facility. Specifically, the CoreQ measure has been independently tested as a valid and reliable measure of customer satisfaction. The CoreQ is a short survey of three to four questions, which reduces response burden on residents and allows organizations to benchmark their results using consistent questions and response scales. Satisfaction vendors and providers have particularly appreciated how easy it is to integrate the CoreQ questions into their satisfaction surveys. They believe the short length relative to other survey tools, like HCAHPS, helps increase and maintain high response rates.
AHCA/NCAL developed LTC Trend Tracker, a web-based tool that enables long term and post-acute care providers, including assisted living, to access key information that can help their organization succeed. The CoreQ report and upload feature within LTC Trend Tracker includes an API (application programming interface) for vendors performing the survey on behalf of ALs to upload data, so that the aggregate CoreQ results will be available to providers. Given that LTC Trend Tracker is the leading method for NCAL AL members to profile their quality and other data, the incorporation of CoreQ into LTC Trend Tracker means it will immediately become the de facto standard for customer satisfaction surveys for the AL industry. AHCA/NCAL continues to work with customer satisfaction vendors to promote CoreQ and receives requests for vendors to be added to the list of those incorporating CoreQ. Currently, there are over 40 vendors across the nation who can administer the CoreQ survey.
We also are working with states who require satisfaction measurement to incorporate CoreQ into their process. AHCA/NCAL has a presence in each state, and our state affiliates continue to promote the use of the CoreQ.
Feedback is continuously obtained through meetings with facility operators and vendors serving on AHCA/NCAL’s Customer Experience Committee and the CoreQ Vendors’ Workgroup. The purpose of the Customer Experience Committee is to champion the importance of meeting customer expectations now and in the future. This includes defining quality from the consumer’s perspective. Key areas of focus include collecting, analyzing, and using data to drive performance improvement, and the application of successful practices. The CoreQ Vendors’ Workgroup was created to help improve CoreQ usage and discuss ways to best support the CoreQ Vendors’ who administer the surveys.
6.2.3 Consideration of Measure Feedback
Among providers and vendors, we receive feedback during committee and workgroup meetings. For feedback on LTC Trend Tracker, we scope out the cost and feasibility of suggested enhancements. For example, we added a more graphical user interface option for the API, in addition to the original command line interface that was more technical, based on feedback from vendors.
For some of the feedback we receive, we use it as an opportunity to educate about best practices in survey collection and administration. For example, some vendors and providers inquire about administering CoreQ over the phone or other mixed modes of collection. In this instance, we caution vendors and providers about possible response or interviewer bias and recommend using written surveys as the primary method because it has been tested and shown to be reliable and valid.
6.2.4 Progress on Improvement
LTC Trend Tracker is a web-based tool that enables long term and post-acute care providers, including assisted living, to access key information that can help their organization succeed. AL facilities report CoreQ performance results in LTC Trend Tracker for benchmarking and state comparisons. AHCA/NCAL monitored the impact of the COVID-19 pandemic on satisfaction trends among AL residents nationally. The data show:
- In 2020Q1, satisfaction rates were 86.3% which represented 255 AL facilities.
- In 2021Q1, satisfaction rates decreased to 80.3% which represented 140 AL facilities. By the end of 2021 satisfaction rates dropped to 76.4% which represented 227 AL facilities.
- In 2024Q3, satisfaction rates increased to 81.0% which represented 200 AL facilities.
Monitoring satisfaction rates during the pandemic and after helped facilities/operators benchmark and trend their COVID-19 related performance.
6.2.5 Unexpected Findings
No negative consequences to individuals or populations were identified during testing, and no unintended negative consequences have been reported since implementation of the CoreQ: AL Resident Satisfaction questionnaire or the measure calculated from it. This is consistent with satisfaction surveys in nursing facilities generally. Many other satisfaction surveys are used in AL facilities with no reported unintended consequences to patients or their families.
There are no potentially serious physical, psychological, social, legal, or other risks for patients. However, in some cases the satisfaction questionnaire can highlight poor care for some dissatisfied patients, and this may make them further dissatisfied.
-
Importance/performance gap: Overall scores look better than anticipated.
Feasibility: However the survey is administered, it is important to differentiate it in some way from facility-generated surveys. Hopefully this is part of the protocol.
Equity: The sponsors are aware of this significant concern. It is not misguided.