The measure calculates the percentage of family members or designated responsible parties of assisted living (AL) residents who are satisfied. This consumer-reported outcome measure is based on the CoreQ: AL Family Satisfaction questionnaire, which has three items.
-
-
1.5 Measure Type
1.6 Composite Measure: No
1.7 Electronic Clinical Quality Measure (eCQM)
1.8 Level Of Analysis
1.9 Care Setting
1.9b Specify Other Care Setting: Assisted Living Facility
1.10 Measure Rationale
Collecting satisfaction information from Assisted Living (AL) residents and family members is more important now than ever. We have seen a philosophical change in healthcare that now includes the patient and their preferences as an integral part of the system of care. The Institute of Medicine (IOM) endorses this change by placing the patient at the center of the care system (IOM, 2001). For this philosophical change to person-centered care to succeed, we must be able to measure patient satisfaction, for three reasons:
(1) Measuring satisfaction is necessary to understand patient preferences.
(2) Measuring and reporting satisfaction with care helps patients and their families choose and trust a health care facility.
(3) Satisfaction information can help facilities improve the quality of care they provide.
The implementation of person-centered care in long-term care has already begun, but there is still room for improvement. The Centers for Medicare and Medicaid Services (CMS) demonstrated interest in consumers’ perspective on quality of care by supporting the development of the Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey for patients in nursing facilities (Sangl et al., 2007). We have developed three SNF-based CoreQ measures, which are NQF endorsed. We have also developed two CoreQ measures for AL, which are likewise NQF endorsed.
Further supporting person-centered care and resident satisfaction are ongoing organizational change initiatives. These include the Center for Excellence in Assisted Living (CEAL), which has developed a measure of the person-centeredness of assisted living with UNC; the Advancing Excellence in America’s Nursing Homes campaign (2006), which lists person-centered care as one of its goals; Action Pact, Inc., which provides workshops and consultations with long-term care facilities on how to be more person-centered through their physical environment and organizational structure; and the Eden Alternative, which uses education, consultation, and outreach to further person-centered care in long-term care facilities. All of these initiatives have identified the measurement of resident satisfaction as an essential part of making, evaluating, and sustaining effective clinical and organizational changes that ultimately result in a person-centered philosophy of care.
The importance of measuring resident satisfaction as part of quality improvement cannot be stressed enough. Quality improvement initiatives, such as total quality management (TQM) and continuous quality improvement (CQI), emphasize meeting or exceeding “customer” expectations. W. Edwards Deming, one of the first proponents of quality improvement, noted that “one of the five hallmarks of a quality organization is knowing your customer’s needs and expectations and working to meet or exceed them” (Deming, 1986). Measuring resident satisfaction can help organizations identify deficiencies that other quality metrics may struggle to identify, such as communication between a patient and the provider.
As part of the US Department of Commerce's renowned Baldrige Criteria for organizational excellence, applicants are assessed on their ability to describe the links between their mission, key customers, and strategic position. Applicants are also required to show evidence of successful improvements resulting from their performance improvement system. An essential component of this process is the measurement of customer, or resident, satisfaction (Shook & Chenoweth, 2012).
The CoreQ: AL Family Satisfaction questionnaire and measure can strategically help AL facilities achieve organizational excellence and provide high quality care by being a tool that targets a unique and growing patient population. Moreover, improving the care of AL patients is achievable. A review of the literature on satisfaction surveys in long-term care facilities (Castle, 2007) concluded that substantial improvements in resident satisfaction could be made in many facilities by improving care (i.e., changing either structural or process aspects of care). This was based on satisfaction scores ranging from 60 to 80% on average (with 100% as a maximum score).
It is worth noting that few other generalizations could be made because existing instruments used to collect satisfaction information are not standardized. Thus, benchmarking scores and comparison scores (i.e., best in class) were difficult to establish. The CoreQ: AL Family Satisfaction Measure has considerable relevance in establishing benchmarking and comparison scores. AHCA/NCAL developed three skilled nursing facility (SNF) based CoreQ measures: the CoreQ: Long-Stay Family Satisfaction Measure, the CoreQ: Long-Stay Resident Satisfaction Measure, and the CoreQ: Short-Stay Discharge Measure. All three of these measures received NQF endorsement in 2016. In addition, the CoreQ: AL Family Satisfaction Measure and the CoreQ: AL Resident Satisfaction Measure received NQF endorsement in 2019. Together, these five satisfaction measures enable providers to measure satisfaction across the long-term care continuum with valid and reliable measures.
This measure’s relevance is furthered by recent federal legislative actions. The Affordable Care Act of 2010 requires the Secretary of Health and Human Services (HHS) to implement a Quality Assurance & Performance Improvement Program (QAPI) within nursing facilities. This means all nursing facilities have increased accountability for continuous quality improvement efforts. In CMS’s “QAPI at a Glance” document there are references to customer-satisfaction surveys and organizations utilizing them to identify opportunities for improvement. Some assisted living communities have implemented QAPI in their organizations.
Lastly, in CMS’s National Quality Strategy (2024), one of the four key areas is advancing equity and engagement for all individuals. Specifically, CMS calls out expanding the use of person-reported outcomes and experience measures as a key action. Similarly, in the most recent SNF payment rule (CMS, August 2024), CMS acknowledges an opportunity to add patient experience or satisfaction measures to the Quality Reporting Program (QRP), which spans post-acute and long-term care providers and was created by the IMPACT Act of 2014. While CMS does not provide direct oversight of assisted living, more states are covering assisted living as part of home- and community-based Medicaid waivers. As of 2020, 44% of assisted living communities were Medicaid certified (CDC, 2020). Thus, the principles of CMS’s Quality Strategy apply, and the CoreQ: AL Family measure can further CMS’s quality efforts.
Castle, N.G. (2007). A literature review of satisfaction instruments used in long-term care settings. Journal of Aging and Social Policy, 19(2), 9-42.
CDC (2020). National Post-Acute and Long-Term Care Study. https://www.cdc.gov/nchs/npals/webtables/overview.htm
CMS (2009). Skilled Nursing Facilities Non Swing Bed - Medicare National Summary. http://www.cms.hhs.gov/MedicareFeeforSvcPartsAB/Downloads/NationalSum2007.pdf
CMS, University of Minnesota, and Stratis Health. QAPI at a Glance: A step by step guide to implementing quality assurance and performance improvement (QAPI) in your nursing home. https://www.cms.gov/Medicare/Provider-Enrollment-and-Certification/QAPI/Downloads/QAPIAtaGlance.pdf.
CMS (April 2024). Quality in Motion: Acting on CMS National Quality Strategy. https://www.cms.gov/files/document/quality-motion-cms-national-quality-strategy.pdf
CMS (August 6, 2024). Medicare Program; Prospective Payment System and Consolidated Billing for Skilled Nursing Facilities; Updates to the Quality Reporting Program and Value-Based Purchasing Program for Federal Fiscal Year 2025. https://www.federalregister.gov/d/2024-16907/p-588
Deming, W.E. (1986). Out of the crisis. Cambridge, MA. Massachusetts Institute of Technology, Center for Advanced Engineering Study.
Institute of Medicine (2001). Improving the Quality of Long Term Care. National Academy Press, Washington, D.C., 2001.
Medicare and Medicaid Programs; Reform of Requirements for Long-Term Care Facilities; Department of Health and Human Services. 80 Fed. Reg. 136 (July 16, 2015) (to be codified at 42 CFR Parts 405, 431, 447, et al.).
MedPAC. (2015). Report to the Congress: Medicare Payment Policy. http://www.medpac.gov/documents/reports/mar2015_entirereport_revised.pdf?sfvrsn=0.
Sangl, J., Bernard, S., Buchanan, J., Keller, S., Mitchell, N., Castle, N.G., Cosenza, C., Brown, J., Sekscenski, E., and Larwood, D. (2007). The development of a CAHPS instrument for nursing home residents. Journal of Aging and Social Policy, 19(2), 63-82.
Shook, J., & Chenoweth, J. (2012, October). 100 Top Hospitals CEO Insights: Adoption Rates of Select Baldrige Award Practices and Processes. Truven Health Analytics. http://www.nist.gov/baldrige/upload/100-Top-Hosp-CEO-Insights-RB-final.pdf.
1.11 Measure Webpage
1.20 Testing Data Sources
1.25 Data Sources: The collection instrument is the CoreQ: AL Family Satisfaction questionnaire; exclusions are identified from facility health information systems.
-
1.14 Numerator
The numerator assesses the number of family members or designated responsible parties of AL residents who are satisfied. Specifically, the numerator is the number of family members or designated responsible parties of AL residents who have an average satisfaction score of ≥3 across the three questions on the CoreQ: AL Family Satisfaction questionnaire.
1.14a Numerator Details
While the frequency with which the questionnaires are administered is left up to the provider, they should be administered at least once a year. Once the questionnaire is administered to the family member or designated responsible party of an AL resident, they have up to two months to return it. Only surveys returned within two months of the survey initially being administered are included in the calculation.
The numerator includes all family members or designated responsible parties of AL residents who had an average response of ≥3 on the CoreQ: AL Family Satisfaction questionnaire.
We calculate the average satisfaction score for each individual family member or designated responsible party of an AL resident in the following manner:
- Respondents within the appropriate time window and who do not meet the exclusions are identified.
- A numeric score is associated with each response scale option on the CoreQ: AL Family Satisfaction questionnaire (that is, Poor=1, Average=2, Good=3, Very Good=4, and Excellent=5).
- The following formula is utilized to calculate the individual’s average satisfaction score: [Numeric Score Question 1 + Numeric Score Question 2 + Numeric Score Question 3]/3
- The number of respondents whose average satisfaction score is ≥3 is counted; this count serves as the numerator.
For respondents with one missing data point (from the 3 items included in the questionnaire) imputation will be used (representing the average value from the other two available questions). For respondents with more than one missing data point, they will be excluded from the analyses (i.e., no imputation will be used for these family members). Imputation details are described further below.
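As an illustration only (not part of the measure specification), the individual-level scoring and single-item imputation rules described above might be sketched as follows. The response-to-score mapping and the one-missing-item rule come from the text; the function name, variable names, and example data are hypothetical.

```python
# Hypothetical sketch of scoring one returned CoreQ: AL Family Satisfaction questionnaire.

SCORE = {"Poor": 1, "Average": 2, "Good": 3, "Very Good": 4, "Excellent": 5}

def individual_average_score(responses):
    """responses: list of 3 answers (response label, or None if missing).
    Returns the respondent's average score, or None if the survey is unusable."""
    numeric = [SCORE[r] for r in responses if r is not None]
    missing = 3 - len(numeric)
    if missing > 1:
        return None                       # more than one item missing: exclude the survey
    if missing == 1:
        numeric.append(sum(numeric) / 2)  # impute with the average of the two answered items
    return sum(numeric) / 3

# Example: one missing item is imputed; the respondent counts toward the numerator
# because the average score is >= 3.
print(individual_average_score(["Excellent", None, "Good"]))  # 4.0
```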
-
1.15 Denominator
The target population is family members or designated responsible parties of residents who have resided in the facility for at least two weeks.
1.15a Denominator Details
The denominator includes all family members or designated responsible parties of residents who have been in the facility for two weeks or more, regardless of payer status; who received the CoreQ: AL Family Satisfaction questionnaire (i.e., people meeting the exclusions do not receive the questionnaire); and who responded to the questionnaire within the two-month time window.
The length of stay of the resident associated with each family member or designated responsible party will be identified from facility records.
1.15d Age Group: Older Adults (65 years and older)
-
1.15b Denominator Exclusions
Exclusions made at the time of sample selection are the following: (1) family or designated responsible parties of residents with a court-appointed guardian; (2) family of residents receiving hospice; (3) family members who reside in another country; and (4) family of residents who have lived in the AL facility for less than two weeks.
Additionally, once the survey is administered, the following exclusions are applied: a) surveys received outside of the time window (more than two months after the administration date) and b) surveys that have more than one questionnaire item missing.
1.15c Denominator Exclusions Details
Please note that the resident representative for each current resident is initially eligible regardless of whether they are a family member. Only one primary contact per resident should be selected.
Exclusions made at the time of sample selection include: (1) family or designated responsible parties of residents receiving hospice; (2) family or designated responsible parties of residents with a legal court-appointed guardian; (3) representatives of residents who have lived in the facility for less than two weeks; and (4) representatives who reside in another country.
Additionally, once the survey is administered, the following exclusions are applied: a) surveys received outside of the time window (more than two months after the administration date) and b) surveys that have more than one questionnaire item missing.
No stratification is used.
Exclusions will be based on information from the facility health information system.
-
1.13a Data dictionary not attached: Yes
1.16 Type of Score
1.17 Measure Score Interpretation: Better quality = Higher score
1.18 Calculation of Measure Score
1. Identify the representatives of residents that have been residing in the facility for two weeks or more.
2. Take the representatives of residents that have been residing in the facility for two weeks or more and exclude the following:
- Representatives of residents on hospice. This is recorded in the facility health information system.
- Representatives of residents with a court-appointed legal guardian for all decisions, as identified from the facility health information system.
3. Exclude representatives of residents who reside in another country.
4. Administer the CoreQ: AL Family Satisfaction questionnaire to the representatives that do not meet these exclusion criteria. Provide the family or designated responsible party member for the resident two months to respond to the survey.
- Create a tracking sheet with the following columns:
- Date Administered
- Date Response Received
- Time to Receive Response: ([Date Response Received – Date Administered])
- Exclude any surveys where Time to Receive Response >60 days (2 months)
5. Combine the CoreQ: AL Family Satisfaction questionnaire items to calculate each resident representative's satisfaction score. Responses for each item should be given the following scores:
- Poor = 1,
- Average = 2,
- Good = 3,
- Very good =4 and
- Excellent = 5.
6. Impute missing data if only one of the three questions is missing a response. Drop all survey responses if two or more survey questions have missing data.
7. Calculate each resident representative's score from usable surveys.
- Representative average score = (Score for Item 1 + Score for Item 2 + Score for Item 3) / 3.
- Flag those representatives with a score equal to or greater than 3.0.
- For example, a representative of a resident rates their satisfaction on the three CoreQ questions as excellent = 5, very good = 4, and good = 3. The representative's total score is 5 + 4 + 3 = 12. That total (12) is then divided by the number of questions (3), which equals 4.0. Thus, the representative’s average satisfaction rating is 4.0. Since this person’s average response is ≥3.0, they would be counted in the numerator. If it were below 3.0, they would not be counted.
8. Calculate the facility’s CoreQ: AL Family Satisfaction Measure which represents the percent of respondents with average scores of 3.0 or above.
- CoreQ: AL Family Satisfaction Measure = ([number of respondents with an average score of ≥3.0] / [total number of valid responses])*100
9. No risk-adjustment is used.
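For illustration, the final facility-level calculation in step 8 can be sketched in a few lines of code. This is a hedged sketch, not the prescribed implementation; the function and variable names are hypothetical, and it assumes the individual average scores have already been computed from usable surveys per steps 5-7.

```python
# Hypothetical sketch of step 8: percent of respondents with average score >= 3.0.

def facility_core_q_score(average_scores):
    """average_scores: individual average satisfaction scores from usable surveys only."""
    satisfied = sum(1 for s in average_scores if s >= 3.0)
    return 100.0 * satisfied / len(average_scores)

# Example: 4 usable surveys, 3 respondents at or above 3.0 -> score of 75.0
print(facility_core_q_score([4.0, 2.7, 3.0, 4.7]))  # 75.0
```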
1.19 Measure Stratification DetailsNo stratification is used.
1.21b Attach Data Collection Tool(s)
1.21a Data Source URL(s) (if applicable)
1.22 Are proxy responses allowed? No
1.23 Survey Respondent
1.24 Data Collection and Response Rate
1. Identify the representatives of residents that have been residing in the facility for two weeks or more.
2. Take the representatives of residents that have been residing in the facility for two weeks or more and exclude the following:
- Representatives of residents on hospice. This is recorded in the facility health information system.
- Representatives of residents with a court-appointed legal guardian for all decisions, as identified from the facility health information system.
3. Exclude representatives of residents who reside in another country.
4. Exclude representatives of residents who died in the facility.
5. Administer the CoreQ: AL Family Satisfaction questionnaire to family or designated responsible party members for AL residents.
6. Instruct representatives that they must respond to the survey within two months.
7. The response rate for a center is calculated by counting the number of usable surveys returned divided by the number of surveys administered.
- Surveys returned as undeliverable are not counted as usable.
- Surveys with missing responses for two or more questions are also not counted as usable.
- A minimum response rate of 30% needs to be achieved for results to be reported for a facility.
8. Regardless of response rate, AL facilities must also achieve a minimum of 20 usable questionnaires (i.e., the denominator). If, after two months, fewer than 20 usable questionnaires have been received, then a facility-level satisfaction measure cannot be reported.
9. All questionnaires that are received (other than those that meet the exclusion criteria described above) must be used in the calculations.
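As a small illustrative check of the reporting rules in steps 7 and 8 (a 30% minimum response rate and at least 20 usable surveys), the sketch below shows one way the thresholds could be applied. The function name and example counts are hypothetical, not part of the specification.

```python
# Hypothetical sketch of the reportability thresholds described in steps 7-8.

def is_reportable(surveys_administered, usable_surveys_returned):
    response_rate = usable_surveys_returned / surveys_administered
    return response_rate >= 0.30 and usable_surveys_returned >= 20

print(is_reportable(80, 25))  # True  (31% response rate, 25 usable surveys)
print(is_reportable(80, 18))  # False (fewer than 20 usable surveys)
```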
1.26 Minimum Sample Size: 20 usable surveys.
-
7.1 Supplemental Attachment
-
Steward: American Health Care Association/National Center for Assisted Living
Steward Organization POC Email
Steward Organization URL
Steward Organization Copyright
None
Measure Developer Secondary Point Of Contact: Nicholas Castle
West Virginia University
P.O. Box 9190, 64 Medical Center Drive
Morgantown, WV 26506
United States
Measure Developer Secondary Point Of Contact Email
-
-
-
2.1 Attach Logic Model
2.2 Evidence of Measure Importance
Collecting satisfaction information from Assisted Living (AL) residents and family members is more important now than ever. We have seen a philosophical change in healthcare that now includes the patient and their preferences as an integral part of the system of care. The Institute of Medicine (IOM) endorses this change by placing the patient at the center of the care system (IOM, 2001). For this philosophical change to person-centered care to succeed, we must be able to measure patient satisfaction, for three reasons:
- Measuring satisfaction is necessary to understand patient preferences.
- Measuring and reporting satisfaction with care helps patients and their families choose and trust a health care facility.
- Satisfaction information can help facilities improve the quality of care they provide.
The implementation of person-centered care in long-term care has already begun, but there is still room for improvement. The Centers for Medicare and Medicaid Services (CMS) demonstrated interest in consumers’ perspective on quality of care by supporting the development of the Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey for patients in nursing facilities (Sangl et al., 2007). We have developed three SNF-based CoreQ measures, which are NQF endorsed, but no equivalent instrument exists for AL.
Further supporting person-centered care and resident satisfaction are ongoing organizational change initiatives. These include the Center for Excellence in Assisted Living (CEAL), which has developed a measure of the person-centeredness of assisted living with UNC; the Advancing Excellence in America’s Nursing Homes campaign (2006), which lists person-centered care as one of its goals; Action Pact, Inc., which provides workshops and consultations with long-term care facilities on how to be more person-centered through their physical environment and organizational structure; and the Eden Alternative, which uses education, consultation, and outreach to further person-centered care in long-term care facilities. All of these initiatives have identified the measurement of resident satisfaction as an essential part of making, evaluating, and sustaining effective clinical and organizational changes that ultimately result in a person-centered philosophy of care.
The importance of measuring resident satisfaction as part of quality improvement cannot be stressed enough. Quality improvement initiatives, such as total quality management (TQM) and continuous quality improvement (CQI), emphasize meeting or exceeding “customer” expectations. W. Edwards Deming, one of the first proponents of quality improvement, noted that “one of the five hallmarks of a quality organization is knowing your customer’s needs and expectations and working to meet or exceed them” (Deming, 1986). Measuring resident satisfaction can help organizations identify deficiencies that other quality metrics may struggle to identify, such as communication between a patient and the provider.
As part of the US Department of Commerce's renowned Baldrige Criteria for organizational excellence, applicants are assessed on their ability to describe the links between their mission, key customers, and strategic position. Applicants are also required to show evidence of successful improvements resulting from their performance improvement system. An essential component of this process is the measurement of customer, or resident, satisfaction (Shook & Chenoweth, 2012).
The CoreQ: AL Family Satisfaction questionnaire and measure can strategically help AL facilities achieve organizational excellence and provide high quality care by being a tool that targets a unique and growing patient population. Moreover, improving the care of AL patients is achievable. A review of the literature on satisfaction surveys in long-term care facilities (Castle, 2007) concluded that substantial improvements in resident satisfaction could be made in many facilities by improving care (i.e., changing either structural or process aspects of care). This was based on satisfaction scores ranging from 60 to 80% on average (with 100% as a maximum score).
It is worth noting that few other generalizations could be made because existing instruments used to collect satisfaction information are not standardized. Thus, benchmarking scores and comparison scores (i.e., best in class) were difficult to establish. The CoreQ: AL Family Satisfaction Measure has considerable relevance in establishing benchmarking and comparison scores. AHCA/NCAL developed three skilled nursing facility (SNF) based CoreQ measures: the CoreQ: Long-Stay Family Satisfaction Measure, the CoreQ: Long-Stay Resident Satisfaction Measure, and the CoreQ: Short-Stay Discharge Measure. All three of these measures received NQF endorsement in 2016. In addition, the CoreQ: AL Resident Satisfaction Measure and the CoreQ: AL Family Satisfaction Measure received NQF endorsement in 2019. Together, these five satisfaction measures enable providers to measure satisfaction across the long-term care continuum with valid and reliable measures.
This measure’s relevance is furthered by recent federal legislative actions. The Affordable Care Act of 2010 requires the Secretary of Health and Human Services (HHS) to implement a Quality Assurance & Performance Improvement Program (QAPI) within nursing facilities. This means all nursing facilities have increased accountability for continuous quality improvement efforts. In CMS’s “QAPI at a Glance” document there are references to customer-satisfaction surveys and organizations utilizing them to identify opportunities for improvement. Some AL communities have implemented QAPI in their organizations. States such as Oregon now also require AL communities to collect CoreQ information for public quality reporting purposes.
Lastly, in CMS’s National Quality Strategy (2024), one of the four key areas is advancing equity and engagement for all individuals. Specifically, CMS calls out expanding the use of person-reported outcomes and experience measures as a key action. Similarly, in the most recent SNF payment rule (CMS, August 2024), CMS acknowledges an opportunity to add patient experience or satisfaction measures to the Quality Reporting Program (QRP), which spans post-acute and long-term care providers and was created by the IMPACT Act of 2014. While CMS does not provide direct oversight of assisted living, more states are covering assisted living as part of home- and community-based Medicaid waivers. As of 2020, 44% of assisted living communities were Medicaid certified (CDC, 2020). Thus, the principles of CMS’s Quality Strategy apply, and the CoreQ: AL Family measure can further CMS’s quality efforts.
Castle, N.G. (2007). A literature review of satisfaction instruments used in long-term care settings. Journal of Aging and Social Policy, 19(2), 9-42.
CDC (2020). National Post-Acute and Long-Term Care Study. https://www.cdc.gov/nchs/npals/webtables/overview.htm
CMS (2009). Skilled Nursing Facilities Non Swing Bed - Medicare National Summary. http://www.cms.hhs.gov/MedicareFeeforSvcPartsAB/Downloads/NationalSum2007.pdf
CMS, University of Minnesota, and Stratis Health. QAPI at a Glance: A step by step guide to implementing quality assurance and performance improvement (QAPI) in your nursing home. https://www.cms.gov/Medicare/Provider-Enrollment-and-Certification/QAPI/Downloads/QAPIAtaGlance.pdf.
CMS (April 2024). Quality in Motion: Acting on CMS National Quality Strategy. https://www.cms.gov/files/document/quality-motion-cms-national-quality-strategy.pdf
CMS (August 6, 2024). Medicare Program; Prospective Payment System and Consolidated Billing for Skilled Nursing Facilities; Updates to the Quality Reporting Program and Value-Based Purchasing Program for Federal Fiscal Year 2025. https://www.federalregister.gov/d/2024-16907/p-588
Deming, W.E. (1986). Out of the crisis. Cambridge, MA. Massachusetts Institute of Technology, Center for Advanced Engineering Study.
Institute of Medicine (2001). Improving the Quality of Long Term Care. National Academy Press, Washington, D.C., 2001.
Medicare and Medicaid Programs; Reform of Requirements for Long-Term Care Facilities; Department of Health and Human Services. 80 Fed. Reg. 136 (July 16, 2015) (to be codified at 42 CFR Parts 405, 431, 447, et al.).
MedPAC. (2015). Report to the Congress: Medicare Payment Policy. http://www.medpac.gov/documents/reports/mar2015_entirereport_revised.pdf?sfvrsn=0.
Sangl, J., Bernard, S., Buchanan, J., Keller, S., Mitchell, N., Castle, N.G., Cosenza, C., Brown, J., Sekscenski, E., and Larwood, D. (2007). The development of a CAHPS instrument for nursing home residents. Journal of Aging and Social Policy, 19(2), 63-82.
Shook, J., & Chenoweth, J. (2012, October). 100 Top Hospitals CEO Insights: Adoption Rates of Select Baldrige Award Practices and Processes. Truven Health Analytics. http://www.nist.gov/baldrige/upload/100-Top-Hosp-CEO-Insights-RB-final.pdf.
-
2.6 Meaningfulness to Target Population
The consumer movement has fostered the notion that patient evaluations should be an integral component of health care. Patient satisfaction, which is one form of patient evaluation, became an essential outcome of health care, widely advocated for use by researchers and policy makers. Managed care organizations, accreditation and certification agencies, and advocates of quality improvement initiatives, among others, now promote the use of satisfaction surveys. For example, satisfaction information is included in the Health Plan Employer Data and Information Set (HEDIS), which is used as a report card for managed care organizations (NCQA, 2016).
Measuring and improving patient satisfaction is valuable to patients because it is a way forward on improving the patient-provider relationship, which influences health care outcomes. A 2014 systematic review and meta-analysis of randomized controlled trials, in which the patient-provider relationship was systematically manipulated and tracked against health care outcomes, found a small but statistically significant positive effect of the patient-provider relationship on health care outcomes (Kelley et al., 2014). This finding aligns with other studies that show a link between patient satisfaction and the following health-related behaviors:
1. Keeping follow-up appointments (Hall, Milburn, Roter, & Daltroy, 1998);
2. Disenrollment from health plans (Allen & Rogers, 1997); and,
3. Litigation against providers (Penchansky & Macnee, 1994).
The positive effects of person-centered care and patient satisfaction are not precluded from AL facilities. A 2013 systematic review of studies on the effect of person-centered initiatives in long-term care facilities, such as the Eden Alternative, found person-centered care to be associated with psychosocial benefits to residents and staff, notwithstanding variations and limitations in study designs (Brownie & Nancarrow, 2013).
From the AL facility and provider perspective, there are numerous ways to improve patient satisfaction. One study found that conversations regarding end-of-life care options with family members improve overall satisfaction with care and increase use of advance directives (Reinhardt et al., 2014). Another found an association between improving symptom management of long-term care residents with dementia and higher satisfaction with care (Van Uden et al., 2013). Improvements in a long-term care food delivery system were also associated with higher overall satisfaction and improved resident health (Crogan et al., 2013). The advantage of the CoreQ: AL Family Satisfaction questionnaire is that it is broad enough to capture families' dissatisfaction with the various services provided and to signal to providers where to drill down and discover ways of improving the patient experience at their facility.
Specific to the CoreQ: AL questionnaire, the importance of the satisfaction areas assessed was examined with focus groups of residents and family members. The respondents (N=40) were from five AL facilities in the Pittsburgh region. The overall ranking used was 10 = most important and 1 = least important. That the final three questions included in the measure had average scores ranging from 9.50 to 9.69 clearly shows that the respondents value the items used in the CoreQ: AL measure.
Allen HM, & Rogers WH. (1997). The Consumer Health Plan Value Survey: Round Two. Health Affairs. 1997;16(4):156–66
Brownie, S. & Nancarrow, S. (2013). Effects of person-centered care on residents and staff in aged-care facilities: a systematic review. Clinical Interventions In Aging. 8:1-10.
Crogan, N.L., Dupler, A.E., Short, R., & Heaton, G. (2013). Food choice can improve nursing home resident meal service satisfaction and nutritional status. Journal of Gerontological Nursing. 39(5):38-45.
Hall J, Milburn M, Roter D, Daltroy L (1998). Why are sicker patients less satisfied with their medical care? Tests of two explanatory models. Health Psychol. 17(1):70–75
Kelley J.M., Kraft-Todd G, Schapira L, Kossowsky J, & Riess H. (2014). The influence of the patient-clinician relationship on healthcare outcomes: a systematic review and metaanalysis of randomized controlled trials. PLoS One. 9(4): e94207.
Li, Y., Cai, X., Ye, Z., Glance, L.G., Harrington, C., & Mukamel, D.B. (2013). Satisfaction with Massachusetts nursing home care was generally high during 2005-09, with some variability across facilities. Health Affairs. 32(8):1416-25.
Lin, J., Hsiao, C.T., Glen, R., Pai, J.Y., & Zeng, S.H. (2014). Perceived service quality, perceived value, overall satisfaction and happiness of outlook for long-term care institution residents. Health Expectations. 17(3):311-20.
National Committee for Quality Assurance (NCQA) (2016). HEDIS Measures. http://www.ncqa.org/HEDISQualityMeasurement/HEDISMeasures.aspx. Accessed March 2016.
Penchansky and Macnee, (1994). Initiation of medical malpractice suits: a conceptualization and test. Medical Care. 32(8): pp. 813–831
Reinhardt, J.P., Chichin, E., Posner, L., & Kassabian, S. (2014). Vital conversations with family in the nursing home: preparation for end-stage dementia care. Journal Of Social Work In End-Of-Life & Palliative Care. 10(2):112-26.
Van Uden, N., Van den Block, L., van der Steen, J.T., Onwuteaka-Philipsen, B.D., Vandervoort, A., Vander Stichele, R., & Deliens, L. (2013). Quality of dying of nursing home residents with dementia as judged by relatives. International Psychogeriatrics. 25(10):1697-707.
-
2.4 Performance Gap
The data were collected in 2023 and 2024; 433 facilities participated, and 14,689 surveys were collected. The facilities were from across the US, and participation was voluntary. The scores and facilities used for the data below were calculated after the previously mentioned resident exclusions were applied. In addition, scores were only used from facilities with 20 or more responses and a response rate of 30% or more.
Table 1. Performance Scores by Decile (Mean Performance Score / N of Entities / N of Persons, Encounters, or Episodes)
- Overall: 79 / 433 / 14,689
- Minimum: 10 / 1 / 51
- Decile 1: 55 / 52 / 2,055
- Decile 2: 65 / 59 / 1,872
- Decile 3: 75 / 55 / 1,370
- Decile 4: 80 / 118 / 1,672
- Decile 5: 84 / 39 / 1,258
- Decile 6: 85 / 46 / 1,366
- Decile 7: 90 / 80 / 1,925
- Decile 8: 95 / 43 / 1,239
- Decile 9: 99 / 38 / 975
- Decile 10: 100 / 35 / 973
- Maximum: 100 / 30 / 1,121
-
-
-
3.1 Feasibility Assessment
All of the data elements used in data collection are used in normal facility operations. As part of the data collection for this maintenance application, instructions were sent to AL communities detailing the process of collecting the CoreQ surveys from family members. With the exception of cognitive status, all facilities had the information needed readily available.
In the data collected from the 433 recently participating facilities, missing data were rare. Of the 14,689 surveys received, imputation for one of the three question responses was used in 220 cases (i.e., 1.5%). In addition, surveys not used (i.e., those with two or more missing responses) accounted for 1.4% of returns (N=212).
Facilities have no data entry burden; however, they do have a data collection burden. In work we have done with CMS for a different CoreQ survey (the NH discharge survey), the cost burden for a facility was calculated to be $2.80 per respondent. That calculation was based on requiring more than 20 data elements, whereas only three are needed here, so the cost will likely be less than $2.80.
No barriers were encountered with the measure specifications. The measure calculation was sometimes confused with an average score. The CoreQ measure is not an average. This is explained on reports produced and in the technical manual.
All of the patient surveys are anonymous. In addition, scores are only calculated with 20 or more survey returns. Thus, patient confidentiality is protected.
3.3 Feasibility Informed Final Measure
This is a maintenance application. As detailed above, we have continued to collect CoreQ data to examine any changes in scores and implementation issues. No adjustment to the measure has occurred.
-
3.4a Fees, Licensing, or Other Requirements
N/A
3.4 Proprietary Information: Proprietary measure or components (e.g., risk model, codes), without fees
-
-
-
4.1.3 Characteristics of Measured Entities
All entities were Assisted Living Communities.
4.1.1 Data Used for Testing
This is a maintenance application. The data used for NQF approval were collected in 2018, and the reliability, validity, and exclusions were reported at that time. As detailed above, we have continued to collect CoreQ data to examine any changes in scores and implementation issues. These data were collected in 2023 and 2024.
The testing and analysis included four data sources, one of which had additional variables collected for a subset of respondents:
- The Pilot CoreQ: AL Family Satisfaction Questionnaire was examined using responses from 1,521 Family members or resident representatives from a national sample of AL facilities (Data Source #1).
- In addition, Family-level sociodemographic (SDS) variables were examined using this same sample of 1,521 Family members or resident representatives (#1 above) in AL facilities across the US. (Data Source #1).
- Validity testing of the Pilot CoreQ: AL Family Satisfaction Questionnaire was examined using responses from 100 Family members or resident representatives from the Pittsburgh area. (Data Source #2).
- The CoreQ: AL Family measure was examined using 375 facilities and included responses from 13,095 family members or resident representatives. These AL facilities were located in multiple states across the US. (Data Source #3)
- In addition, the CoreQ: AL Family Satisfaction measure was examined along with other outcome measures using a national sample of 486 facilities (with 29,693 family members) [Data Source #4].
More information is located in 7.1 Supplemental Attachment- Table 1: Descriptive Statistics of Centers Included in the Analysis.
4.1.4 Characteristics of Units of the Eligible Population
Data were used from the CoreQ: AL Family Satisfaction Questionnaire. The questionnaire was administered to all eligible AL family members (with the exclusions described in the Specifications part of this application). The testing and analysis included:
- The Pilot CoreQ: AL Family Satisfaction questionnaire was examined using responses from 1,521 family members or resident representatives from a national sample of AL facilities. (Data #1)
- In addition, Family-level sociodemographic (SDS) variables were examined using this same sample of 1,521 family members (Data #1 above) in AL facilities across the US.
- Validity testing of the Pilot Core Q: AL Family Satisfaction questionnaire was examined using responses from 100 family members from the Pittsburgh area. (Data #2)
- CoreQ: AL Family Satisfaction questionnaire measure was examined using 375 facilities and included responses from 13,095 family members or resident representatives. These AL facilities were located in multiple states across the US. (Data #3)
[Note: Data source #4 above was used for facility level analyses, and is not included in the resident level of analysis]
The descriptive characteristics of the family members are given in the following table, which includes information from all the data used (education level and race information come only from the sample of 1,521 respondents described above, as these data were not collected for the other samples).
More information is located in the 7.1 Supplemental Attachment: Table 1.6: Respondent Demographics.
4.1.2 Differences in Data
This is a maintenance application. The data used for NQF approval were collected in 2018, and the reliability, validity, and exclusions were reported at that time. As detailed above, we have continued to collect CoreQ data to examine any changes in scores and implementation issues. These data were collected in 2023 and 2024.
- The Pilot CoreQ: AL Family Satisfaction Questionnaire was examined using responses from 1,521 Family members or resident representatives from a national sample of AL facilities (Data Source #1).
-
4.2.1 Level(s) of Reliability Testing Conducted
4.2.2 Method(s) of Reliability Testing
We measured reliability at the: (1) data element level; (2) the person/questionnaire level; and, (3) at the measure (i.e., facility) level. More detail of each analysis follows.
(1) DATA ELEMENT LEVEL
To determine if the CoreQ: AL Family Satisfaction questionnaire items were repeatable, producing the same results a high proportion of the time when assessed in the same population in the same time period, we re-administered the questionnaire to family members one month after their first survey. The Pilot CoreQ: AL Family Satisfaction questionnaire had responses from 100 family members; we re-administered the survey to these same 100 family members, and 97% responded. The re-administered sample was a sample of convenience, as it represented family members from the Pittsburgh area (the location of the team testing the questionnaire). To measure agreement, we first calculated the distribution of responses by question in the original round of surveys and again in the follow-up surveys (they should be distributed similarly), and second, calculated the correlations between the original and follow-up responses by question (they should be highly correlated).
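A minimal sketch of this kind of test-retest check for a single item is shown below. The coded responses (1-5), variable names, and values are synthetic and illustrative only; they are not the study data.

```python
# Hypothetical test-retest comparison for one questionnaire item.
import numpy as np

original = np.array([5, 4, 4, 3, 5, 2, 4, 5, 3, 4])  # item responses, first administration
readmin  = np.array([5, 4, 5, 3, 5, 2, 4, 5, 3, 4])  # same respondents, re-administration

# Distribution of responses (counts for scores 1-5) in each wave; they should look similar.
print(np.bincount(original, minlength=6)[1:])
print(np.bincount(readmin, minlength=6)[1:])

# Correlation between the two waves; it should be high if the item is repeatable.
print(np.corrcoef(original, readmin)[0, 1])
```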
(2) PERSON/QUESTIONNAIRE LEVEL
Having tested whether the data elements matched between the pilot responses and the re-administered responses, we then examined whether the person-level results matched between the Pilot CoreQ: AL Family Satisfaction questionnaire responses and their corresponding re-administered responses. In particular, we calculated the percent of time that there was agreement between whether or not the pilot response was poor, average, good, very good or excellent, and whether or not the re-administered response was poor, average, good, very good or excellent.
(3) MEASURE (FACILITY) LEVEL
We measured stability of the facility-level measure when the facility’s score is calculated using multiple “draws” from the same population. This measures how stable the facility’s score would be if the underlying family members are from the same population but are subject to the kind of natural sample variation that occurs over time. We did this by bootstrap with 10,000 repetitions of the facility score calculation, and present the percent of facility resamples where the facility score is within 1 percentage point, 3 percentage points, 5 percentage points, and 10 percentage points of the original score calculated on the Pilot Core Q: AL Family questionnaire sample. We also conducted two-level signal-to-noise analysis which identifies two sources of variability, those between ratees (facilities) and those for each ratee (respondents). No imputed values were used in the analysis and only AL facilities with 20 or more responses were included.
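The bootstrap stability check described above can be sketched as follows. The facility data here are synthetic, the 10,000 repetitions mirror the text, and the function names are hypothetical; the sketch illustrates the procedure rather than reproducing the reported results.

```python
# Hypothetical bootstrap of one facility's CoreQ: AL Family Satisfaction score.
import numpy as np

rng = np.random.default_rng(0)
individual_scores = rng.choice([2.0, 3.0, 3.7, 4.3, 5.0], size=40)  # synthetic respondent averages

def facility_score(scores):
    return 100.0 * np.mean(scores >= 3.0)

original = facility_score(individual_scores)
reps = np.array([
    facility_score(rng.choice(individual_scores, size=individual_scores.size, replace=True))
    for _ in range(10_000)
])

# Share of resampled scores within 1, 3, 5, and 10 percentage points of the original score.
for band in (1, 3, 5, 10):
    pct = 100.0 * np.mean(np.abs(reps - original) <= band)
    print(f"within {band} points: {pct:.0f}%")
```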
4.2.3 Reliability Testing Results
- DATA ELEMENT LEVEL
Table 2a2.3.a shows the three CoreQ: AL Family Satisfaction questionnaire items and the responses per item for both the pilot survey of 100 family members and the re-administered survey (97 responses). The responses in the pilot survey are not statistically significantly different from the re-administered survey. This shows that the data elements were highly repeatable and produced the same results a high proportion of the time when assessing the same population in the same time period.
- PERSON/QUESTIONNAIRE LEVEL
Table 2a2.3.c shows the CoreQ: AL Family Satisfaction questionnaire items, and the agreement in response per item for both the pilot survey of 100 family members compared with the re-administered survey of 97 family members. The person-level responses in the pilot survey are not statistically significantly different from the re-administered survey. This shows that a high percent of time there was agreement between whether or not the pilot response was poor, average, good, very good or excellent, and whether or not the re-administered response was poor, average, good, very good or excellent. Table 2a2.3.d shows the agreement between the pilot and re-administered responses. In summary, 96% or more of the re-administered responses agreed with their corresponding pilot responses, in terms of whether or not they were rated in the categories of poor or average or good, very good or excellent.
- MEASURE (FACILITY) LEVEL
After having performed the 10,000-repetition bootstrap, 15% of bootstrap repetition scores were within 1 percentage point of the score under the original pilot sample, 29% were within 3 percentage points, 42% were within 5 percentage points, and 79% were within 10 percentage points. For the two-level signal-to-noise analysis, the CoreQ: AL Family measure had a mean R = 0.82, indicating that 82% of the variation in facility scores reflects true differences between facilities, with the remaining 18% due to noise and differences among respondents. This result exceeds 0.8, which is generally considered a good reliability coefficient (Campbell et al., 2010).
4.2.3a- Table 2
Table 2 cannot be completed because this was not conducted in the initial testing.
Campbell, JA, Narayanan, A., Burford, B., Greco, MJ. Validation of a multi-source feedback tool for use in general practice. Education in Primary Care, 2010, 21, 165-179.
4.2.3a Attach Additional Reliability Testing Results
4.2.4 Interpretation of Reliability Results
In summary, the measure displays a high degree of element-level, questionnaire-level, and measure (facility)-level reliability. First, the CoreQ: AL Family questionnaire data elements were highly repeatable, with pilot and re-administered responses agreeing between 97% and 100% of the time, depending on the question. That is, they produced the same results a high proportion of the time when assessed in the same population in the same time period. Second, the questionnaire-level scores were also highly repeatable, with pilot and re-administered responses agreeing 98% of the time or more. Third, a facility drawing family members from the same underlying population will see its score vary only modestly. The 10,000-repetition bootstrap results show that CoreQ: AL Family Satisfaction measure scores from the same facility are moderately stable, given that the minimum sample size for this measure was set at 20 and the maximum sample size was 125.
-
4.3.1 Level(s) of Validity Testing Conducted
4.3.2 Type of accountable entity-level validity testing conducted
4.3.3 Method(s) of Validity Testing
In the development of the CoreQ: AL Family Satisfaction questionnaire, four sources of data were used to perform three levels of validity testing. These are described above in Section 1.5.
The first source of data (data from a sample of convenience collected near the researchers developing the questionnaire in Pittsburgh) was used in developing and choosing the format to be utilized in the CoreQ: AL Family Satisfaction questionnaire (i.e., response scale).
The second source of data was pilot data collected from a national sample of 1,521 family members. These data were used in choosing the items to be used in the CoreQ: AL Family Satisfaction questionnaire (i.e., questionnaire items). These data were also used in examining family-level sociodemographic (SDS) variables.
The third source of data (collected from 375 facilities) was used to examine the validity of the CoreQ: AL Family Satisfaction measure (i.e., facility and summary score validity). These family members and AL facilities were from multiple states across the US.
The fourth source of data (collected from 487 facilities described in Section 1.5) was used to examine the correlations between the CoreQ: AL Family Satisfaction measure scores and other quality metrics from the facilities.
Thus, the following sections describe this validity testing:
1. Validity Testing of the questionnaire format used in the CoreQ: AL Family Satisfaction questionnaire (using data source 1, from above);
2. Testing the items for the CoreQ: AL Family Satisfaction questionnaire (using data source 2, from above);
3. Testing to determine if a sub-set of items could reliably be used to produce an overall indicator of satisfaction (CoreQ: AL Family Satisfaction measure) (using data source 3, from above);
4. Validity testing for the CoreQ: AL Family Satisfaction measure (also using data source 1, from above and data source 4).
1. Validity Testing for the Questionnaire Format used in the CoreQ: AL Family Satisfaction Questionnaire
A. The face validity of the domains used in the CoreQ: AL Family Satisfaction questionnaire was evaluated via a literature review. The literature review was conducted to examine important areas of satisfaction for LTC family members. Specifically, the research team examined 12 commonly used satisfaction surveys and reports to determine the most valued domains when looking at satisfaction. These surveys were identified by completing internet searches in PubMed and Google. Key terms that were searched included: family satisfaction, long-term care satisfaction, and elderly satisfaction.
B. The face validity of the domains was also examined using a focus group of family members. The overall ranking used was 1 = most important and 22 = least important. That is, family members were asked to rank the domains from most important to least important. The respondents were family members (N=40) of residents in five AL facilities in the Pittsburgh region.
C. The face validity of the Pilot CoreQ: AL Family Satisfaction questionnaire response scale was also examined. The respondents were family members (N=40) with residents in five AL facilities in the Pittsburgh region. The percent of respondents who stated they “fully understood” how the response scale worked, who could complete the scale, AND who demonstrated understanding of the scale in cognitive testing was used.
D. The Flesch-Kincaid scale was used to determine whether respondents could correctly understand the questions being asked (Streiner & Norman, 1995).
Reference: Streiner, D. L. & Norman, G.R. (1995). Health measurement scales: A practical guide to their development and use. 2nd ed. New York: Oxford.
2. Testing the Items for the CoreQ: AL Family Satisfaction Questionnaire
The second series of validity testing was used to further identify items that should be included in the CoreQ: AL Family Satisfaction questionnaire. This analysis was important, as all items in a satisfaction measure should have adequate psychometric properties (such as low floor or ceiling effects). For this testing, (1) a pilot group of 40 family members was first used in focus groups; and (2) a pilot version of the CoreQ: AL Family Satisfaction questionnaire consisting of 18 items was administered (N=1,521 family members). The testing consisted of:
A. Family members were asked to rate the 18 different satisfaction questions related to their experience in AL. This was conducted with a pilot group of 40 family members in focus groups.
B. The Pilot CoreQ: AL Family Satisfaction questionnaire items' performance was examined with respect to the distribution of the response scale and with respect to missing responses (using the 1,521 family members described above).
C. The intent of the Pilot instrument was to have items that represented the most important areas of satisfaction (as identified above) in a parsimonious manner. Additional analyses such as exploratory factor analysis (EFA) were used to eliminate items in the Pilot instrument. This was an iterative process that included using Eigenvalues from the principal factors (unrotated) and correlation analysis of the individual items (using 1,521 family members described above).
3. To determine if a Sub-Set of Items could be used to Produce an Overall Indicator of Satisfaction (The Core Q: AL Family Measure).
The Core Q: AL Family measure under development was meant to represent overall satisfaction with as few items as possible. The testing given below describes how this was achieved.
A. To support the construct validity of the idea that the CoreQ items measure a single concept of “satisfaction,” we performed a correlation analysis using all items in the instrument.
B. In addition, a factor analysis was conducted using all items in the instrument. Using the global item Q1 (“How satisfied are you with the facility?”), the Cronbach’s alpha from adding the “best” additional item was examined.
4. Validity Testing for the Core Q: AL Family Measure.
A. To determine if the three items in the CoreQ: AL Family Satisfaction questionnaire were a reliable indicator of satisfaction, the correlation between these three items (the “CoreQ: AL Family Satisfaction Measure”) and ALL of the items on the Pilot CoreQ instrument was computed.
B. We performed additional validity testing of the facility-level CoreQ: AL Family measure by examining the correlations between the CoreQ: AL Family Satisfaction measure scores and several quality metrics from the AL facilities. If the CoreQ: AL Family Satisfaction scores correlate negatively with the measures that decrease as they get better, and positively with the measures that increase as they get better, then this supports the validity of the CoreQ: AL Family Satisfaction measure.
4.3.4 Validity Testing Results
1. Validity Testing for the Questionnaire Format used in the CoreQ: AL Family Satisfaction Questionnaire
- The face validity of the domains used in the CoreQ: AL Family Satisfaction questionnaire was evaluated via a literature review (described above).
- The research team examined the surveys and reports to identify the different domains that were included. The research team scored the domains by simply counting whether an instrument included the domain. Table 2b1.3.a gives the domains that were found throughout the search, as well as a score. For example, the clinical care domain was used in 10 of the 12 surveys identified in the literature. An interpretation of this finding is that items addressing clinical care are extremely important in satisfaction surveys. These domains were used in developing the pilot CoreQ: AL Family Satisfaction questionnaire items.
- The face validity of the domains was also examined using family members. The following abbreviated table shows the rank of importance for each group of domains. The overall ranking used was 1=Most important and 22=Least important. The ranking of the 3 areas used in the CoreQ: AL Family Satisfaction questionnaire are shown. Note, the food domain was ranked third – but was excluded from the CoreQ: AL Family Satisfaction measure based on: 1) additional analyses showing that it was highly correlated with the overall domain; 2) food was in many cases not actually experienced by family members; 3) it was included in the CoreQ: Resident Satisfaction Measure -- thus, it added little to this family measure.
- The face validity of the pilot CoreQ: AL Family Satisfaction questionnaire response scale was also examined. Table 2b1.3.c gives the percent of respondents that stated they “fully understood” how the response scale worked, could complete the scale, AND in cognitive testing understood the scale.
- The CoreQ: AL Family Satisfaction questionnaire was purposefully written using simple language. No a priori goal for reading level was set; however, a Flesch-Kincaid grade level of six or lower was achieved for all questions.
2. Testing the Items for the CoreQ: AL Family Satisfaction Questionnaire
- Each family member was asked to rate, on a scale of 1 to 10 (with 10 as the best), how important they thought each question was for evaluating the experience with AL care. The three questions included in the CoreQ were among the most highly rated of the 18 questions analyzed from family members' responses. That is, these three items were shown to provide unique information to distinguish satisfaction with AL. Specifically, “In recommending this facility to your friends and family, how would you rate it overall?” had an average score of 8.9; “Overall, how would you rate the staff?” had an average score of 9.4; and “How would you rate the care your family member received?” had an average score of 9.2. This shows that these items strongly reflect families' experience of AL care.
- The pilot CoreQ: AL Family questionnaire items all performed well with respect to the distribution of the response scale and with respect to missing responses.
- Using all items in the instrument (excluding the global item Q1, “How would you rate the facility?”), exploratory factor analysis (EFA) was used to evaluate the construct validity of the measure. The eigenvalues from the principal factors (unrotated) were 10.62 for Factor 1 and 0.87 for Factor 2. Because the first eigenvalue is overwhelmingly greater than the second, this supports the proposition that the CoreQ instrument measures a single global concept of customer satisfaction rather than a number of sub-concepts of customer satisfaction. Sensitivity analyses using principal factors with rotation provided highly similar findings. (An illustrative sketch of this eigenvalue comparison follows below.)
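As a rough illustration of why a dominant first eigenvalue supports a single underlying factor, the sketch below generates synthetic item responses driven by one latent "satisfaction" factor and inspects the leading eigenvalues of the item correlation matrix. This is a principal-components approximation on made-up data; the analysis above used principal-factor extraction on the actual 1,521 responses.

```python
# Hypothetical single-factor data: 17 items all loading on one latent factor.
import numpy as np

rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 1))                 # one underlying "satisfaction" factor
items = latent + 0.5 * rng.normal(size=(500, 17))  # 17 items = factor + item-specific noise

eigenvalues = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))[::-1]
print(eigenvalues[:2])  # the first eigenvalue dominates the second, consistent with one factor
```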
3. To determine if a Sub-Set of Items could Reliably be used to Produce an Overall Indicator of Satisfaction (The CoreQ: AL Family Satisfaction measure).
- To support the construct validity argument that the CoreQ items measure a single concept of “satisfaction,” we performed a correlation analysis using all items in the instrument. The analysis identifies the pairs of CoreQ items with the highest correlations; the highest correlations are shown in Table 2b1.3.d. Items with the highest correlations potentially provide similar satisfaction information and could therefore be eliminated from the instrument. Note that the table provides 3 sets of correlations; however, the analysis examined all possible correlations between items.
- In addition, a factor analysis was conducted using all items in the instrument. Starting with the global item Q1 (“How satisfied are you with the facility?”), the Cronbach’s alpha from adding the “best” additional item is shown in Table 2b1.3.e. Cronbach’s alpha measures the internal consistency of the item set; a value of 0.7 or higher is generally considered acceptably high. An additional item is considered “best” in the sense that it is most highly correlated with the existing item(s) and therefore provides little additional information about the same construct, so this analysis was also used to eliminate items. Note that the table again provides 3 sets of correlations; however, the analysis examined all possible correlations between items.
Thus, using the correlation information and the factor analysis, the 3 items comprising the CoreQ: AL Family Satisfaction questionnaire were identified (a computational sketch of these two item-reduction steps follows).
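The following is a minimal sketch, under assumed data structures, of the two item-reduction steps just described: flagging the most highly correlated (and therefore potentially redundant) item pairs, and computing Cronbach’s alpha for a candidate item set.

```python
# Sketch of the two item-reduction steps described above. `items` is a hypothetical mapping
# from item label to an equal-length list of 1-5 responses.
import itertools
import numpy as np

def most_correlated_pairs(items: dict[str, list[int]], top_k: int = 3):
    """Return the top_k item pairs with the highest absolute correlation."""
    pairs = []
    for a, b in itertools.combinations(items, 2):
        r = float(np.corrcoef(items[a], items[b])[0, 1])
        pairs.append((a, b, r))
    return sorted(pairs, key=lambda p: abs(p[2]), reverse=True)[:top_k]

def cronbach_alpha(item_matrix: np.ndarray) -> float:
    """Internal consistency of an (n_respondents x n_items) matrix; >= 0.7 is usually acceptable."""
    k = item_matrix.shape[1]
    item_variances = item_matrix.var(axis=0, ddof=1)
    total_variance = item_matrix.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
```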
4. Validity Testing for the CoreQ: AL Family Satisfaction Measure.
The overall intent of the analyses described above was to identify if a sub-set of items could reliably be used to produce an overall indicator of satisfaction, the CoreQ: AL Family Satisfaction questionnaire.
- The items were all scored according to the rules identified elsewhere in this document. The same scoring was used to create the 3-item CoreQ: AL Family Satisfaction questionnaire summary score and the satisfaction score from the pilot CoreQ: AL Family questionnaire. The correlation between the two scores was 0.91.
- That is, the correlation between the actual “CoreQ: AL Family Satisfaction measure” and the score based on all 18 items in the pilot instrument indicates that the satisfaction information is approximately the same whether we use the 3 items (much less burdensome, and therefore likely to yield a higher response rate) or the 18-item pilot instrument. Thus, we included only the three items, as additional items did not provide additional information for a quality measure assessing a facility’s satisfaction score. Additional questions may still help quality improvement efforts identify specific areas of satisfaction or dissatisfaction.
- We performed additional validity testing of the facility-level CoreQ: AL Family Satisfaction measure by measuring the correlations between the CoreQ: AL Family Satisfaction measure scores and several other quality metrics from AL providers (see Table 2b1.3.f). The CoreQ: AL Family Satisfaction measure is the percentage of family members of residents who, on average across the three CoreQ items included in the measure, rated the facility 3 or higher. We measured satisfaction using families’ responses to the three items from the CoreQ: AL Family Satisfaction questionnaire. The summary score from the 3 CoreQ: AL Family Satisfaction questionnaire items is calculated as follows: respondents answering poor are given a score of 1, average = 2, good = 3, very good = 4, and excellent = 5. The average score across the 3 questionnaire items is calculated for each family member, and the facility score is the percent of family members with an average score of 3 or above. This score should be associated with quality; therefore, for each facility in the sample, the correlation with other quality indicators was examined.
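The scoring rule just described can be expressed directly in code. The sketch below is illustrative only; the response data and function name are hypothetical, while the response mapping and the 3-point cutoff follow the description above.

```python
# Sketch of the facility-level CoreQ scoring rule described above.
SCALE = {"poor": 1, "average": 2, "good": 3, "very good": 4, "excellent": 5}

def coreq_facility_score(responses: list[tuple[str, str, str]]) -> float:
    """Percent of family members whose average score across the 3 CoreQ items is >= 3."""
    satisfied = 0
    for answers in responses:
        average = sum(SCALE[a.lower()] for a in answers) / len(answers)
        if average >= 3:
            satisfied += 1
    return 100.0 * satisfied / len(responses)

# Example: the first respondent averages (4 + 3 + 5) / 3 = 4.0 (counted as satisfied) and the
# second averages (2 + 2 + 3) / 3 = 2.33 (not counted), giving a facility score of 50.0.
print(coreq_facility_score([("very good", "good", "excellent"),
                            ("average", "average", "good")]))
```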
4.3.4a Attach Additional Validity Testing Results
-
4.3.5 Interpretation of Validity Results
1. Validity Testing for the Questionnaire Format used in the CoreQ: AL Family Satisfaction Questionnaire
A. The literature review shows that domains used in the Pilot CoreQ: AL Family Satisfaction questionnaire items have a high degree of both face validity and content validity.
B. Family members’ overall rankings of the general “domain” areas used indicate a high degree of both face validity and content validity.
C. The results show that 100% of family members were able to complete the response format used. This testing indicates a high degree of both face validity and content validity.
D. The Flesch-Kincaid scale score achieved for all questions indicates that respondents have a high degree of understanding of the items.
2. Testing the Items for the CoreQ: AL Family Satisfaction Questionnaire
A. The percent of missing responses for the items is very low, and the distribution of the summary score is wide. This is important for quality improvement purposes, as AL facilities can benchmark their performance.
B. EFA shows that one factor explains the common variance of the items. A single factor can be interpreted as the only “concept” being measured by those variables. This means that the instrument measures the global concept of satisfaction and not multiple areas of satisfaction. This supports the validity of the CoreQ instrument as measuring a single concept of “customer satisfaction”. This testing indicates a high degree of criterion validity.
3. Testing to Determine if a Sub-Set of Items could Reliably be used to Produce an Overall Indicator of Satisfaction (The CoreQ: AL Family Satisfaction measure)
A. A high correlation was identified between the CoreQ: AL Family Satisfaction questionnaire (18 items) and the 3 items comprising the CoreQ: AL Family Satisfaction questionnaire. This testing indicates a high degree of criterion validity.
B. EFA shows that one factor explains the common variance of the items. A single factor can be interpreted as the only “concept” being measured by those variables. This means that the instrument measures the global concept of satisfaction and not multiple areas of satisfaction. This supports the validity of the CoreQ instrument as measuring a single concept of “customer satisfaction”. This testing indicates a high degree of criterion validity.
4. Validity Testing for the CoreQ: AL Family Satisfaction Measure
A. The correlation of the 3-item CoreQ: AL Family Satisfaction measure summary score (identified elsewhere in this document) with the overall satisfaction score (scored using all data and the same scoring metric) was 0.91. That is, the correlation between the actual “CoreQ: AL Family Satisfaction measure” and the score based on all 18 items in the pilot instrument indicates that the satisfaction information is approximately the same whether the 3 items or the 18 pilot items are used. This indicates that the CoreQ: AL Family Satisfaction measure score adequately represents the overall satisfaction of the facility. This testing indicates a high degree of criterion validity.
B. Relationship with Quality Indicators
The 9 quality indicators examined had a moderate level of correlation with the CoreQ: AL Family Satisfaction measure; these correlations range from 0.01 to 0.27. The CoreQ: AL Family Satisfaction measure is associated with all 9 quality indicators in the hypothesized direction (that is, higher CoreQ scores are associated with better quality indicator scores). This testing indicates a moderate degree of construct validity and convergent validity.
As noted by Mor and associates (2003, p. 41) when addressing quality of long-term care facilities, “there is only a low level of correlation among the various measures of quality.” Castle and Ferguson (2010) also show that the pattern of correlations among quality indicators in long-term care facilities is consistently moderate. Thus, it is not surprising that “very high” correlations were not identified. Consistent with the literature, correlations were identified in the expected direction, which supports the validity of the CoreQ: AL Family Satisfaction measure.
-
4.4.1 Methods used to address risk factors
4.4.1b If an outcome or resource use measure is not risk adjusted or stratified
To date, results from satisfaction surveys have not used risk adjustment. The CoreQ measures overall satisfaction, and reporting unadjusted scores across entities provides a fair comparison. In addition, the data elements that could be used for any adjustment are not collected.
No research to date has risk adjusted or stratified satisfaction information from AL facilities. Testing on this was conducted as part of the federal initiative to develop a CAHPS® Nursing Home Survey to measure nursing home residents’ experience (hereafter referred to as NHCAHPS) (RTI International, 2003). No empirical or theoretical basis for risk-adjusted or stratified reporting of satisfaction information was recommended, as the evidence showed no clear relationship between resident characteristics and satisfaction scores. We note that this testing was in nursing facilities, not AL; however, it is cited here because very little information exists on satisfaction testing in AL facilities.
Education may influence responses to the questions asked; that is, respondents with lower education levels may not appropriately interpret the items. To address this, our items were written and tested to very low Flesch-Kincaid levels. In testing, no significant differences in average item scores were identified based on education level at the .05 level (Table 2b3.4b.c). A t-test analysis was used to compare CoreQ mean scores by race (Table 2b3.4b.d); this analysis demonstrated that the CoreQ satisfaction measure is not significantly different based on race. Based on these results, neither the education level nor the racial makeup of respondents appears to be related to this measure. We included these background characteristics for two reasons: first, to examine whether any responses differed based on these factors (in no case did they); and second, to examine the representativeness of the samples (the samples examined were representative of national AL figures).
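As an illustration of the subgroup comparisons described above, a minimal sketch follows using an independent-samples t-test (scipy.stats.ttest_ind); the score arrays and grouping are hypothetical placeholders, not the study data.

```python
# Sketch: compare mean CoreQ summary scores between two respondent subgroups (e.g., split
# by education level or race). The inputs are hypothetical placeholders, not the study data.
import numpy as np
from scipy import stats

def compare_subgroups(scores_a: np.ndarray, scores_b: np.ndarray, alpha: float = 0.05):
    t_stat, p_value = stats.ttest_ind(scores_a, scores_b, equal_var=False)
    significant = p_value < alpha  # the testing summarized above found no significant differences
    return t_stat, p_value, significant
```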
Multiple studies in the past twenty years have examined racial disparities in the care of nursing facility residents and have consistently found poorer care in facilities with high minority populations (Fennell et al., 2000; Mor et al., 2004; Smith et al., 2007). No equivalent work in AL facilities exists; therefore, the nursing facility work is referenced here.
Work on racial disparities in nursing facilities’ quality of care between elderly white and black residents has shown clearly that nursing homes remain relatively segregated and that nursing home care can be described as a tiered system in which Black residents are concentrated in marginal-quality homes (Li, Ye, Glance & Temkin-Greener, 2014; Fennell, Feng, Clark & Mor, 2010; Li, Yin, Cai, Temkin-Greener & Mukamel, 2011; Chisholm, Weech-Maldonado, Laberge, Lin & Hyer, 2013; Mor et al., 2004; Smith et al., 2007). Such homes tend to have serious deficiencies in staffing ratios and performance and are more financially vulnerable (Smith et al., 2007; Chisholm et al., 2013). Based on a review of the nursing facility disparities literature, Konetzka and Werner concluded that disparities in care are likely related to this racial and socioeconomic segregation as opposed to within-provider discrimination (Konetzka & Werner, 2009). This conclusion is supported, for example, by Gruneir and colleagues, who found that as the proportion of black residents in a nursing home increased, the risk of hospitalization among all residents, regardless of race, also increased (Gruneir et al., 2008). Thus, adjusting for racial status has the unintended effect of adjusting for poor-quality providers rather than for differences due to racial status or within-provider discrimination.
Satisfaction scores also likely decrease as the proportion of black residents increases, indicating that the best measure of racial disparities in satisfaction is one that measures scores at the facility level. That is, ethnic and socioeconomic status differences are related to inter-facility differences, not intra-facility differences, in care. Therefore, the literature suggests that racial status should not be risk adjusted; otherwise one is adjusting for the poor quality of the SNFs rather than for differences due to racial status. We believe the same is true for AL facilities.
Chisholm L, Weech-Maldonado R, Laberge A, Lin FC, Hyer K. Nursing home quality and financial performance: does the racial composition of residents matter? Health Serv Res. 2013 Dec;48(6 Pt 1):2060-80. doi: 10.1111/1475-6773.12079. Epub 2013 Jun 26. PMID: 23800123; PMCID: PMC3805666.
Connor-Smith JK, Flachsbart C. Relations between personality and coping: a meta-analysis. J Pers Soc Psychol. 2007 Dec;93(6):1080-107. doi: 10.1037/0022-3514.93.6.1080. PMID: 18072856.
Fennell ML, Feng Z, Clark MA, Mor V. (2010). Elderly Hispanics more likely to reside in poor-quality nursing homes. Health Aff (Millwood);29(1):65–73.
Fennell ML, Miller SC, Mor V. Facility effects on racial differences in nursing home quality of care. Am J Med Qual. 2000 Jul-Aug;15(4):174-81. doi: 10.1177/106286060001500408. Erratum in: Am J Med Qual 2000 Sep-Oct;15(5):206. PMID: 10948790.
Gruneir, A., Miller, S. C., Feng, Z., Intrator, O., & Mor, V. (2008). Relationship between state Medicaid policies, nursing home racial composition, and the risk of hospitalization for black and white residents. Health Services Research, 43(3), 869-881.
RTI International. (2003). RTI International Annual Report. Research Triangle Park, NC: RTI’s Office of Communications, Information and Marketing.
Konetzka RT, Werner RM. Disparities in long-term care: building equity into market-based reforms. Med Care Res Rev. 2009 Oct;66(5):491-521. doi: 10.1177/1077558709331813. Epub 2009 Feb 18. PMID: 19228634.
Li Y, Ye Z, Glance LG, Temkin-Greener H. Trends in family ratings of experience with care and racial disparities among Maryland nursing homes. Med Care. 2014 Jul;52(7):641-8. doi: 10.1097/MLR.0000000000000152. PMID: 24926712; PMCID: PMC4058647.
Li Y, Yin J, Cai X, Temkin-Greener J, Mukamel DB. Association of race and sites of care with pressure ulcers in high-risk nursing home residents. JAMA. 2011 Jul 13;306(2):179-86. doi: 10.1001/jama.2011.942. PMID: 21750295; PMCID: PMC4108174.
Mor V, Zinn J, Angelelli J, Teno JM, Miller SC. Driven to tiers: socioeconomic and racial disparities in the quality of nursing home care. Milbank Q. 2004;82(2):227-56. doi: 10.1111/j.0887-378X.2004.00309.x. PMID: 15225329; PMCID: PMC2690171.
Risk adjustment approach: Off
Conceptual model for risk adjustment: Off
-
-
-
5.1 Contributions Towards Advancing Health Equity
For all of the CoreQ surveys, we are examining scores for white and black residents. In nursing homes, overall scores for black residents are lower than those for white residents; however, black residents are disproportionately cared for in lower-quality facilities, which may influence the overall scores. We are continuing to examine these data. In AL, very few (<2%) respondents in the data we received were black. Thus, we are continuing to collect data from AL communities and are trying to over-sample communities with more black residents.
-
-
-
6.1.1 Current Status
Yes
6.1.3 Current Use(s)
6.1.4 Program Details
- AHCA/NCAL National Quality Award Program, https://www.ahcancal.org/Quality/National-Quality-Award-Program/Pages/default.aspx?utm_source=ahcancal_homepage&utm_medium=main_rotator&utm_campaign=QAITA. The AHCA/NCAL National Quality Awards Program is a progressive program that is based on the Baldrige Criteria for Performance Excellence. This nationa, The geographic area is the nation. Over 1,700 entities or facilities have received an award.
- LTC Trend Tracker, https://www.ahcancal.org/Data-and-Research/LTC-Trend-Tracker/Pages/default.aspx. The program allows skilled nursing and assisted living organizations to benchmark personal metrics to those of their peers and examine ongoing quality. Skilled nursing and assisted living facilities across the United States utilize LTC Trend Tracker. About 15,266 skilled nursing facilities and 9,280 A, The level of analysis is the facility level. The care settings are skilled nursing and assisted living facilities.
- Residential Care Quality Metrics Program, https://www.oregon.gov/odhs/licensing/community-based-care/pages/quality-metrics.aspx#requirements. The purpose is to improve quality of service and give consumers and facilities a means of comparison. Oregon; there are 577 accountable entities who serve about 30,145 residents. State-level analysis. The care setting is assisted living facilities.
- Assisted Living Report Card/DHS Aging and Adult Services Division (AASD), https://mn.gov/dhs/partners-and-providers/news-initiatives-reports-workgroups/aging/assisted-living-report-card/assisted-living-reports.jsp. To measure and report on the quality of individual assisted living settings for housing and services paid for privately and through public programs. Minnesota; there are 156 accountable entities who serve about 5,164 residents. State-level analysis. The care setting is assisted living facilities.
-
6.2.1 Actions of Measured Entities to Improve Performance
Improving performance relies on testing changes and benchmarking. Frequent data collection is a necessary step to enhance and maximize quality improvement, because data collected during tests of change provides the insight needed to determine the best path forward. Benchmarking is a process used to measure an organization’s quality and performance; it plays a significant role in identifying patterns, providing context, and guiding decision-making.
The CoreQ satisfaction measure allows assisted living facilities to measure the impact of tests of change and to benchmark their performance relative to other facilities. Specifically, facilities can increase staffing and/or improve staff training and measure the impact using CoreQ. Similarly, reductions in adverse events, such as falls and hospitalizations, increase residents’ ratings of the care received and overall satisfaction. Finally, facilities can understand and address the needs and wants of residents, such as certain activities or food, to increase their willingness to recommend the facility and improve CoreQ performance.
The actions needed to improve performance are not difficult once a process or plan for improvement is developed (e.g., Quality Assurance/Performance Improvement (QAPI)). Measured entities can overcome difficulties by monitoring data and results; developing a feedback and monitoring system to sustain continuous improvement helps providers preserve the gains of the quality improvement effort.
6.2.2 Feedback on Measure Performance
The CoreQ measure for assisted living has elevated the resident and family voice and helps guide consumer choice as another way for potential residents to review the quality of a care facility. Specifically, the CoreQ measure has been independently tested as a valid and reliable measure of customer satisfaction. The CoreQ is a short survey with three to four questions, which reduces response burden on residents and allows organizations to benchmark their results with consistent questions and a consistent response scale. Satisfaction vendors and providers have particularly appreciated how easy it is to integrate the CoreQ questions into their satisfaction surveys. They believe the short length relative to other survey tools, such as HCAHPS, helps increase and maintain high response rates.
AHCA/NCAL developed LTC Trend Tracker, a web-based tool that enables long term and post-acute care providers, including assisted living, to access key information that can help their organization succeed. The CoreQ report and upload feature within LTC Trend Tracker includes an API (application programming interface) for vendors performing the survey on behalf of ALs to upload data, so that the aggregate CoreQ results will be available to providers. Given that LTC Trend Tracker is the leading method for NCAL AL members to profile their quality and other data, the incorporation of CoreQ into LTC Trend Tracker means it will immediately become the de facto standard for customer satisfaction surveys for the AL industry. AHCA/NCAL continues to work with customer satisfaction vendors to promote CoreQ and receives requests for vendors to be added to the list of those incorporating CoreQ. Currently, there are over 40 vendors across the nation who can administer the CoreQ survey.
We also are working with states who require satisfaction measurement to incorporate CoreQ into their process. AHCA/NCAL has a presence in each state, and our state affiliates continue to promote the use of the CoreQ.
6.2.3 Consideration of Measure Feedback
As noted above, the CoreQ report and upload feature within LTC Trend Tracker includes an API for vendors performing the survey on behalf of ALs to upload data, so that aggregate CoreQ results are available to providers. AHCA/NCAL continues to work with customer satisfaction vendors to promote CoreQ and receives requests from vendors to be added to the list of those incorporating CoreQ.
Among providers and vendors, we receive feedback during committee and workgroup meetings. For feedback on LTC Trend Tracker, we scope out the cost and feasibility of suggested enhancements. For example, we added a more graphical user interface option for the API, in addition to the original command line interface that was more technical, based on feedback from vendors.
For some of the feedback we receive, we use it as an opportunity to educate about best practices in survey collection and administration. For example, some vendors and providers inquire about administering CoreQ over the phone or other mixed modes of collection. In this instance, we caution vendors and providers about possible response or interviewer bias and recommend using written surveys as the primary method because it has been tested and shown to be reliable and valid.
6.2.4 Progress on Improvement
LTC Trend Tracker is a web-based tool that enables long term and post-acute care providers, including assisted living, to access key information that can help their organizations succeed. AL facilities report CoreQ performance results in LTC Trend Tracker for benchmarking and state comparisons. AHCA/NCAL monitored the impact of the COVID-19 pandemic on satisfaction trends among AL residents nationally. The data show:
- In 2020Q1, satisfaction rates were 86.3%, representing 255 AL facilities.
- In 2021Q1, satisfaction rates decreased to 80.3%, representing 140 AL facilities. By the end of 2021, satisfaction rates dropped to 76.4%, representing 227 AL facilities.
- In 2024Q3, satisfaction rates increased to 81.0%, representing 200 AL facilities.
Monitoring satisfaction rates during and after the pandemic helped facilities and operators benchmark and trend their COVID-19-related performance.
6.2.5 Unexpected Findings
There were no negative consequences to individuals or populations identified during testing, and no evidence of unintended negative consequences has been reported since the implementation of the CoreQ: AL Family Satisfaction questionnaire or the measure calculated from it. This is consistent with satisfaction surveys in general in nursing facilities. Many other satisfaction surveys are used in AL facilities with no reported unintended consequences to patients or their families.
There are no potentially serious physical, psychological, social, legal, or other risks for patients. However, in some cases the satisfaction questionnaire can highlight poor care for dissatisfied patients, and this may make them further dissatisfied.
-
Measure specifications:
Numerator and denominator: From recent personal experience, it matters who is included as family or responsible party. Hopefully there is a pragmatic way of identifying those with a legitimate reason to be included. Friends of those in care should not be included unless they are part of the care plan. I am assuming this is defined well enough in the exclusions.
Feasibility: However the survey is administered, it is important to differentiate it in some way from facility-generated surveys. Hopefully this is part of the protocol.