This intermediate outcome eCQM captures the proportion of ED visits, among patients of all ages, in which the patient experiences emergency care access barriers during a one-year performance period.
-
-
1.5 Measure Type
1.6 Composite Measure: No
1.7 Electronic Clinical Quality Measure (eCQM)
1.8 Level Of Analysis
1.9 Care Setting
1.10 Measure Rationale
The Emergency Care Capacity and Quality (ECCQ) Electronic Clinical Quality Measure (eCQM) is a de novo intermediate clinical outcome measure that captures variation in the capacity and quality of emergency care to support hospital quality improvement and improve patient outcomes. The score will report the proportion of quality gaps in access at the facility level (emergency department) for intended use in an accountability program through which it may be publicly reported (i.e., the Hospital Outpatient Quality Reporting (HOQR) Program and the Rural Emergency Hospital Quality Reporting (REHQR) Program). Limitations in capacity and quality of emergency care have been shown to be associated with harm, such as increases in mortality, delays in care, preventable errors, poor patient experience and staff burnout.
The measure aims to reduce patient harm and improve outcomes for patients requiring emergency care in an emergency department (ED). Emergency care capacity is inclusive of several concepts pertaining to boarding and crowding in an ED. This measure aligns with incentives to promote improved care both in EDs and the broader health system to help identify where patients do not receive equitable access to emergency care.
The measure captures established outcome metrics already in use that quantify capacity and access to care in an ED, such as ED arrival and departure times. The ECCQ eCQM aims to positively impact millions of patients who seek treatment in the ED and to help address long-standing disparities in emergency care, including for patients with mental health diagnoses. Additional disparities in ED care are well documented for patients of older age and by race and ethnicity, primary language, and insurance status; such documented disparities include significantly longer ED wait times, higher left-without-being-seen rates, longer boarding times, and longer total length of stay in the ED.
1.11 Measure Webpage
1.20 Testing Data Sources
1.25 Data Sources
Since ECCQ is an eCQM, it is specified in a standard electronic format and uses data electronically extracted from electronic health records (EHRs). Facilities are required to electronically report eCQMs using EHR data. All data elements in the measure are defined fields in electronic sources.
-
1.14 Numerator
The numerator is comprised of any ED visit in the denominator where the patient experiences any one of the following:
- The patient waited longer than 1 hour to be placed in a treatment room or dedicated treatment area that allows for audiovisual privacy during history-taking and physical examination, or
- The patient left the ED without being evaluated by a physician/advanced practice nurse/physician’s assistant, or
- The patient boarded in the ED for longer than 4 hours, or
- The patient had an ED length of stay (LOS) (time from ED arrival to ED physical departure as defined by the ED depart timestamp) of longer than 8 hours.
ED observation stays, defined as an observation encounter where the patient remains physically in an area under control of the emergency department and under the care of an emergency department clinician inclusive of observation in a hospital bed, are excluded from criteria #3 (boarding), and #4 (ED LOS).
1.14a Numerator Details
Specific codes required to calculate the numerator are outlined in the attached value set data dictionary and eCQM package (Quality Data Model [QDM] output). We highlight that the numerator specifications for the measure differ slightly between the version of the ECCQ measure considered for use in the Hospital Outpatient Quality Reporting (HOQR) program and the version considered for use among rural emergency hospitals (REHs) in the REHQR program, specifically in relation to numerator criterion #3. Numerator criterion #3a (inpatient boarding) corresponds to specifications for hospitals captured in the HOQR program, and numerator criterion #3b (transfer boarding) corresponds to specifications for rural emergency hospitals captured in the REHQR program.
The numerator is comprised of any ED visit in the denominator where the patient experiences any one of the following:
- The patient waited longer than 1 hour to be placed in a treatment room or dedicated treatment area that allows for audiovisual privacy during history-taking and physical examination. This measure component is calculated at the encounter level by subtracting “First ED Arrival Time” from “First ED Roomed Time” and then flagging as a numerator event if >60 minutes.
- The patient left the ED without being evaluated by a physician/advanced practice nurse/physician’s assistant. This measure component is calculated at the encounter level using “Left Without Being Seen”, usually identified through the ED Disposition.
- The patient boarded in the ED for longer than 4 hours.
  a. Boarding time is typically defined as the time from Decision to Admit (order) to ED departure for admitted patients. This measure component is calculated at the encounter level by subtracting “Decision to Admit” order time from “ED Departure Time” for visits with the ED disposition of “Admitted”, and then flagging as a numerator event if >240 minutes. ED observation stays (defined below) are excluded from this numerator component.
  b. For rural emergency hospitals, boarding time is defined as the time from Decision to Transfer (order) to the ED departure for transferred patients, since rural emergency hospitals do not provide inpatient services. This measure component is calculated at the encounter level by subtracting “Decision to Transfer” order time from “ED Departure Time” for visits with the ED disposition of “Transferred” (to an acute care hospital), and then flagging as a numerator event if >240 minutes. ED observation stays (defined below) are excluded from this numerator component.
- The patient had an ED length of stay (LOS) (time from ED arrival to ED physical departure as defined by the ED depart timestamp) of longer than 8 hours. This measure component is calculated at the encounter level by subtracting time from ED arrival to ED departure and flagging as a numerator event if >480 minutes. ED observation stays (defined below) are excluded from this numerator component.
ED observation stays, defined as an observation encounter where the patient remains physically in an area under control of the emergency department and under the care of an emergency department clinician, inclusive of observation in a hospital bed, are excluded from criteria #3 (boarding) and #4 (ED LOS). To clarify, patients who have a ‘decision to admit’ after an ED observation stay are excluded from criterion #3a (inpatient boarding) calculations.
If an encounter includes any one of the four numerator events, it is counted in the numerator. Numerator events are not mutually exclusive; however, each encounter contributes to the numerator at most once.
Numerator timing thresholds were determined using a combination of information from the literature, external benchmarks developed by third parties (e.g., The Joint Commission) and expert input from the technical expert panel (TEP).
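For illustration only, the following Python sketch shows one way the encounter-level flagging logic described above could be implemented. The field names (e.g., ed_arrival_time, decision_to_admit_time, disposition) are hypothetical stand-ins for the QDM data elements in the attached eCQM package, not the specified element names.

```python
from datetime import timedelta

def is_numerator_event(enc: dict, transfer_boarding: bool = False) -> bool:
    """Flag an ED encounter as an ECCQ numerator event (illustrative sketch).

    Set transfer_boarding=True for the REHQR version (criterion #3b);
    leave it False for the HOQR version (criterion #3a).
    """
    arrival = enc["ed_arrival_time"]          # datetime of ED arrival
    departure = enc["ed_departure_time"]      # datetime of physical ED departure
    roomed = enc.get("first_ed_roomed_time")  # datetime first placed in a treatment space
    observation_stay = enc.get("ed_observation_stay", False)

    # Criterion 1: waited > 60 minutes from ED arrival to placement in a treatment space.
    waited_too_long = roomed is not None and (roomed - arrival) > timedelta(minutes=60)

    # Criterion 2: left without being evaluated, taken from the ED disposition.
    lwbs = enc.get("disposition") == "Left Without Being Seen"

    # Criterion 3: boarding > 240 minutes; observation stays are excluded.
    if transfer_boarding:   # 3b: decision to transfer -> ED departure, disposition "Transferred"
        decision = enc.get("decision_to_transfer_time")
        eligible_disposition = enc.get("disposition") == "Transferred"
    else:                   # 3a: decision to admit -> ED departure, disposition "Admitted"
        decision = enc.get("decision_to_admit_time")
        eligible_disposition = enc.get("disposition") == "Admitted"
    boarded_too_long = (
        not observation_stay
        and eligible_disposition
        and decision is not None
        and (departure - decision) > timedelta(minutes=240)
    )

    # Criterion 4: ED length of stay > 480 minutes; observation stays are excluded.
    los_too_long = not observation_stay and (departure - arrival) > timedelta(minutes=480)

    # An encounter contributes to the numerator at most once, no matter how many
    # criteria it meets.
    return waited_too_long or lwbs or boarded_too_long or los_too_long
```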
-
1.15 Denominator
The denominator includes all ED visits associated with patients of all ages, for all payers, during a 12-month performance period. Patients can have multiple visits during a performance period; each visit is eligible to contribute to the outcome.
1.15a Denominator Details
The denominator includes all ED encounters (visits) associated with patients of all ages, for all payers, during the performance period. Patients can have multiple encounters during a performance period; each encounter is included in the denominator.
1.15d Age Group
- Children (0-17 years)
- Adults (18-64 years)
- Older Adults (65 years and older)
-
1.15b Denominator Exclusions
None
1.15c Denominator Exclusions Details
None
-
1.12 Attach MAT Output
1.13 Attach Data Dictionary
1.13a Data dictionary not attached: No
1.16 Type of Score
1.17 Measure Score Interpretation: Better quality = Lower score
1.18 Calculation of Measure Score
1. At the ED level, identify all encounters that occur during the performance period.
2. From the encounters in step 1, identify which encounters meet any of the numerator criteria described in Section 1.14a.
3. Divide the number of encounters that meet any numerator criterion by the number of encounters identified in step 1.
4. Apply volume standardization at the ED level using z-scores (Venkatesh et al. 2021) by ED volume strata. ED volume strata are defined in ED visit volume bands of 20,000 visits. Each ED belongs to only one volume stratum. The volume-adjusted z-score is calculated as:
   (ED raw score – volume stratum mean) / volume stratum standard deviation
5. Multiply the volume-adjusted z-score from step 4 by the national standard deviation of ED outcome rates, then add the national mean outcome rate:
   z-score * national standard deviation + national mean
6. If a hospital (CCN) has more than one ED, combine individual ED scores as a weighted average.
For non-rural emergency hospitals, the measure uses volume standardization in units of 20,000 annual ED visits. These volume bands are based on prior literature and actual use within the ED measurement/quality community (such as the ED Benchmarking Alliance) (Augustine 2022). Volume standardization is used to address the case mix differences between EDs (Welch et al. 2012), and volume standardization offers the simplest approach (approved by the industry) without the complexities and unintended consequence of statistical modeling. This is aligned with American College of Emergency Physicians (ACEP) measure approach in the MIPS Program to measuring patient flow in the ED setting.
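As a rough sketch only, the Python below follows steps 1 through 6 above: compute each ED's raw proportion, z-score it within its 20,000-visit volume stratum, rescale using the national mean and standard deviation, and, if a CCN has more than one ED, combine the ED scores as a weighted average. The use of the population standard deviation, the handling of single-ED strata, and the use of encounter volume as the weight in the CCN average are assumptions made for illustration, not part of the specification.

```python
import statistics

def raw_ed_score(numerator_flags: list[bool]) -> float:
    """Steps 1-3: proportion of an ED's encounters flagged as numerator events."""
    return sum(numerator_flags) / len(numerator_flags)

def volume_standardized_scores(raw_scores: dict[str, float],
                               annual_volume: dict[str, int],
                               band_size: int = 20_000) -> dict[str, float]:
    """Steps 4-5: z-score within 20,000-visit volume strata, then rescale to the
    national distribution of ED outcome rates."""
    stratum = {ed: annual_volume[ed] // band_size for ed in raw_scores}  # one stratum per ED
    national_mean = statistics.mean(raw_scores.values())
    national_sd = statistics.pstdev(raw_scores.values())

    standardized = {}
    for ed, score in raw_scores.items():
        peers = [s for e, s in raw_scores.items() if stratum[e] == stratum[ed]]
        peer_sd = statistics.pstdev(peers)
        # Strata with a single ED have zero spread; a production implementation
        # would handle this case explicitly.
        z = (score - statistics.mean(peers)) / peer_sd if peer_sd else 0.0
        standardized[ed] = z * national_sd + national_mean
    return standardized

def ccn_score(ed_scores: dict[str, float], ed_volumes: dict[str, int]) -> float:
    """Step 6: combine a hospital's ED scores as a (volume-)weighted average."""
    total = sum(ed_volumes[ed] for ed in ed_scores)
    return sum(ed_scores[ed] * ed_volumes[ed] for ed in ed_scores) / total
```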
Given that rural emergency hospitals are low volume, volume standardization is not applied to calculate the ECCQ measure for rural emergency hospitals. The following steps are used to calculate the measure for rural emergency hospitals:
1. At the ED level, identify all encounters that occur during the performance period.
2. From the encounters in step 1, identify which encounters meet at least one of the numerator criteria described in Section 1.14a.
3. Divide the number of encounters that meet any numerator criterion by the number of encounters identified in step 1.
4. If a hospital (CCN) has more than one ED, combine individual ED scores as a weighted average.
References
Augustine, James J. 2022. “Data Registries in Emergency Care.” Clinical Emergency Data Registry (CEDR). ACEP. June 13, 2022. https://www.acep.org/cedr/newsroom/spring-2022/data_registries_in_emergency_care.
Venkatesh, Arjun, Shashank Ravi, Craig Rothenberg, Jeremiah Kinsman, Jean Sun, Pawan Goyal, James Augustine, and Stephen K. Epstein. 2021. “Fair Play: Application of Normalized Scoring to Emergency Department Throughput Quality Measures in a National Registry.” Annals of Emergency Medicine 77 (5): 501–10. https://doi.org/10.1016/j.annemergmed.2020.10.021.
Welch, Shari J., James J. Augustine, Li Dong, Lucy A. Savitz, Gregory Snow, and Brent C. James. 2012. “Volume-Related Differences in Emergency Department Performance.” The Joint Commission Journal on Quality and Patient Safety 38 (9): 395–402. https://doi.org/10.1016/s1553-7250(12)38050-1.
1.19 Measure Stratification Details
Value sets required to calculate the stratified measure are available in the value set data dictionary and eCQM package attachment (QDM output). The four strata for this measure are described below.
Four strata of the measure will be calculated, stratified by age (18+/<18) and mental health diagnoses (with, and without).
The principal diagnosis (first listed diagnosis at ED discharge) will be used to define strata inclusion. For this measure's purpose, mental health diagnoses do not include substance use disorder diagnoses.
Stratification by age will be reported for patients less than 18 years of age and patients 18 years of age and older, for both mental health and non-mental health cohorts.
Total score and score for the following strata will be reported:
- Stratification 1: all encounters for patients aged less than 18 years seen in the ED who do not have an ED encounter principal diagnosis consistent with psychiatric/mental health diagnoses. Encounters for patients who have an ED encounter principal diagnosis consistent with substance use disorders will be included in this stratification.
- Stratification 2: all encounters for patients aged 18 years and older seen in the ED who do not have an ED encounter principal diagnosis consistent with psychiatric/mental health diagnoses. Encounters for patients who have an ED encounter principal diagnosis consistent with substance use disorders will be included in this stratification.
- Stratification 3: all encounters for patients aged less than 18 years seen in the ED who have an ED encounter principal diagnosis consistent with psychiatric/mental health diagnoses.
- Stratification 4: all encounters for patients aged 18 years and older seen in the ED who have an ED encounter principal diagnosis consistent with psychiatric/mental health diagnoses.
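Purely as an illustration of the stratum assignment logic described above (the authoritative definitions are the value sets in the eCQM package), a minimal Python sketch:

```python
def assign_stratum(age_years: int, principal_dx_is_mental_health: bool) -> int:
    """Assign an ED encounter to one of the four reporting strata (sketch).

    Mental health status is based on the principal (first-listed) ED discharge
    diagnosis; substance use disorder diagnoses count as non-mental-health,
    per the stratification description above.
    """
    if age_years < 18:
        return 3 if principal_dx_is_mental_health else 1   # strata 1 and 3: under 18
    return 4 if principal_dx_is_mental_health else 2        # strata 2 and 4: 18 and older
```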
1.26 Minimum Sample Size
This measure does not have a minimum sample size to calculate the performance score.
-
7.1 Supplemental Attachment
-
Steward: Centers for Medicare & Medicaid Services
Steward Organization POC Email
Steward Organization URL
Steward Organization Copyright
Not applicable
Measure Developer Secondary Point Of Contact: Oscar Gonzalez
Acumen, LLC
500 Airport Blvd., Suite 100
Burlingame, CA 94010
United States
Measure Developer Secondary Point Of Contact Email
-
-
-
2.1 Attach Logic Model
2.2 Evidence of Measure Importance
Please see the supplemental document in Section 7.1 (4625e-section-7.1-supplemental-attachment.docx) for all references presented throughout Section 2.2 and Section 2.3.
EDs in the United States play a crucial role in providing immediate medical care to individuals who require urgent attention for a wide range of injuries, illnesses, and medical emergencies. EDs also provide a safety-net for care in most communities, serving as an open door for a broad range of services, including trauma care, diagnostic services, procedures, coordination and referrals, public health and disaster response, and patient education and coordination of care. The ED is also a critical hub in the health system, connecting care and services between a broad array of non-hospital settings and other hospital settings, such as inpatient care and the transfer of patients to other facilities. Because of this larger health system role, focusing on variation in ED care has impacts beyond the ED itself.
There are long-standing and worldwide concerns about parameters that impact the quality and timeliness of care in the ED, including interactions between patients admitted to the hospital from the ED, care quality in the ED, and hospital capacity at large. For example, when a patient is deemed to require inpatient care but there are no inpatient beds available, that patient may remain in the ED until a bed becomes available (this patient is now “boarding” in the ED) (ACEP 2018). Additionally, when a patient is deemed to require a transfer, the patient may remain in the ED while waiting to be transferred (this patient is now “transfer boarding” in the ED) (Mohr et al. 2021). ED and transfer boarding and crowding have been shown to be associated with poor patient outcomes, including increased mortality (in ED and non-ED patients) (Burgess, Ray‐Barruel, and Kynoch 2021; Hsuan et al. 2022; Kelen et al. 2021; Reznek et al. 2018; Roussel et al. 2023; Singer et al. 2011), delays in needed care (e.g., delivery of antibiotics) (Gaieski et al. 2017), and negative patient (Reznek et al. 2021) and staff experiences (impacting staff burnout and turnover) (Loke et al. 2023). Importantly, there are also disparities in boarding, with high-acuity Black patients and patients with behavioral health diagnoses experiencing longer boarding times compared with White patients (Ruffo et al. 2022). Although ED boarding is widely reported as a crisis in the lay press (Wan 2022) and by professional associations (ACEP, n.d.), and is supported by data from benchmarking groups (ACEP 2023), there are currently no national measures available to assess ED boarding and transfer boarding; stakeholders have even appealed to the President of the United States for national action to address the problem (ACEP 2022). At the same time, many interventions have been shown to be effective in addressing ED and transfer boarding and crowding (see Table S1 in Section 7.1). There is evidence that each of the numerator components of the ECCQ eCQM is associated with patient harm. Each is described in more detail below.
Wait times: Studies have shown that wait times, which represent delays in timely care, are associated with patient harm. One retrospective study across multiple urban EDs in Canada examined the association between wait times and harm (72-hour ED re-visits) and found that, among other input metrics, mean ED waiting time (defined as ED arrival to physician assessment) had the strongest association with harm (McRae et al. 2022). In addition, a single-site study using data gathered prior to the pandemic showed that the odds of a patient safety event (adverse event, preventable adverse event, or near miss) increased with increasing ED waiting time (time from arrival to being seen by a triage nurse).
Leaving prior to evaluation: Based on 2022 Emergency Department Benchmarking Alliance data, 5.0 percent of patients left the ED before their treatment was complete; applied nationally, this means that about 7 million patients did not receive the care they needed in the ED. Single-ED studies have shown that about half of patients who leave the ED without being seen have a subsequent encounter with the healthcare system; and, of those, more than half (about 68 percent) return to an ED or are admitted to the hospital (Roby et al. 2021). In addition, one study (Hodgins, Moore, and Little 2023) found that across all patients, 12.6 percent left the ED without being seen; the rate was 30 percent for higher-acuity patients.
ED Boarding: ED (inpatient) boarding has been shown to be associated with a wide range of harm, from delays in treatment to increases in mortality, including among patients already admitted to the hospital (Hsuan et al. 2022). ED boarding also negatively impacts patient experience, as patients are often boarded by being held in hallway beds that lack privacy, and can contribute to staff burnout (Kelen et al. 2021). Because of the associated harms, the ECCQ eCQM TEP suggested that boarding itself should be seen as a “never event” (Yale/CORE 2023). For example, studies have shown a positive association between boarding time and patient safety events, including adverse events, preventable events, and near misses (Alsabri et al. 2020), in addition to delays in care, such as antibiotic administration (Gaieski et al. 2017). While these and other studies have shown that longer boarding times are associated with harm, studies that focus on mortality have not consistently shown a significant association. For example, one systematic review found that only 4 of 11 studies showed a clear relationship between boarding and mortality (Burgess, Ray‐Barruel, and Kynoch 2022).
The evidence suggests there is an impact of boarding on outcomes in critically ill patients, but the evidence for an impact specifically on mortality is inconsistent. For example, a 2020 review article summarized studies related to ED boarding and critically ill patients and identified more than ten studies showing worse outcomes for boarded patients, including increased duration of mechanical ventilation, worsening organ dysfunction, and lower probability of neurologic recovery in stroke patients (Mohr et al. 2020). In addition, the review identified studies showing higher in-hospital mortality for patients boarded for more than six hours (Chalfin et al. 2007) and a positive association between Intensive Care Unit (ICU) mortality and boarding duration (Cardoso et al. 2011). Other studies, however, found impacts on delays in administration of medications without a clear impact on mortality (Lykins V et al. 2021). A 2022 study suggested that selection bias may have contributed to an overestimation of the impact of boarding on mortality (Gardner et al. 2022).
Transfer Boarding: While less well studied than inpatient boarding, transfer boarding (patients remaining physically in the ED after the decision to transfer has been made) can have impacts on ED crowding at the originating hospital similar to those of inpatient boarding. For example, in one study in the Veterans Administration (VA) system, the number of patients without mental health conditions awaiting transfer delayed the time to be seen by a provider for new patients more than did other discharge disposition types (such as patients awaiting admission or discharge) (Mohr et al. 2022). Each additional transfer patient also affected other ED metrics, such as time to electrocardiogram, time to labs, time to radiography, and time to discharge, more than other disposition types. Rural and smaller hospitals would be expected to be more affected by transfer boarding because of their higher rates of transfers compared with larger and non-rural hospitals. Gaps in the quality of interhospital transfer have also been shown to impact patient outcomes (Usher et al. 2018).
ED length of stay: The association between ED length of stay and mortality is unclear. A 2022 systematic review identified 19 studies that examined the relationship between ED LOS and in-hospital mortality and found that 10 of the 19 studies did not find a significant relationship (Burgess, Ray‐Barruel, and Kynoch 2022); five studies showed an increased risk of mortality with longer ED length of stay (studies included a range of thresholds, including 4, 6, 8, 12, and 24 hours).
In terms of harms other than mortality, a 2021 systematic review (Jones, Mountain, and Forero 2021) concluded that ED length of stay (and total ED occupancy) had the strongest evidence for association with worse timeliness of care (e.g., pain relief, medication administration); and, likewise, a 2023 systematic review identified two studies that found that ED length of stay was the strongest predictor of delays in treatment in the ED (Darraj et al. 2023). A 2023 study that examined the impact of the United Kingdom 4-hour LOS standard (Nuffield Trust 2024) found that this policy resulted in a 14 percent relative decrease in 30-day all-cause mortality (Gruber, Hoe, and Stoye 2021).
A 2022 systematic review identified several studies that support an 8-hour threshold (Burgess, Ray‐Barruel, and Kynoch 2022). Akhtar et al. (2015) found that patients with acute stroke were more likely to experience complications and more likely to die in the hospital if they spent more than 8 hours in the ED. Berg et al. (2019) found that lower-acuity patients (triage acuity levels 3 to 5) with an ED length of stay of at least 8 hours who were discharged from the ED had higher odds of 10-day mortality compared with patients who had a stay of less than 2 hours. Dinh et al. (2020) found a significantly higher risk of all-cause 30-day mortality for patients with an ED length of stay greater than 4 hours. Mitra et al. (2012) found higher odds of death for “general medical” patients with an ED length of stay greater than 8 hours after adjusting for age, gender, and acuity.
More information on the link between structure, process, the intermediate outcome, and the desired outcome, can be found in Table S1 (see Section 7.1), which describes interventions that measured entities can implement to improve the components of this measure score.
-
2.3 Anticipated Impact
Impact on Care and Health Outcomes
The ECCQ eCQM has the potential for a wide range of impacts both within the ED itself and across the larger health system. During measure development, the TEP emphasized that it is important for the measure to encourage the hospital system to work together to improve patient outcomes and satisfaction. The logic model (see Section 2.1) shows the potential pathways by which public reporting of the ECCQ eCQM could result in changes in inputs by EDs as well as hospital wide, resulting in improvements in the four components of the ECCQ eCQM. Hospital wide/system changes will likely play more of a role in improvements in the ED LOS and boarding components, while ED-level changes are more likely to result in improvement in the proportion of visits without a medical screening by a qualified professional and for time to be placed in a treatment space. However, there are opportunities for both EDs and the wider hospital system to contribute to improvement in all four components.
Table S1 (see supplemental attachment in Section 7.1) outlines the components of the measure, examples of changes in care shown in the literature to be effective, and the resulting anticipated changes in intermediate and long-term outcomes. Because the measure’s numerator includes any one of four outcomes, any changes in care will depend on which component of the measure a hospital chooses to focus on following root-cause or other analyses. Specific impacts on care will likely differ based on the existing resources and patient mix at each hospital. Importantly, a study that examined hospital characteristics associated with high- and lower-performing hospitals found that better performance on ED metrics was associated with organizational characteristics, not with specific types of interventions, and that no specific strategies were used consistently across high-performing hospitals. Specific strategies were also not consistently underutilized within lower-performing hospitals (Chang et al. 2018). We also note that Table S1 provides only examples and is not a comprehensive collection of all potential changes to care or best practices. Because this measure combines multiple metrics, it attempts to encourage hospitals to make changes across all four components to improve patient outcomes by providing component-level and ED-level results.
Impact on Costs
A literature search performed during the development of the ECCQ eCQM uncovered few studies that examined how improvement on these metrics affects costs. A 2020 systematic review (Canellas et al. 2021) identified only two relatively recent studies. A 2015 study estimated about $4 million in savings at a single hospital if mean boarding times were reduced to below 60 minutes (Dyas et al. 2015), and a 2017 single-site analysis showed that boarded patients cost the hospital twice as much as an inpatient bed and five times as much as another alternative (an admissions holding-unit bed) (Schreyer and Marin 2017). These studies reached similar conclusions to an older 2010 study showing potential revenue gains of $2.6 to $3.7 million at a single institution by reducing boarding time through managing hospital capacity (reducing elective non-ED admissions during high demand) (Pines et al. 2011). A more recent 2022 analysis at a single institution (an urban safety-net trauma center) estimated annual lost revenue of about $1.7 million due to boarding of patients in the ED (Straube et al. 2022). Revenue loss due to patients leaving before being seen or due to ambulance diversion (both of which can be caused by boarding) has also been quantified; an older study estimated losses of more than $3 million in one suburban teaching hospital. At the national level, researchers have shown a positive correlation between hospital spending, as captured by the publicly reported Medicare Spending Per Beneficiary metric, and high scores (worse performance) on measures of ED boarding and crowding (Baloescu et al. 2021).
Net benefit
When patients face quality gaps in access to emergency care, the resulting impacts are far reaching and can affect other patients and providers across the entire hospital system. The resulting outcomes include poor patient experience, delays in care, increased morbidity and mortality, disparities in care, and provider burnout. Any measure can result in unintended consequences and therefore there is a need to monitor such impacts following measure implementation. However, this ECCQ eCQM captures multiple facets of quality gaps in access across the ED using a threshold approach which addresses past measurement challenges and has the potential to positively impact patient outcomes and address long-standing disparities in care.
2.5 Health Care Quality Landscape
There are two existing measures, Median Time from ED Arrival to ED Departure for Discharged ED Patients (OP-18) and Left Without Being Seen (OP-22), both in CMS’s HOQR and REHQR programs, that overlap with the ECCQ measure.
The ECCQ eCQM’s outcome is broader than both OP-18 and OP-22, neither of which captures ED boarding, a critical component of the ECCQ measure (please see Section 2.2, Importance). ED boarding is not captured by any other currently publicly reported measure. The ECCQ measure also captures the outcome of waiting (time from arrival to first room), which is likewise not captured by any currently publicly reported measure.
The outcome calculation for OP-18 is based on the median, whereas the overlapping ECCQ eCQM component is based on a threshold of 8 hours. Use of a median can mask poor performance when the distribution is skewed to the right. Furthermore, by capturing four different components in one measure, the ECCQ eCQM provides a window into broad aspects of ED access quality gaps with a single measure. In addition, neither OP-18 nor OP-22 is an eCQM; the ECCQ measure therefore improves upon these existing measures, and hospitals will receive data on the individual components of the ECCQ eCQM.
2.6 Meaningfulness to Target Population
Below we cite information Yale/CORE (the previous measure developer) obtained directly from patients, in addition to information from the literature, the lay press, and feedback gathered from patients by third parties.
Yale/CORE has gathered feedback from patients about the importance of this measure from two sources: public comment during measure development and a formal patient workgroup. Through public comment, more than 300 of the 677 total comments received were detailed testimonies of patients and caregivers sharing stories of harms or near harms, prolonged wait times, and experiences that affected the quality of care they received in the ED. The themes expressed repeatedly by patients and caregivers reaffirm and support that this measure is meaningful and produces valuable information for making care decisions. Commenters expressed fear about needing to seek emergency care and about the outcome when they inevitably need to. Nearly all patients and caregivers who commented support this measure.
Yale/CORE recruited an eight-member Patient and Family Engagement (PFE) Work Group to obtain feedback from the patient and caregiver perspective on their experiences with emergency care, including what is most important and impactful to quality care. Members were recruited via a CMS contractor, Rainmakers Strategic Solutions LLC, which identified candidates through its Person & Family Engagement Network. Recruitment started in April 2023 and ended in June 2023. All members signed conflict of interest agreements. Each member of the workgroup had personal emergency department healthcare experience, as a patient, a caregiver, or both. Patients and families in the PFE Work Group shared their experiences and frustrations with respect to long wait times to be seen by a provider (up to 10 hours in one case), long wait times to be transferred, and gaps in discharge processes. Overall, patients and family caregivers were supportive of a measure that would capture the metrics within the ECCQ measure, as those metrics aligned with their experiences and frustrations. They supported the measure because its criteria are important to them as patients and caregivers and directly impact the quality of emergency care they may receive.
We asked PFE members to express their support for this measure through a poll that asked for their level of agreement with two statements:
- The Emergency Care Capacity and Quality eCQM is easy to understand and useful for decision making, and
- The Emergency Care Capacity and Quality eCQM is meaningful and produces information that is valuable in making care decisions.
Response choices were strongly agree, agree, disagree, and strongly disagree. Two PFE members responded to the survey; one strongly agreed, and the other agreed, with both statements.
-
2.4 Performance Gap
The two versions of the ECCQ measure differ in one component (#3, boarding): the HOQR version captures inpatient boarding, while the REH version captures transfer boarding because REH facilities do not have inpatient capacity. Because a relatively low proportion of total encounters experience inpatient boarding or transfer boarding, we expect the difference in this one numerator component to have a relatively small impact on the range of measure scores. Therefore, we provide performance gap evidence from the HOQR version of the measure as also applicable to the REHQR version of the measure.
We used data from two electronic health record (EHR) datasets (described in detail in sections 4.1.1-4.1.4) to examine the distribution of measure scores for the overall HOQR ECCQ eCQM and for each of the four strata.
Table 2.1 (see 4625e-section-2.4-performance-gap-results.pdf in Section 2.4a) shows that there is a wide range of unadjusted measure scores. For example, for Dataset A, 2-years (N=40 EDs), measure scores for the overall measure ranged from 2.91% to 55.91%, with a mean of 26.60% and a median of 30.36%; the 25th percentile was 10.36% and the 75th percentile was 39.96%. Measure score ranges are similar for the other strata but are slightly wider for the adult mental health stratum and somewhat narrower for the pediatric non-mental health stratum.
We also examined overall measure scores for four rural hospitals (Dataset A, 2023) (see Table 2.2 in Section 2.4a). We found a wide range of unadjusted HOQR ECCQ eCQM scores among rural hospitals (2.9% to 10.6%), but overall, rural hospitals had a narrower range of performance compared with non-rural hospitals. Rural hospitals also had lower (better) unadjusted and volume-adjusted measure scores (noting that the HOQR ECCQ eCQM is volume adjusted; the REH version is not because all REH hospitals are low volume).
2.4a Attach Performance Gap Results
-
-
-
3.1 Feasibility Assessment
The data elements required for this measure are routinely generated in the EHR during care delivery.
Missing data: We examined the amount of missing data from Datasets A, B, and C, which cover a total of 58 EDs with over 3 million encounters. Please see Section 4.1 for a description of the datasets. Patient-level data elements (patient chart number, patient medical record number, and patient date of birth) are assumed to be captured on every patient, and encounter-level data elements (EHR ED disposition, first ED room time, documented decision to admit time, and patient left the ED time) may vary in systematic capture by site. We found that none of the patient-level data elements were missing across all datasets. None of the encounter-level data elements were missing from Datasets B and C. The proportion of missing encounter-level data elements in Dataset A was very low (first ED room time: 3.17% missing; documented decision to admit time: 6.28% missing; patient left the ED time: 0.25% missing).
All data elements required for measure calculation have less than 7% missing data, which is considered very good. Although the breakdown by site is not shown, rural hospitals were not systematically missing data compared with non-rural sites. Dataset A systematically lacked access to race, ethnicity, and language data at many sites at the time of data extraction due to a problem with the HL7 interface during that time span; this issue has since been resolved, and these data can now be extracted for all patients for whom they are recorded.
We undertook a feasibility study with one REH that demonstrates that all data elements for the REH version of the measure were feasible to collect (see the REH Scorecard tab in Section 3.2).
Measure implementation burden: Although efforts may require hospitals to initially invest resources to support measure reporting, we anticipate these investments will help them more fully utilize their EHRs to improve emergency department care for all patients, which is a shared goal among stakeholders. Using EHR data instead of administrative data allows for more patient-centric, potentially real-time measure results to support hospital quality improvement efforts. To reduce hospital burden, the ECCQ eCQM is built based on data in structured fields that are routinely and consistently captured during clinical care. We avoided data that might have required natural language processing or other data manipulation prior to measure calculation. Our goal was to build an eCQM that is easy to understand and implement.
Confidentiality: Facilities should be able to collect this information without violating patient confidentiality.
We did not identify any unintended consequences. We note that many facilities in the HOQR program routinely track these metrics internally.
3.2 Attach Feasibility Scorecard
3.3 Feasibility Informed Final Measure
The feasibility scorecard shows that all data elements were routinely available in the EHRs that were tested and captured in structured fields. Because all data come from structured fields, data capture is fixed at the facility EHR level; no changes to clinical workflow are required to capture data elements. No adjustments to the measure were needed in response to the feasibility test results.
-
3.4 Proprietary Information
Not a proprietary measure and no proprietary components.
-
-
-
4.1.3 Characteristics of Measured Entities
Overall, our datasets included a mix of geographic regions, hospital size, teaching status, trauma level, and EHR vendor (See Tables S2 – S4 in Section 7.1).
Dataset A included a diverse array of 20 EDs across 11 health systems, using Epic and Cerner EHR systems, with four rural EDs and a mix of geographic locations, bed sizes, teaching statuses, and trauma levels. We used two years (2022 and 2023) from Dataset A, both combined and as separate performance periods, for different types of analyses. This allowed us to examine the measure score for each site over two years, creating 40 data points to test volume standardization (comparing hospital scores with similar numbers of encounters), and to see measure score changes year over year.
Results are labeled by the dataset name and year as follows:
- Dataset A (20 EDs):
  - 2022: 1,077,773 encounters
  - 2023: 1,118,941 encounters
- Dataset A, 2-years (2022-2023): 20 EDs, 2,196,714 encounters
Dataset A included all required data elements to calculate measure scores, and patient characteristics such as date of birth, gender/sex, race, and payer.
Dataset B consisted of 12 hospital-based EDs within one large health system, using the Epic EHR system. The EDs were located in the South and ranged in bed size, teaching status, and trauma level.
Dataset B included 12 EDs, representing 832,056 encounters and included all required data elements to calculate measure scores, and patient characteristics such as date of birth, gender/sex, and race but not payer.
Dataset C consisted of 6 EDs in the Northeast region, representing 390,500 encounters.
Facility Volume
Table S5 in Section 7.1 shows the number of facilities in each volume band of 20,000 visits in one year, in Dataset A (2022, 2023, and 2-years) and Dataset B (2023). Using a cutoff of 60,000 encounters in one year, Dataset A 2-years and Dataset B combined have 26 facilities under the cutoff and 26 above the cutoff.
4.1.1 Data Used for Testing
We used electronic health record (EHR) data from multiple testing partners to test the ECCQ measure. Three datasets (referred to as “Dataset A”, “Dataset B”, and “Dataset C”) were used for testing. Datasets A and B included data from 2022 and 2023; however, not all years were used for all analyses. Throughout the testing sections, we clarify which specific years were used for which analyses.
Because access to EHR testing data from REHs was limited, we provide analyses across the full dataset, with results for rural hospitals recruited from among non-REHs.
We note that all testing results presented in this document are based on specifications that, for component #4 of the numerator (ED LOS), do not include transfers as part of the numerator, which are about 2% of the denominator. This change in measure specifications to remove the numerator exclusion occurred after measure testing was completed. Measure results were not rerun due to limited resources and likely insignificant changes at the aggregate score level.
4.1.4 Characteristics of Units of the Eligible Population
Table S6 in Section 7.1 describes the characteristics of patients within each dataset, for all encounters.
4.1.2 Differences in Data
For measure score reliability, we used the 2023 data from Datasets A and B. For construct validity, we used one year of data (2022) from Datasets A and B to align with the years of data in the other measures used for construct validity. For volume standardization (risk adjustment), we used two years of Dataset A (2022-2023).
-
4.2.1 Level(s) of Reliability Testing Conducted
4.2.2 Method(s) of Reliability Testing
We provide facility-level measure score reliability for all EDs within Dataset A (2023) combined with Dataset B (2023) using the signal-to-noise method, following the formula presented by Adams and colleagues (Adams et al. 2012). Specifically, for each facility we calculate reliability as:
Reliability = σ²(facility-to-facility) / [σ²(facility-to-facility) + σ²(facility error) / n]
where the facility-to-facility variance is estimated from the hierarchical logistic regression model, n is each facility’s observed case count, and the facility error variance is estimated using the variance of the logistic distribution (π²/3).
Signal-to-noise reliability scores can range from 0 to 1. A reliability of zero implies that all the variability in a measure is attributable to measurement error. A reliability of one implies that all the variability is attributable to real difference in performance.
We calculated the measure score reliability for all facilities in Dataset A 2023 and Dataset B (N= 32 EDs, 2,740,383 encounters).
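A minimal Python sketch of the signal-to-noise formula above; the between-facility variance value in the example call is illustrative only, not an estimate from the testing data.

```python
import math

def signal_to_noise_reliability(facility_to_facility_var: float, n: int) -> float:
    """Signal-to-noise reliability following Adams et al. (2012), as described above.

    facility_to_facility_var: between-facility variance from the hierarchical
    logistic regression model (logit scale); n: the facility's observed case count.
    The facility error variance is the variance of the logistic distribution, pi^2/3.
    """
    error_var = math.pi ** 2 / 3
    return facility_to_facility_var / (facility_to_facility_var + error_var / n)

# With tens of thousands of encounters per ED, even a modest between-facility
# variance yields reliability very close to 1 (illustrative values).
print(signal_to_noise_reliability(0.5, 85_000))
```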
4.2.3 Reliability Testing Results
The mean signal-to-noise reliability for the 32 EDs in Dataset A and Dataset B combined was 0.9999 (min-max: 0.9997-1.000; SD: 0). Due to the small number of EDs in the testing sample, it was not feasible to conduct reliability testing by decile.
4.2.4 Interpretation of Reliability Results
The mean reliability of 0.9999 is very high. The “noise” in the measure is very small because this is a proportion measure capturing a census, not a sample, of encounters; the high reliability is a function both of the measure’s size (number of encounters) and of the measure’s structure as a true score of the encounters in the numerator. The measure is not predicting the outcome; it directly represents the observed outcome. Therefore, the wide range in measure scores is due to differences in provider quality.
-
4.3.1 Level(s) of Validity Testing Conducted
4.3.2 Type of accountable entity-level validity testing conducted
4.3.3 Method(s) of Validity Testing
Data Element Validity
For evidence in support of data element validity, we provide encounter-level testing of the individual data elements in the final performance measure by characterizing the percent agreement between data elements in the ECCQ measure extracted from the EHR versus obtained by manual reviewers. Specifically, we assessed data element validity through the raw match rate of each required EHR data element to its chart-abstracted data element. We validated numerator events, denominator-only encounters, and the numerator exclusion (observation stays). We considered each data element “matched” if the electronically extracted value (from the EHR) exactly matched the manual abstraction value (from the patient medical record).
Data element validity testing was conducted using a sample of 254 patient charts of ED encounters. This sample included 20 observation stays, 20 transfers, 20 admitted patients, 10 left without being seen (LWBS) cases, 50 denominator-only cases, and 130 numerator cases.
For information on missing data, please see Section 3.1.
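For illustration, a minimal sketch of the raw match rate calculation described above, assuming paired lists of EHR-extracted and chart-abstracted values for a single data element (hypothetical inputs, exact-match comparison as stated):

```python
def raw_match_rate(ehr_values: list, abstracted_values: list) -> float:
    """Percent of records where the EHR-extracted value exactly matches the
    manually abstracted value for one data element (illustrative sketch)."""
    if len(ehr_values) != len(abstracted_values):
        raise ValueError("Inputs must be paired record-for-record.")
    matches = sum(e == a for e, a in zip(ehr_values, abstracted_values))
    return 100.0 * matches / len(ehr_values)
```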
Measure Score Validity
We provide two types of evidence of measure score validity: systematic assessment of face validity by the Technical Expert Panel, and empiric (construct) validity.
Face validity
Measure score validity was evaluated through TEP engagement for face validity. To conduct face validity, a standardized survey was sent to all participating TEP members to assess how stakeholders felt about the validity of the measure, using a 4-point Likert scale [Strongly agree, agree, disagree, strongly disagree], and to justify their responses. We asked them to express their level of agreement with the following statement:
The ECCQ eCQM can differentiate good from poor quality care among providers (or accountable entities).
List of TEP Members:
- JohnMarc Alban, MS, RN, CPHIMS, Associate Director, Quality Measurement & Informatics | The Joint Commission, Oakbrook Terrace, IL
- Kelly Bookman, MD, Professor and Vice Chair of Operations, Senior Medical Director of Informatics | University of Colorado School of Medicine, UCHealth, Boulder, CO
- Howard Bregman, MD, MS, FAAP Director, Clinical Informatics | Epic Systems Corporation, Verona, WI
- Teresa M. Breslin DeLellis, PharmD, BCPS, BCGP, Pharmacist | American Geriatrics Society, Fort Wayne, IN
- Isbelia Briceno, CSPO, Senior Product Manager, EHR Vendor| Oracle Cerner, Kansas City, MO
- Mustafa Mark Hamed, MD, MBA, FAAFP, FAEMS, Board Certified Family Physician and Emergency Medical Services Physician | American Academy of Family Physicians (AAFP), Novi, MI
- Jennifer Hoffmann, MD, MS, Assistant Professor of Pediatrics | Northwestern University and Lurie Children's Hospital of Chicago, Chicago, IL
- Charleen Hsuan, JD, PhD, Assistant Professor | Pennsylvania State University, University Park, PA
- David Levine, MD, FACEP, Group Senior Vice President, Advanced Analytics and Data Science | Vizient, Inc., Chicago, IL
- Kelly McGuire, MD, MPA, Medical Director, Behavioral Health | EmblemHealth, Katonah, NY
- Sofie Morgan, Patient Experience Professional, Emergency Physician | University of Arkansas for Medical Sciences, Little Rock, AR
- Deepti Pandita, MD, FACP, FAMIA, Associate Professor of Medicine, Chief Medical Information Officer | University of California, Irvine, Laguna Niguel, CA
- Anne-Marie Podgorski Dunn, MBA, BSN, RN, Senior Product Manager, Quality Reporting | Oracle Health
- Rupinder K Sandhu, BSN, MBA, MSHSA, Executive Director, Emergency Services | UC Davis Medical Center, Sacramento, CA
- Nathaniel Schlicher, MD, JD, MBA, FACEP | Gig Harbor, WA
- Jodi A. Schmidt, MBA, Executive Director, UKHS Care Collaborative Patient Safety Organization | University of Kansas Health System, Westwood, KS
- Jeremiah Schuur, MD, MHS | Cambridge, MA
- David P Sklar, MD | Arizona State University College of Health Solutions, Phoenix AZ
- Benjamin Sun, MD, MPP, FACEP, FACHE, Perelman Professor and Chair, Department of Emergency Medicine | University of Pennsylvania, Philadelphia, PA
- Patient/Caregiver Representative | Maryland
- Patient/Caregiver Representative | Tennessee
- Patient/Caregiver Representative | Michigan
- Patient/Caregiver Representative | South Carolina
Measure Score – Construct Validity
We conducted an analysis of construct validity. Construct validity for a hospital quality measure refers to how accurately the measure reflects the actual quality of care provided by the hospital.
To assess construct validity, we first identified existing quality measures with publicly available data that we would consider in the same causal pathway as the ECCQ measure, based on the logic model presented in Section 2.1.
As described in our logic model and in the literature described in the evidence section (please see Section 2.2), the numerator components of the ECCQ eCQM have been shown to be associated with hospital quality across a range of outcomes including mortality, patient experience, and cost. These quality domains are captured by the Overall Hospital Quality Star Rating measure, described in more detail below.
The Overall Hospital Quality Star Rating (hereafter referred to as “Star Rating”) methodology involves a seven-step approach to calculating the Overall Star Rating. In the first step, existing, publicly reported quality measures are selected from Care Compare (CMS’s website for public reporting of CMS-funded hospital quality measures) based on their relevance and importance as determined through stakeholder and expert feedback, and the included measures are standardized to be consistent in terms of direction and magnitude. Second, these standardized measures are organized into five groups according to measure type: Mortality, Safety of Care, Readmission, Patient Experience, and Timely and Effective Care. Third, for each hospital, group scores are generated by calculating the simple average of the measure scores within a group. In the fourth step, a hospital summary score is calculated using a weighted average of all available groups, based on predefined group weights that have been chosen through stakeholder and expert feedback. In the fifth step, a reporting threshold is applied, where hospitals reporting too few measures or groups are excluded. In the sixth step, hospitals are grouped into three ‘peer groups’ based on the number of measure groups for which they report at least three measures. In the seventh and final step, a clustering algorithm is applied within each hospital peer group to organize summary scores into five ordered categories or stars. Additional details can be found in the publicly available methodology report (Overall Hospital Quality Star Rating on Care Compare Methodology Report, https://qualitynet.cms.gov/files/603966dda413b400224ddf50?filename=Star_Rtngs_CompMthdlgy_v4.1.pdf).
We tested the construct validity of ECCQ eCQM by examining the association between ECCQ eCQM measure score performance and components of hospital Star Rating, including:
- Overall Hospital Quality Star Rating;
- Hospital Quality Summary Score;
- Mortality Group Scores;
- Readmission Group Scores; and
- Timely and Effective Care Group Scores.
Dataset A 2022 and Dataset B 2022 overall measure scores were used to calculate the Spearman's rank correlation coefficients.
We hypothesized that the ECCQ eCQM would be moderately negatively correlated with each of the Star Rating components listed above, because for Star Rating better performance is indicated with a higher score, and for the ECCQ eCQM, worse performance is indicated with a higher score.
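A minimal sketch of the correlation analysis, assuming hospital-level pairings of unadjusted ECCQ scores with a Star Rating component and the availability of scipy; this is not the analysis code used for testing.

```python
from scipy.stats import spearmanr

def eccq_star_correlation(eccq_scores: list[float],
                          star_component: list[float]) -> tuple[float, float]:
    """Spearman rank correlation between ECCQ eCQM scores and one Star Rating
    component for the same hospitals. A negative coefficient is expected, because
    higher ECCQ scores indicate worse performance while higher Star Rating
    scores indicate better performance."""
    rho, p_value = spearmanr(eccq_scores, star_component)
    return rho, p_value
```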
4.3.4 Validity Testing Results
Data Element Validity
Validation of ED encounters by disposition and data elements demonstrated high validity and high levels of agreement between electronic record review and manual chart review.
- 95% of admissions records were confirmed through manual chart review.
- 100% of transfer records, final ED dispositions, ED arrival time, and time placed in treatment room were confirmed through manual chart review.
- 37% of reviewed records (94 out of 254) had a documented admission time, indicating 94 patients were admitted to the hospital, and of those admitted, 100% of the records had an exact match of the inpatient admission timestamp.
- 96% of reviewed records had an exact match of ED departure timestamp (245 out of 254 records), for the 9 non-matching records:
- 7 records had a discrepancy in time of less than 90 minutes.
- 2 records were outliers, with a wide discrepancy in discharge time, and discrepancy caused by a readmission.
Further analyses explored the percent agreement of each timestamp used to calculate the data elements for each numerator component of the measure. The results below show over 99 percent agreement for all numerator components between the eCQM and manual chart review.
Percent agreement by numerator component:
- Time to Placement in Treatment Room: 100%
- Left without being seen: 100%
- Boarding: 100%
- Transfer Boarding: 99.2%
- ED Length of Stay: 100%
- Any numerator: 99.6%
An assessment of missing data is provided in Section 3.1.
Measure Score Validity
Face Validity
Overall, 75.0% of TEP members (12 of 16) agreed with the face validity statement (that the measure can differentiate good from poor quality of care) for the ECCQ eCQM. Specifically, there were 8 votes for strongly agree, 4 votes for agree, 4 votes for disagree, and 0 votes for strongly disagree. At the time of the face validity vote, the measure specifications included a numerator exclusion removing transfers to another facility from the calculation of component #4 (ED LOS). CMS does not believe this greatly impacts the face validity, as the TEP members widely agreed on the importance of transfers relative to the measure’s intent and importance.
Members who voted in agreement noted that the numerator components within the measure are correlated with patient outcomes, so it is a useful quality measure with good face validity and construct validity. The measure considers various components that are proxies for access to emergency care, noting that a key tenet of emergency care is that it is timely, and this measure can capture the data necessary to drive hospitals to improve care.
TEP members who disagreed with the face validity statement noted the following reasons for disagreement:
- Concern about the boarding and ED length of stay thresholds, because they felt that the drivers of those metrics are not exclusively within the facilities’ control.
- Disagreement with the definition of “private treatment space.”
- A view that the measure captures time, organizational capacity, and efficiency, but not quality of care.
- Concern the measure does not adjust for trauma levels designated to hospitals.
When voting on the face validity statement specifically for rural hospitals, 68.8% of TEP members (11 of 16) agreed with the face validity statement (that the measure can differentiate good from poor quality of care) for the ECCQ eCQM. There were 4 votes for strongly agree, 7 votes for agree, 4 votes for disagree, and 1 vote for strongly disagree. Members who voted in agreement noted that focusing on transfers reflects hospital access and a rural hospital’s ability to create high-quality transfer networks that directly impact patient care and outcomes, and that the timeliness of emergency care is particularly important in rural settings. Members who voted in disagreement expressed concern that rural hospitals/EDs do not have control over transfer acceptance.
Measure Score – Construct Validity
As hypothesized, our results show a negative correlation with each of the Star Rating components (Table 1 in the Section 4.3.4a attachment). ECCQ eCQM scores were inversely associated with multiple measures of hospital quality (as captured by Star Ratings), as would be conceptually expected; hospitals that performed well on Star Ratings and its components (where higher scores are better) also performed well on the ECCQ eCQM (where lower scores are better), lending support for the validity of the ECCQ eCQM. We note that this analysis used unadjusted ECCQ eCQM scores (not volume adjusted).
4.3.4a Attach Additional Validity Testing Results
4.3.5 Interpretation of Validity Results
Encounter level: Data element validity testing shows high validity and high levels of agreement between electronic record review and manual chart review.
Measure score: Our face validity assessment, in addition to our construct validity testing, provides strong evidence for the validity of the ECCQ measure score.
-
4.4.1 Methods used to address risk factors
4.4.2 Conceptual Model Rationale
Our conceptual model is informed by a literature search and empiric analyses that were performed during measure development. Please see 4.4.2a for a diagram that outlines the risk factors discussed in this section.
Mental Health Diagnoses
Patients who are seen in the emergency department for a behavioral health condition or complaint are more likely to experience boarding and, when boarding, to experience long boarding times (Redinger, Gibb, and Redinger 2024). Pediatric psychiatric visits were somewhat more likely to be associated with boarding compared with adult visits (34 percent vs. 30 percent, respectively). Based on a Yale/CORE analysis using data from five EDs within a single health system, across all patients who were treated for a behavioral health concern, ED boarding of more than four hours occurred in 2 to 41 percent of visits, compared with 5 percent to 19 percent of visits for non-behavioral health patients.
For patients seen in the emergency department for a behavioral health condition or complaint, ED length of stay has been shown to be longer than for patients with non-behavioral health diagnoses among those who were discharged, admitted, or externally transferred (10.7, 11.4, and 52.6 hours vs. 8.3, 7.3, and 29.3 hours, respectively) (Baia Medeiros et al. 2018). Based on empiric analyses using data from five EDs within a single health system, the proportion of visits with an ED LOS greater than 8 hours was much higher for behavioral health patients (72 to 87 percent of visits) than for non-behavioral health patients (5 to 19 percent).
Race, Ethnicity, Income
There are disparities in ED throughput metrics by race. For example, in one study, Black patients waited longer (arrival time to decision-to-admit time) than white patients even after adjusting for clinical, demographic, and socioeconomic variables (Aysola et al. 2021). Another study found that, while across all patients there was no difference in mean boarding time between Black and white patients, among those with higher acuity (ESI level 1), Black patients boarded significantly longer than white patients; for psychiatric admissions, Black patients also boarded significantly longer than white patients (Ruffo et al. 2022). Among trauma patients, ED length of stay was found to be longer for Black and Hispanic patients, who remained in the ED about 40 minutes longer than white patients (Steren et al. 2020). Finally, a more recent 2023 study found that Black and Hispanic patients (as well as patients covered by Medicaid, which could be considered a proxy for income) were more likely to leave without being seen or to be placed in hallway locations for treatment, even when controlling for factors such as acuity (Sangal et al. 2023). The association between income and ED throughput metrics can differ by diagnosis. For example, among patients with hypertensive emergency, those with higher income had longer ED length of stay, whereas those with lower income were more likely to be admitted (Srivastava et al. 2022). In contrast, among patients with chest pain, lower income was associated with longer ED length of stay and longer waiting times (Herlitz et al. 2023).
Finally, one study using a nationwide database found that low income and Medicaid insurance (or no insurance) were associated with higher rates of leaving the ED before being seen (Sheraton, Gooch, and Kashyap 2020).
Age
Older patients have been shown to experience longer ED input and throughput times, as well as worse outcomes. For example, one study found that older patients who were eventually admitted to the medicine service had significantly longer ED wait times than younger patients, and another found a strong association between patient age (65 or older) and longer ED wait times (time from ED arrival to seeing a provider) (Knapman and Bonner 2010; Morley et al. 2018). Older patients are also more likely to experience worse outcomes from the same type of adverse event (e.g., missed medications) than younger patients. In one study, older patients who stayed overnight in the ED had higher in-hospital mortality and higher odds of adverse events compared with patients admitted to an inpatient bed before midnight (Roussel et al. 2023).
While pediatric patients (age <18) are, overall, less complex and may experience fewer access barriers, the literature shows that pediatric patients with mental health diagnoses have longer ED lengths of stay (Nash et al. 2021) and are more likely to board and to experience longer boarding times (McEnany et al. 2020).
References
Aysola, Jaya, Justin T. Clapp, Patricia Sullivan, Patrick J. Brennan, Eve J. Higginbotham, Matthew Kearney, Chang Xu, et al. 2021. “Understanding Contributors to Racial/Ethnic Disparities in Emergency Department Throughput Times: A Sequential Mixed Methods Analysis.” Journal of General Internal Medicine 37 (2): 341–50. https://doi.org/10.1007/s11606-021-07028-5.
Baia Medeiros, Deyvison T., Shoshana Hahn-Goldberg, Erin O’Connor, and Dionne M. Aleman. 2018. “Analysis of Emergency Department Length of Stay for Mental Health Visits: A Case Study of a Canadian Academic Hospital.” Canadian Journal of Emergency Medicine 21 (3): 374–83. https://doi.org/10.1017/cem.2018.417.
Herlitz, Sebastian, Joel Ohm, Henrike Häbel, Ulf Ekelund, Robin Hofmann, and Per Svensson. 2023. “Socioeconomic Status Is Associated with Process Times in the Emergency Department for Patients with Chest Pain.” Journal of the American College of Emergency Physicians Open 4 (4): e13005. https://doi.org/10.1002/emp2.13005.
Knapman, Mary, and Ann Bonner. 2010. “Overcrowding in Medium-Volume Emergency Departments: Effects of Aged Patients in Emergency Departments on Wait Times for Non-Emergent Triage-Level Patients.” International Journal of Nursing Practice 16 (3): 310–17. https://doi.org/10.1111/j.1440-172x.2010.01846.x.
McEnany, Fiona B., Olutosin Ojugbele, Julie R. Doherty, Jennifer L. McLaren, and JoAnna K. Leyenaar. 2020. “Pediatric Mental Health Boarding.” Pediatrics 146 (4): e20201174. https://doi.org/10.1542/peds.2020-1174.
Morley, Claire, Maria Unwin, Gregory M. Peterson, Jim Stankovich, and Leigh Kinsman. 2018. “Emergency Department Crowding: A Systematic Review of Causes, Consequences and Solutions.” Edited by Fernanda Bellolio. PLOS ONE 13 (8): e0203316. https://doi.org/10.1371/journal.pone.0203316.
Nash, Katherine A., Bonnie T. Zima, Craig Rothenberg, Jennifer Hoffmann, Claudia Moreno, Marjorie S. Rosenthal, and Arjun Venkatesh. 2021. “Prolonged Emergency Department Length of Stay for US Pediatric Mental Health Visits (2005–2015).” Pediatrics 147 (5): e2020030692. https://doi.org/10.1542/peds.2020-030692.
Redinger, Michael J., Tyler S. Gibb, and Kathryn E. Redinger. 2024. “New Developments in Psychiatric Boarding in Emergency Departments.” Mayo Clinic Proceedings 99 (5): 699–701. https://doi.org/10.1016/j.mayocp.2024.02.004.
Roussel, Melanie, Dorian Teissandier, Youri Yordanov, Frederic Balen, Marc Noizet, Karim Tazarourte, Ben Bloom, et al. 2023. “Overnight Stay in the Emergency Department and Mortality in Older Patients.” JAMA Internal Medicine 183 (12). https://doi.org/10.1001/jamainternmed.2023.5961.
Ruffo, Robert, Erin Shufflebarger, James Booth, and Lauren Walter. 2022. “Race and Other Disparate Demographic Variables Identified among Emergency Department Boarders.” Western Journal of Emergency Medicine 23 (5): 644–49. https://doi.org/10.5811/westjem.2022.5.55703.
Sangal, Rohit B., Huifeng Su, Hazar Khidir, Vivek Parwani, Beth Liebhardt, Edieal J. Pinker, Lesley Meng, Arjun K. Venkatesh, and Andrew Ulrich. 2023. “Sociodemographic Disparities in Queue Jumping for Emergency Department Care.” JAMA Network Open 6 (7): e2326338. https://doi.org/10.1001/jamanetworkopen.2023.26338.
Sheraton, Mack, Christopher Gooch, and Rahul Kashyap. 2020. “Patients Leaving without Being Seen from the Emergency Department: A Prediction Model Using Machine Learning on a Nationwide Database.” Journal of the American College of Emergency Physicians Open 1 (6): 1684–90. https://doi.org/10.1002/emp2.12266.
Srivastava, Shreya, Bhargav Vemulapalli, Alexis K. Okoh, and John Kassotis. 2022. “Disparity in Hospital Admissions and Length of Stay Based on Income Status for Emergency Department Hypertensive Crisis Visits.” Journal of Hypertension 40 (8): 1607–13. https://doi.org/10.1097/hjh.0000000000003193.
Steren, Benjamin, Matthew Fleming, Haoran Zhou, Yawei Zhang, and Kevin Y. Pei. 2020. “Predictors of Delayed Emergency Department Throughput among Blunt Trauma Patients.” Journal of Surgical Research 245 (January): 81–88. https://doi.org/10.1016/j.jss.2019.07.028.
4.4.2a Attach Conceptual Model
4.4.3 Risk Factor Characteristics Across Measured Entities
It was not possible to extract some of the risk variables from the EHR systems used for testing. We did not have patient identifiers, so we could not use ZIP code to link to area-level income data, nor was race available in our testing dataset. Below we provide analyses related to age and the presence of a mental health diagnosis.
This measure is not risk adjusted but is stratified by age (<18 vs. 18 and older) and by the presence of a mental health diagnosis.
Table S7 in the supplemental attachment (see Section 7.1) shows the distribution of stratification variables (Dataset A, 2 years) across measured entities (n=20) by percent of total encounters.
4.4.4 Risk Adjustment Modeling and/or Stratification Results
Figure S1 in the supplemental attachment (see Section 7.1) provides an analysis of ECCQ numerator components by strata, using Dataset A (2 years). These analyses show variation across numerator criteria that supports the approach to cohort stratification. Separately reporting results for adult and pediatric mental health strata allows visibility into disparities in these populations that may not be visible without such stratification. Figure S2 (see Section 7.1) shows the stratified measure score, with higher (worse) scores for the adult mental health stratum than for the non-mental health stratum, and higher (worse) scores for pediatric mental health patients than for non-mental health patients, further supporting the stratification approach.
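As a simplified illustration of the stratification approach (not the eCQM specification, which is defined in the measure logic), the sketch below computes a stratified proportion of visits with any access barrier from a hypothetical visit-level extract; the column names are assumptions, and exclusions such as observation stays are omitted.

```python
# Minimal sketch, assuming a hypothetical visit-level extract "ed_visits.csv"
# with assumed column names; not the official ECCQ measure logic.
import pandas as pd

visits = pd.read_csv("ed_visits.csv")

# Flag visits meeting any (simplified) numerator criterion.
visits["numerator"] = (
    (visits["arrival_to_room_minutes"] > 60)                 # waited > 1 hour for a treatment space
    | (visits["left_without_being_seen"].astype(bool))       # left without being evaluated
    | (visits["boarding_hours"] > 4)                         # boarded > 4 hours
    | (visits["ed_los_hours"] > 8)                           # ED length of stay > 8 hours
)

# Stratify by age group (<18 vs. 18 and older) and mental health diagnosis flag.
visits["age_stratum"] = visits["age_years"].apply(lambda a: "pediatric" if a < 18 else "adult")

stratified_score = (
    visits.groupby(["age_stratum", "mental_health_dx"])["numerator"]
          .mean()  # proportion of visits with any access barrier (higher = worse)
          .rename("eccq_style_score")
)
print(stratified_score)
```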
4.4.4a Attach Risk Adjustment Modeling and/or Stratification Specifications
4.4.6 Interpretation of Risk Factor Findings
Our analyses, as well as how care is delivered to pediatric patients, support stratification of this measure by age and by mental health diagnosis.
Stratification by age
The measure is stratified by age (<18, or 18 and older) due to differences in the distribution of patients across EDs and in how care is delivered, which is reflected in our measure score testing. First, the distribution of pediatric patients differs across EDs: some EDs see only pediatric patients, while non-pediatric EDs see a wide range of pediatric volumes. Second, there are clinical differences between pediatric and adult patients. For example, children may present with different symptoms and conditions than adults (respiratory infections, asthma, and febrile illnesses are more common in children), whereas adults may present with a broader range of chronic conditions, such as hypertension and diabetes, which require different diagnostic and management approaches.
Operationally, pediatric EDs may differ from EDs that treat mainly adults. For example, pediatric EDs may employ specialized staff with training in pediatric subspecialties, which may affect throughput compared with adult EDs. Adult EDs, on the other hand, need to manage high volumes of patients who may be more complex on average.
Stratification by mental health diagnoses
As noted in Section 4.4.2, patients with mental health diagnoses typically have longer boarding times and ED lengths of stay. Because patients with mental health diagnoses are unevenly distributed among EDs, stratification by mental health diagnosis (with vs. without) allows for a fairer comparison between EDs. This approach is consistent with other publicly reported ED measures.
4.4.7 Final Approach to Address Risk Factors
Stratification (by age and by the presence of a mental health diagnosis); the measure is not risk adjusted.
-
-
-
5.1 Contributions Towards Advancing Health Equity
We did not indicate in our Intent to Submit form that we would provide information on equity.
-
-
-
6.1.1 Current Status
No
6.1.2 Current or Planned Use(s)
6.1.4 Program Details
-
6.2.1 Actions of Measured Entities to Improve Performance
Please see Table S1 in the supplemental attachment (Section 7.1) for evidence supporting different implementation approaches to improving the different numerator components of the ECCQ measure.
-
-
-
-