Abstract
Introduction
Accurate estimation of the expected discharge date (EDD) early during hospitalization informs clinical operations and discharge planning.
Methods
We conducted a retrospective study of patients discharged from six general medicine units at an academic medical center in Boston, MA from January 2017 to June 2018. We retrieved all EDD entries and patient, encounter, unit, and provider data from the electronic health record (EHR), and public weather data. We excluded patients who expired, discharged against medical advice, or lacked an EDD within the first 24 h of hospitalization. We used generalized estimating equations in a multivariable logistic regression analysis to model early EDD accuracy (an accurate EDD entered within 24 h of admission), adjusting for all covariates and clustering by patient. We similarly constructed a secondary multivariable model using covariates present upon admission alone.
Results
Of 3917 eligible hospitalizations, 890 (22.7%) had at least one accurate early EDD entry. Factors significantly positively associated (OR > 1) with an accurate early EDD included clinician-entered EDD, admit day and discharge day during the work week, and teaching clinical units. Factors significantly negatively associated (OR < 1) with an accurate early EDD included Elixhauser Comorbidity Index ≥ 11 and length of stay of two or more days. C-statistics for the primary and secondary multivariable models were 0.75 and 0.60, respectively.
Conclusions
EDDs entered within the first 24 h of admission were often inaccurate. While several variables from the EHR were associated with accurate early EDD entries, few would be useful for prospective prediction.
Introduction
Determining when hospitalized patients will be discharged is a complex process that depends on a variety of factors and a mutual understanding of patients’ discharge readiness [1, 2]. Confidence in an accurate expected discharge date (EDD) early during hospitalization allows the interprofessional team to establish safe, timely, and comprehensive discharge plans, particularly for complex patients with multiple chronic conditions [1, 3,4,5]. Furthermore, it has important implications for macro- and unit-level operations [6, 7]. An accurate EDD enables operational and clinical leaders to forecast bed capacity, manage emergency room and surgical volume, optimize patient throughput, adjust staff allocation, and facilitate early day discharges [6, 8,9,10]. For nurses, nurse care coordinators (CCRNs), advanced practice providers (APPs), and physicians, having confidence in the EDD estimate prompts earlier identification of medical and non-medical discharge barriers, facilitates timely delivery of transitional care interventions such as discharge medication reconciliation and patient counseling by pharmacists, and helps to establish clear expectations for discharge with patients and their caregivers [4, 9, 11, 12]. From the patient’s perspective, transparent communication about the EDD encourages preparation for discharge earlier during the hospital course, thereby mitigating readmission risk while improving safety and satisfaction during transitions [13,14,15,16,17].
While estimating the EDD is a hospital priority, the process is neither reliable nor efficient, and it is not well understood [4, 18,19,20]. The discharge planning process is often chaotic due to rapidly evolving patient conditions and plans of care, last-minute medication changes, pending insurance authorizations, arrangement of home services, post-acute care bed availability, and other factors [3, 5]. Moreover, roles and responsibilities for entering the EDD in the electronic health record (EHR) and revising it throughout the hospitalization remain unclear. These inefficient workflows and unstructured data entries exacerbate informational silos and increase the risk of miscommunication among the care team, unit leaders, and hospital administrators attempting to ascertain the “source of truth” for the EDD of individual patients [21,22,23].
While it is not surprising that EDD entries are increasingly accurate as the actual discharge date approaches [24], little is known about the types of factors that are associated with an accurate EDD entered early in the hospital encounter [14, 15, 25]. Knowledge of the types of patient, encounter, unit, and provider factors commonly found in the EHR that are associated with accurate early EDD entries, including factors external to the EHR, can spur quality improvement initiatives to improve confidence about its accuracy. The COVID-19 pandemic further highlighted the importance of having accurate EDDs as hospitals were required to provide sufficient bed capacity, streamline processes for post-acute care decision-making, adhere to infection control guidelines and physical distancing mandates, and ensure capacity to address recurrent surges while optimizing management of volume for elective and urgent procedures and surgeries [26].
In this study we sought to develop multivariable models using a variety of EHR and non-EHR based factors to discriminate an accurate EDD entered early during hospitalization. Understanding the types of factors that are associated with an accurate EDD entered early during hospitalization is the first step for understanding how best to use EHR and non-EHR data to build more sophisticated machine learning models to confidently forecast the discharge date.
Methods
Study Design, Setting, Participants
This manuscript was developed in adherence to the Strengthening the Reporting of Observational Studies in Epidemiology guidelines [27]. We conducted a retrospective cohort study, approved by the Mass General Brigham (MGB) Human Research Committee, at Brigham and Women’s Hospital (BWH), a 793-bed academic medical center affiliated with MGB in Boston, MA. All MGB hospitals use a common commercial EHR (Epic Systems, Inc.). All patients discharged from six general medicine units between January 2017 and June 2018 were considered eligible if they were either admitted or transferred into these units. Five units were staffed by housestaff teams (two “intensive” teaching units, where residents were given reduced patient loads and more time for educational activities [28], and three general teaching units), and the sixth unit was staffed by an APP team. Each general medicine team—composed of an attending and two APPs (non-teaching); an attending, resident, and two interns (general teaching); or two attendings, two residents, and three interns (“intensive” teaching)—was geographically localized, and thus cared for the majority of patients admitted to the unit to which that team was assigned [29]. A minority of patients admitted to the study units were cared for by oncology, surgery, or other non-general medicine teams. We excluded encounters of patients who expired in the hospital, those who were discharged against medical advice, and those who did not have any EDD entries within the first 24 h of their hospitalization.
Interdisciplinary team huddles (ITHs) occurred on weekdays on each unit, at 8:30 AM for the APP team and at 11:30 AM for housestaff teams. During these huddles, physicians and APPs reviewed the discharge plans with the care coordinator, charge nurse, and unit clerk. The unit clerk was primarily responsible for entering and/or updating the EDD in the EHR during and after ITHs, and this workflow did not change during the study period. However, any team member (e.g., clinician, care coordinator) could enter and update the EDD using the “Huddle Note” report (Fig. 1) at any time during the hospital encounter.
Data Sources & Collection
Using our Enterprise Data Warehouse (EDW), which captures all data from our EHR, we retrieved all available EDD entries for eligible patients discharged from study units during the study period. We retrieved relevant patient-level administrative data associated with the hospital encounter, such as demographic information, admission-discharge-transfer (ADT) timestamps, insurance status, comorbidity indices, length of stay (LOS), discharge destination (home, skilled nursing facility, hospice facility, etc.), and the All Patients Refined Diagnosis-related Group (DRG) assigned to the encounter in claims data as well as its associated weight. Other EHR data included role of individual entering the EDD and daily census of the study units. Lastly, we retrieved non-EHR data, including daily snowfall in Boston, MA, from publicly available sources (National Centers for Environmental Information, National Oceanic and Atmospheric Administration) [30]. Data from our EDW and external sources were combined into a single database, linking individual EDD entries at the level of the hospital encounter.
Definition of Outcome & Independent Variables
We defined early EDD accuracy (dependent outcome) as instances in which any EDD entered during the first 24 h of hospitalization was equal to the patient’s actual discharge date (ADD). Our independent variables were derived from patient, encounter, unit, provider, and external data collected for eligible hospitalizations. Patient variables included age, sex, race, ethnicity, primary language, socioeconomic status (median income by zip code), insurance status, Elixhauser Comorbidity Index [31, 32], and the number of prior hospital admissions over the previous 12 months. Encounter variables included transfer from an outside hospital, DRG weight, LOS, admission on a weekend, discharge on a weekend, discharge destination, and discharging service (general medicine vs. non-general medicine). Our unit, provider, and external variables included the unit census at the beginning of the day of hospitalization, unit type (“intensive” teaching, general teaching, or non-teaching), role of the individual entering the EDD, and snowfall during the hospital encounter (primary model) or on day of admission (secondary model). In general, we identified explanatory variables within each category (patient, encounter, unit, etc.) that corresponded to structured data elements (i.e., that could be read from or written to the EHR via an application programming interface, or API), addressed known disparities or biases (e.g., racial inequities), had internal and external validity (e.g., discharge day on weekday or weekend), or correlated with the intended purpose of the clinical unit (e.g., teaching, non-teaching). Of note, for potential predictor variables that were likely collinear with other variables (e.g., admission to observation status is often associated with short LOS), we did not include both.
Multivariable Analyses
We conducted all analyses using SAS version 9.4 (SAS Institute, Cary, NC). We used descriptive statistics to compare demographic and administrative measures at the level of the hospital encounter in patients with vs. without accurate early EDDs as numbers and percentages, or means with standard deviations, as appropriate. In our primary multivariable logistic regression analysis, we employed generalized estimating equations (GEE) to model early EDD accuracy as the dependent outcome while adjusting for all other covariates and clustering by patient. For our secondary multivariable logistic regression analysis, we used GEE to model early EDD accuracy using only covariates present upon admission (e.g., demographic characteristics, prior hospitalizations, etc.), while also clustering by patient. We calculated unadjusted and adjusted odds ratios with 95% confidence intervals and p-values for each covariate. To assess each multivariable model’s ability to discriminate encounters with an accurate early EDD, we calculated its c-statistic, a measure of discrimination.
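For intuition, the c-statistic reported for each model equals the probability that a randomly chosen encounter with an accurate early EDD receives a higher predicted probability than a randomly chosen encounter without one. A plain-Python illustration of this concordance calculation (not the SAS procedure used in the study) follows:

```python
def c_statistic(y_true, y_score):
    """Concordance (c-statistic / AUC): the fraction of positive-negative
    pairs in which the positive case has the higher predicted score.
    Tied scores count as half-concordant."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    concordant = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return concordant / (len(pos) * len(neg))
```

A value of 0.5 indicates no discrimination (coin-flip), while 1.0 indicates perfect separation; by convention, values around 0.7–0.8 are considered acceptable.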
Results
Of the 8628 patient hospital encounters, 3917 (45.4%) were eligible for inclusion (Fig. 2). Of these 3917 encounters, 890 (22.7%) had at least one accurate EDD entered within the first 24 h of the patient’s admission to the hospital. Of the 890 accurate early EDD entries, 605 (68.0%) had only one EDD entry within the first 24 h. Encounters with an accurate early EDD entry tended to have a lower Elixhauser Comorbidity Index, were more likely to be admitted to observation or on a weekday, and were more likely to have an elective admission (Table 1).
In the primary analysis, the covariates significantly positively associated with early EDD accuracy (Table 2) included admit day during weekdays (OR 1.37, 95% CI 1.14–1.64, p < 0.001); discharge day during weekdays (OR 1.32, 95% CI 1.10–1.59, p = 0.003); general teaching units one, two, and three (OR 1.51, 95% CI 1.12–2.04, p = 0.007; OR 1.40, 95% CI 1.05–1.85, p = 0.02; OR 1.39, 95% CI 1.02–1.88, p = 0.04, respectively); and clinician-entered EDD (OR 1.81, 95% CI 1.26–2.62, p = 0.002). The covariates which were significantly negatively associated with early EDD accuracy (Table 2) included Elixhauser Comorbidity Index ≥ 11 (OR 0.82, 95% CI 0.68–0.99, p = 0.04); and LOS between 2 and 7 days (OR 0.39, 95% CI 0.32–0.48, p < 0.001) and 7 or more days (OR 0.01, 95% CI 0.005–0.02, p < 0.001). The c-statistic for the primary multivariable model was 0.75.
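The adjusted odds ratios and 95% confidence intervals above follow the standard transformation of a logistic model’s log-odds coefficients. A brief sketch, in which the coefficient and standard error are illustrative values rather than output from the study’s models:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic regression coefficient (log-odds scale) and its
    standard error into an odds ratio with a 95% confidence interval."""
    return (math.exp(beta),          # point estimate
            math.exp(beta - z * se),  # lower bound
            math.exp(beta + z * se))  # upper bound

# Illustrative only: beta ~ 0.593 yields an OR of about 1.81, on the
# order of the clinician-entered EDD estimate reported in the text.
or_, lo, hi = odds_ratio_ci(0.593, 0.187)
```

Note that because exp() is monotonic, a confidence interval excluding 1 on the OR scale corresponds to a coefficient interval excluding 0 on the log-odds scale.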
Of the 200 early EDDs assigned by clinicians, 94 (47.0%) were assigned by an APP (93 by a physician assistant, one by a nurse practitioner), 50 (25.0%) were assigned by an attending physician, 43 (21.5%) were assigned by a registered nurse (RN), and 13 (6.5%) were assigned by a physician trainee (nine by a fellow, four by a resident). Regarding rates of accurate early EDD entries by clinician type, attending physicians had 22 correct of 50 total entries (44.0%), APPs had 28 correct of 94 total entries (29.8%), RNs had 10 correct of 43 total entries (23.3%), and physician trainees had three correct of 13 total entries (23.1%).
In the secondary multivariable logistic regression model (Table 3), the covariate significantly positively associated with early EDD accuracy was admit day during weekdays (OR 1.22, 95% CI 1.03–1.45, p = 0.02). The covariates significantly negatively associated with early EDD accuracy included Elixhauser Comorbidity Index ≥ 11 (OR 0.59, 95% CI 0.50–0.70, p < 0.001), any prior admissions within the previous 12 months (OR 0.78, 95% CI 0.66–0.91, p = 0.002), and transfer from an outside hospital (OR 0.73, 95% CI 0.59–0.90, p = 0.003). The c-statistic for the secondary multivariable model was 0.60.
Discussion
We conducted a retrospective study in which we used multivariable logistic regression using a priori identified patient, encounter, unit, provider, and external factors to model an accurate early EDD, defined as instances in which any EDD entered during the first 24 h of hospitalization was equal to the ADD. Just 22.7% of encounters had an accurate EDD entered in the EHR within the first 24 h of hospitalization. In our primary multivariable model, admit day during weekdays, discharge day during weekdays, general teaching units, and clinician-entered EDD were positively associated with an accurate early EDD entry during the hospital encounter, whereas a higher Elixhauser Comorbidity Index and moderate or long LOS were inversely associated. Discrimination was acceptable for the primary model (c-statistic 0.75) but was poor (c-statistic 0.60) for the secondary model that used factors present only upon admission.
Our observations have several explanations. First, patients admitted or discharged during the weekdays had a higher likelihood of an accurate early EDD entry, likely because ITHs routinely occurred Monday through Friday and unit clerks routinely updated the EDD immediately afterward. Second, planned weekend discharges were likely deferred for a variety of reasons: transient weekend staff opting to defer a discharge until the weekday, limited care coordinator availability, and lack of bed availability or appropriate weekend staffing at receiving facilities. These factors likely caused the ultimate discharge date, which would fall on a weekday, to differ from the EDD originally entered in the EHR. Third, while a small number of EDDs were entered by clinicians, those entered by a clinician early during hospitalization were more likely to be accurate: 31.5% of clinician-entered early EDDs were accurate compared to 22.2% of non-clinician early EDD entries (the majority of which were by unit clerks). This was likely because of clinicians’ direct involvement in patient care at the bedside (nurse or responding clinician) or in the decision regarding discharge (attending). Fourth, on teaching units, ITHs took place at the conclusion of formal clinical rounds, as opposed to before rounds on non-teaching units, likely allowing clinicians to make more accurate forecasts of the EDD based on the updated plan. Finally, it is noteworthy that “intensive” teaching units were not associated with accurate early EDD entries; this may be attributable to their staffing models, often consisting of two subspecialist attendings, and their prioritization of educational activities. General teaching units, on the other hand, were staffed by hospitalist attendings who typically prioritize hospital efficiency and operational throughput [33, 34].
Conversely, the Elixhauser Comorbidity Index and LOS were inversely associated with early EDD accuracy. These findings should not be surprising: sicker patients with longer LOS typically have more complex clinical courses, therapeutic management options, and dispositions, which make it increasingly difficult for both clinicians and non-clinicians to forecast an accurate EDD early during hospitalization [35, 36]. The presence of snowfall during hospitalization, a factor external to the EHR chosen a priori to understand the effects of weather on early EDD accuracy, captures patient preferences (e.g., clinical staff agreeing to delay discharge until caregivers could safely drive the patient home) and delays at receiving facilities (e.g., skilled nursing facilities unable to accept a transfer until current patients were safely discharged) that cannot always be anticipated. While not statistically significant in the primary multivariable model (OR 0.84, 95% CI 0.68–1.03, p = 0.09), snowfall showed a trend towards a negative association with early EDD accuracy.
Similar to other studies, we found that estimating the discharge date during hospitalization is challenging [20, 37,38,39]. For our analysis, we defined patient, encounter, unit, and provider variables a priori based on data that were readily available in most EHRs. Of the many variables typically retrievable from most EHRs (and thus usable as generalizable predictors), only Elixhauser Comorbidity Index scores and LOS were significantly associated with lower early EDD accuracy after adjusting for variables associated with markers of care complexity (prior admissions, transfer from outside hospital, DRG weight, discharge destination). Moreover, in our secondary multivariable model, in which we analyzed only factors present upon admission, markers of care complexity including a higher Elixhauser Comorbidity Index, any prior hospital admissions, and transfer from an outside hospital were all significantly associated with lower early EDD accuracy. These findings suggest that initial LOS projections and EDD estimates at the time of admission are unlikely to be useful as predictors for patients with more complex care needs, and explain why the accuracy of EDD entries improves closer to the actual discharge date, as observed by Henry et al. [24].
Our study also offers early insights into the challenges of using generalizable variables common to most EHRs to predict an accurate EDD early during hospitalization [20, 39]. While our primary multivariable model had acceptable discrimination of the outcome, few variables would be useful for prospective prediction, as suggested by the poor discriminative ability of our secondary model. Additionally, while external data sources (bed availability at skilled nursing facilities, weather data, etc.) and even patient-reported data (status of completion of outcome questionnaires and discharge checklists, etc.) [14, 25] may improve predictive utility, this would need to be further validated, ideally using machine learning techniques [38, 40]. Nonetheless, the rapid adoption of APIs provides an opportunity to use external data sources to improve predictions beyond EHR data alone [38,39,40,41]. Finally, our study highlights potential actionable targets for quality improvement initiatives, which can be prioritized by hospital operational and clinical leaders. For example, given that more than half (51.5%) of all encounters analyzed did not have a single early EDD entry, hospital-level strategies could incentivize EDD entry within the first 24 h of hospitalization, potentially targeted towards attendings at the same time as attestations for level of care (inpatient vs. observation), as required by Medicare [42].
Limitations
Our study has several limitations. First, it was conducted on patients admitted to general medicine units at a single academic medical center; different services, clinical units, and hospitals may vary in their EDD documentation practices. Second, we defined an accurate early EDD as any EDD entry within the first 24 h of admission; while it is possible that inaccurate entries may have occurred before or after the accurate EDD entry, most encounters in our dataset had just one EDD entry. Third, while other types of EHR data (conditional discharge orders, provisional discharge status, social determinants of health, etc.) could have predictive value, many were not routinely entered or updated by clinical staff at the time of this analysis. Fourth, we lacked access to other external sources of data (bed availability at receiving facilities); however, this is likely true for most hospitals. Fifth, our study did not address why EDD entries were incorrect; a variety of complex workflow factors could be implicated, and our analysis did not consider the extent to which entries were inaccurate (e.g., one day early vs. ten days late). Anecdotally, the pattern of EDD updates (e.g., rolling the EDD forward by 3 days) by non-clinical staff (unit clerks) on certain days of the week (Fridays) is likely arbitrary, based upon knowledge that the patient would not be discharged over the weekend. We note that while the responsibility for updating the EDD at our institution lies primarily with unit clerks, the delineation of responsibility is not always clear and likely varies by institution.
Conclusions & Implications
In summary, we conducted multivariable analyses to understand the types of variables that have potential to discriminate an accurate EDD entered early during hospitalization. Our analyses provide new insights about the types of factors that could be used to develop clinically and operationally useful prediction models of early EDD accuracy. Such models will likely require use of data from EHRs and external sources that have potential to influence the ADD. Future efforts can be directed towards using, developing, and validating predictive models of EDD accuracy, which can instill confidence in patients, clinicians, and operational leaders.
References
Anthony D, Chetty VK, Kartha A, McKenna K, DePaoli MR, Jack B. Re-engineering the Hospital Discharge: An Example of a Multifaceted Process Evaluation. In: Henriksen K, Battles JB, Marks ES, Lewin DI, eds. Advances in Patient Safety: From Research to Implementation (Volume 2: Concepts and Methodology). Rockville (MD): Agency for Healthcare Research and Quality (US); February 2005.
Manges KA, Wallace AS, Groves PS, Schapira MM, Burke RE. Ready to Go Home? Assessment of Shared Mental Models of the Patient and Discharging Team Regarding Readiness for Hospital Discharge. J Hosp Med. 2020;15:E1-7.
Patel H, Mourad M. Demystifying discharge: Assessing discharge readiness to predict day of discharge. J Hosp Med. 2015;10(12):832–3.
Rohatgi N, Kane M, Winget M, Haji-Sheikhi F, Ahuja N. Factors Associated With Delayed Discharge on General Medicine Service at an Academic Medical Center. J Healthc Qual. 2018;40(6):329–35.
Carroll A, Dowling M. Discharge planning: communication, education and patient participation. Br J Nurs. 2007;16(14):882–6.
Patient Flow Initiative Eliminates Barriers to Discharge. Hosp Case Manag. 2016;24(12):171–2.
Anticipated discharge date improves throughput. Hosp Case Manag. 2009;17(10):155–6.
Khanna S, Sier D, Boyle J, Zeitz K. Discharge timeliness and its impact on hospital crowding and emergency department flow performance. Emerg Med Australas. 2016;28(2):164–70.
Destino L, Bennett D, Wood M, Acuna C, Goodman S, Asch SM, et al. Improving Patient Flow: Analysis of an Initiative to Improve Early Discharge. J Hosp Med. 2019;14(1):22–7.
Lees L, Holmes C. Estimating date of discharge at ward level: a pilot study. Nurs Stand. 2005;19(17):40–3.
Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS). Baltimore, MD: Centers for Medicare & Medicaid Services.
Schnipper JL, Kirwin JL, Cotugno MC, Wahlstrom SA, Brown BA, Tarvin E, et al. Role of pharmacist counseling in preventing adverse drug events after hospitalization. Arch Intern Med. 2006;166(5):565–71.
Mixon AS, Goggins K, Bell SP, Vasilevskis EE, Nwosu S, Schildcrout JS, et al. Preparedness for hospital discharge and prediction of readmission. J Hosp Med. 2016;11(9):603–9.
Fuller TE, Pong DD, Piniella N, Pardo M, Bessa N, Yoon C, et al. Interactive Digital Health Tools to Engage Patients and Caregivers in Discharge Preparation: Implementation Study. J Med Internet Res. 2020;22(4):e15573.
New PW, McDougall KE, Scroggie CP. Improving discharge planning communication between hospitals and patients. Intern Med J. 2016;46(1):57–62.
Sumer T, Taylor DK, McDonald M, McKinney V, Gillard M, Grasel K, et al. The effect of anticipatory discharge orders on length of hospital stay in staff pediatric patients. Am J Med Qual. 1997;12(1):48–50.
Makaryus AN, Friedman EA. Patients’ understanding of their treatment plans and diagnosis at discharge. Mayo Clin Proc. 2005;80(8):991–4.
Kane M, Rohatgi N, Heidenreich P, Thakur A, Winget M, Shum K, et al. Lean-Based Redesign of Multidisciplinary Rounds on General Medicine Service. J Hosp Med. 2018;13(7):482–5.
De Grood A, Blades K, Pendharkar SR. A Review of Discharge-Prediction Processes in Acute Care Hospitals. Healthc Policy. 2016;12(2):105–15.
Sullivan B, Ming D, Boggan JC, Schulteis RD, Thomas S, Choi J, et al. An evaluation of physician predictions of discharge on a general medicine service. J Hosp Med. 2015;10(12):808–10.
Parker J, Coiera E. Improving Clinical Communication. J Am Med Inform Assoc. 2000;7(5):453–61.
Dalal AK, Schnipper J, Massaro A, Hanna J, Mlaver E, McNally K, et al. A web-based and mobile patient-centered “microblog” messaging platform to improve care team communication in acute care. J Am Med Inform Assoc. 2017;24(e1):e178-e84.
Dalal AK, Schnipper JL. Care team identification in the electronic health record: A critical first step for patient-centered communication. J Hosp Med. 2016;11(5):381–5.
Henry OP, Li G, Freundlich RE, Sandberg WS, Wanderer JP. Understanding the Accuracy of Clinician Provided Estimated Discharge Dates. J Med Syst. 2021;46(1):2. Published 2021 Nov 16. https://doi.org/10.1007/s10916-021-01793-w
Dalal AK, Piniella N, Fuller TE, Pong D, Pardo M, Bessa N, et al. Evaluation of electronic health record-integrated digital health tools to engage hospitalized patients in discharge preparation. J Am Med Inform Assoc. 2021.
Levin SR, Gitkind AI, Bartels MN. Effect of the COVID-19 Pandemic on Postacute Care Decision Making. Arch Phys Med Rehabil. 2021;102(2):323–30.
von Elm E, Altman DG, Egger M, et al. Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. BMJ. 2007;335(7624):806–808.
McMahon GT, Katz JT, Thorndike ME, Levy BD, Loscalzo J. Evaluation of a redesign initiative in an internal-medicine residency. N Engl J Med. 2010;362(14):1304–1311.
Mueller SK, Schnipper JL, Giannelli K, Roy CL, Boxer R. Impact of regionalized care on concordance of plan and preventable adverse events on general medicine services. J Hosp Med. 2016;11(9):620–7.
National Centers for Environmental Information, National Oceanic and Atmospheric Administration. https://www.ncdc.noaa.gov/snow-and-ice/daily-snow/. Accessed April 25, 2019.
Elixhauser A, Steiner C, Harris DR, Coffey RM. Comorbidity measures for use with administrative data. Med Care. 1998;36(1):8–27.
van Walraven C, Austin PC, Jennings A, Quan H, Forster AJ. A modification of the Elixhauser comorbidity measures into a point system for hospital death using administrative data. Med Care. 2009;47(6):626–33.
Burden M, Keniston A, Gundareddy VP, Kauffman R, Keach JW, McBeth L, et al. Discharge in the a.m.: A randomized controlled trial of physician rounding styles to improve hospital throughput and length of stay. J Hosp Med. 2023;18(4):302–315.
Kravet SJ, Levine RB, Rubin HR, Wright SM. Discharging patients earlier in the day: a concept worth evaluating. Health Care Manag (Frederick). 2007;26(2):142–146.
Rose M, Pan H, Levinson MR, Staples M. Can frailty predict complicated care needs and length of stay? Intern Med J. 2014;44(8):800–805.
Toh HJ, Lim ZY, Yap P, Tang T. Factors associated with prolonged length of stay in older patients. Singapore Med J. 2017;58(3):134–138.
van Walraven C, Forster AJ. The TEND (Tomorrow’s Expected Number of Discharges) Model Accurately Predicted the Number of Patients Who Were Discharged from the Hospital the Next Day. J Hosp Med. 2018;13(3):158–63.
Safavi KC, Khaniyev T, Copenhaver M, Seelen M, Zenteno Langle AC, Zanger J, et al. Development and Validation of a Machine Learning Model to Aid Discharge Processes for Inpatient Surgical Care. JAMA Netw Open. 2019;2(12):e1917221.
Barnes S, Hamrock E, Toerper M, Siddiqui S, Levin S. Real-time prediction of inpatient length of stay for discharge prioritization. J Am Med Inform Assoc. 2016;23(e1):e2-e10.
Hisham S, Rasheed SA, Dsouza B. Application of Predictive Modelling to Improve the Discharge Process in Hospitals. Healthc Inform Res. 2020;26(3):166–74.
Rajkomar A, Oren E, Chen K, Dai AM, Hajaj N, Hardt M, et al. Scalable and accurate deep learning with electronic health records. NPJ Digit Med. 2018;1:18.
Centers for Medicare & Medicaid Services (CMS), HHS. Medicare Program; Hospital Inpatient Prospective Payment Systems for Acute Care Hospitals and the Long-Term Care Hospital Prospective Payment System and Policy Changes and Fiscal Year 2019 Rates; Quality Reporting Requirements for Specific Providers; Medicare and Medicaid Electronic Health Record (EHR) Incentive Programs (Promoting Interoperability Programs) Requirements for Eligible Hospitals, Critical Access Hospitals, and Eligible Professionals; Medicare Cost Reporting Requirements; and Physician Certification and Recertification of Claims. Final rule. Fed Regist. 2018;83(160):41144–41784.
Acknowledgements
This work was supported by a grant from the Agency for Healthcare Research & Quality (AHRQ) (R21-HS024751). AHRQ had no role in the design or conduct of the study, the collection, analysis, or interpretation of data, or preparation or review of the manuscript. The conclusions in this report are those of the authors and do not necessarily represent the official position of AHRQ.
Funding
Agency for Healthcare Research and Quality (R21-HS024751).
Author information
Contributions
Study conception and design were performed by N.P., T.F., H.S., and A.D. Material preparation, data collection, and analysis were performed by N.P., T.F., C.Y., S.L., and A.D. The manuscript text was written by N.P., T.F., and A.D. The tables and figures were prepared by N.P, C.Y., and A.D. Editing and approval of the final manuscript were performed by N.P., T.F., L.S., H.S., C.Y., S.L., J.S., and A.D.
Ethics declarations
Ethics Approval
This study was approved by the Mass General Brigham Human Research Committee, at Brigham and Women’s Hospital, Boston, MA. The procedures used in this study adhere to the tenets of the Declaration of Helsinki.
Consent to Participate
This retrospective observational study was approved by the Mass General Brigham Human Research Committee with a waiver of the requirement for informed consent.
Competing interests
The authors have no competing interests to declare that are relevant to the content of this article.
Cite this article
Piniella, N.R., Fuller, T.E., Smith, L. et al. Early Expected Discharge Date Accuracy During Hospitalization: A Multivariable Analysis. J Med Syst 47, 63 (2023). https://doi.org/10.1007/s10916-023-01952-1