DOI: 10.1145/3584371.3613054

Multimodality-aware Intra-dialytic Hypotension Classifier: A Bilingual NLP Approach to Classify Dialysis Records

Published: 04 October 2023

Abstract

Nursing records are rich in free-text data that captures patients' physiological status and physicians' assessments, offering a more holistic view than tabular data alone. Reading these free-text descriptions often lets physicians quickly grasp a patient's history and current condition, supporting accurate diagnosis and treatment. A particularly pressing task is identifying symptoms of intra-dialytic hypotension (IDH) in these records. IDH, a sporadic but potentially fatal complication of hemodialysis in end-stage kidney disease patients, is diagnosed by a decrease of at least 20 mmHg in systolic blood pressure during hemodialysis, or a drop of at least 10 mmHg in mean arterial pressure accompanied by hypo-perfusion-related symptoms. We therefore developed a Multimodality-aware IDH Classifier (MMIDHC), a natural language processing (NLP) pipeline for classifying these records, focused on English-Chinese bilingual dialysis records with respect to IDH. The model comprises three modules, outlined in Figure 1.
Free-text Pre-processing: To investigate how long sentences and noise affect classification, we segmented each record into sentences. We then enriched contextual relationships and enlarged the sample size by merging adjacent segments in both antegrade and retrograde order, a form of context-ordered shuffling; a minimal sketch of this step follows.
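As an illustration of this pre-processing step, the sketch below segments a bilingual record and merges adjacent segments in antegrade and retrograde order. The delimiter set, example text, and function names are our assumptions for illustration, not the authors' exact implementation.

```python
import re


def segment(record: str) -> list[str]:
    # Split a free-text nursing record into short segments on common
    # Chinese/English sentence delimiters (the delimiter set is an assumption).
    parts = re.split(r"[。；;.!?\n]+", record)
    return [p.strip() for p in parts if p.strip()]


def merge_adjacent(segments: list[str]) -> list[str]:
    # Augment the sample by merging each pair of adjacent segments in both
    # antegrade (i, i+1) and retrograde (i+1, i) order, so the classifier
    # sees the same context under both orderings ("context-ordered shuffling").
    merged = []
    for a, b in zip(segments, segments[1:]):
        merged.append(f"{a} {b}")  # antegrade merge
        merged.append(f"{b} {a}")  # retrograde merge
    return segments + merged


record = "Blood pressure dropped to 85/50 mmHg. 病人主訴頭暈、噁心。調降脫水速率後症狀改善。"
samples = merge_adjacent(segment(record))
```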
Free-text Feature Extraction: To expedite development and avoid building NLP models from scratch, we adopted publicly available models that support Chinese for feature extraction: the Multilingual Universal Sentence Encoder (mUSE) proposed by Yang et al. [3] and the Multilingual Sentence-BERT (mSBERT) introduced by Reimers and Gurevych [2], which extends Sentence-BERT (SBERT) [1] with a teacher-student distillation scheme. Both multilingual models serve as encoders that parse free text and convert natural language into feature vectors. At a fixed threshold, the two encoders showed minimal differences in the evaluation metrics, but mSBERT was consistently about 20 times faster than mUSE. Because encoder speed affects the operation of the multi-module system, we selected mSBERT as the primary encoder.
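The sketch below shows how a publicly available multilingual SBERT checkpoint, distilled with the teacher-student approach of [2], can be used as such an encoder via the sentence-transformers library. The specific checkpoint name is an assumption, since the abstract does not state which mSBERT weights were used.

```python
from sentence_transformers import SentenceTransformer

# A public multilingual SBERT checkpoint trained with the teacher-student
# distillation of Reimers & Gurevych [2]; the exact checkpoint used by the
# authors is not stated, so this model name is an assumption.
encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

segments = [
    "Systolic blood pressure fell from 142 to 108 mmHg during the session.",
    "病人主訴頭暈、冒冷汗，調降脫水速率。",
]

# Each bilingual segment is mapped to a fixed-length sentence embedding,
# which serves as the feature vector for the downstream MLP classifier.
features = encoder.encode(segments)
print(features.shape)  # (2, 384) for this checkpoint
```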
Classification Learning: We employed a multilayer perceptron (MLP) to classify the extracted features. To improve reliability, we combined the MLP's IDH prediction with the blood-pressure readings from the ongoing dialysis session in an ensemble decision rule, which enables more nuanced decisions: if the MLP detects IDH symptoms but the blood-pressure readings do not satisfy the IDH definition, the record is labelled Non-IDH because the clinical definition is not met; if both the MLP and the blood-pressure readings indicate a normal state, the record is likewise labelled Non-IDH. A sketch of this decision rule follows. Our baseline model achieved encouraging results across all metrics: accuracy 0.951, recall 0.955, precision 0.913, and F1-score 0.932. For more information, please visit https://github.com/IlikeBB/Far-Eastern-Memorial-Hospital-IDH_RecommendationSystem_Project.
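To make the ensemble rule concrete, the following sketch fuses a text-classifier prediction with the blood-pressure criterion. The thresholds follow the IDH definition stated in the abstract, while the function names and interface are hypothetical rather than the authors' implementation.

```python
def meets_bp_criterion(pre_sbp, intra_sbp, pre_map=None, intra_map=None,
                       symptomatic=False):
    # IDH definition from the abstract: a 20 mmHg drop in systolic blood
    # pressure during hemodialysis, or a 10 mmHg drop in mean arterial
    # pressure together with hypo-perfusion-related symptoms.
    sbp_drop = (pre_sbp - intra_sbp) >= 20
    map_drop = (pre_map is not None and intra_map is not None
                and (pre_map - intra_map) >= 10 and symptomatic)
    return sbp_drop or map_drop


def ensemble_label(mlp_predicts_idh, bp_criterion_met):
    # The text classifier alone cannot assert IDH: a positive MLP prediction
    # is accepted only when the blood-pressure criterion is also met.
    return "IDH" if (mlp_predicts_idh and bp_criterion_met) else "Non-IDH"


# The MLP flags symptom wording, but the pressure drop does not satisfy the
# definition, so the record is labelled Non-IDH.
print(ensemble_label(True, meets_bp_criterion(pre_sbp=130, intra_sbp=118)))
```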

References

[1] Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. arXiv preprint arXiv:1908.10084 (2019).
[2] Nils Reimers and Iryna Gurevych. 2020. Making monolingual sentence embeddings multilingual using knowledge distillation. arXiv preprint arXiv:2004.09813 (2020). http://arxiv.org/abs/2004.09813
[3] Yinfei Yang et al. 2019. Multilingual universal sentence encoder for semantic retrieval. arXiv preprint arXiv:1907.04307 (2019).

Published In

BCB '23: Proceedings of the 14th ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics
September 2023
626 pages
ISBN: 9798400701269
DOI: 10.1145/3584371
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. free-text
  2. nlp
  3. intradialytic hypotension
  4. multi-modules
  5. dialysis

Qualifiers

  • Abstract

Funding Sources

  • Far Eastern Memorial Hospital
  • National Science and Technology Council, Taiwan

Conference

BCB '23
Acceptance Rates

Overall Acceptance Rate 254 of 885 submissions, 29%
