

Artificial Intelligence Approaches for Medical Diagnostics in Korea

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Machine Learning and Artificial Intelligence in Diagnostics".

Deadline for manuscript submissions: closed (28 February 2022) | Viewed by 95819

Special Issue Editor


Guest Editor
Institute of Digital Anti-Aging Healthcare, Inje University, Gimhae 50834, Republic of Korea
Interests: aging science; applied artificial intelligence; digital healthcare; human computer interaction; software engineering

Special Issue Information

Dear Colleagues,

The Korean government has established a plan to make the country the world's third-largest digital competitor and to create an economic effect of about 400 billion dollars through AI-based businesses by 2030. In the healthcare sector in particular, the government announced that it would use AI to foster the biotechnology and medical industries as next-generation core growth industries. With the country's successful ICT-based response to COVID-19, further attention has been focused on AI in medicine and its related industries, and Korean contributions to AI research in medicine are growing rapidly. At this point, it seems worthwhile to understand and share South Korean research on artificial intelligence approaches for medical diagnostics, so that readers can follow its progress, compare it with that of other countries, and anticipate future directions. This Special Issue aims to bring together scholars, professors, researchers, engineers, and administrators in Korea using state-of-the-art technologies and ideas to significantly advance AI and machine learning in medical diagnostic technology, including but not limited to the following topics:

Diagnostics with smart IoT and mobile/wearable devices;

Intelligent in vitro diagnostic analysis;

Medical image analysis and diagnostics;

Text mining and natural language processing in medicine;

Intelligent voice transcription;

Knowledge engineering approaches in medical diagnostics;

Data analytics and mining for biomedical decision support;

AI with electronic medical records;

Artificial neural networks and deep learning in medicine;

Models and systems for AI-based public health.

Prof. Dr. Hee-Cheol Kim
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and are listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial intelligence
  • Biosensors and biochips
  • Clinical medicine
  • Diagnostics
  • Knowledge engineering
  • Diagnostic robotics
  • Machine learning
  • Precision medicine

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (22 papers)


Research


14 pages, 4292 KiB  
Article
Prediction of Two-Year Recurrence-Free Survival in Operable NSCLC Patients Using Radiomic Features from Intra- and Size-Variant Peri-Tumoral Regions on Chest CT Images
by Soomin Lee, Julip Jung, Helen Hong and Bong-Seog Kim
Diagnostics 2022, 12(6), 1313; https://doi.org/10.3390/diagnostics12061313 - 25 May 2022
Cited by 8 | Viewed by 2228
Abstract
To predict the two-year recurrence-free survival of patients with non-small cell lung cancer (NSCLC), we propose a prediction model using radiomic features of the inner and outer regions of the tumor. The intratumoral region and the peritumoral regions from the boundary to 3 cm were used to extract radiomic features based on intensity, texture, and shape. Feature selection was performed to identify significant radiomic features for predicting two-year recurrence-free survival, and patients were classified into recurrence and non-recurrence groups using SVM and random forest classifiers. The probability of two-year recurrence-free survival was estimated with the Kaplan–Meier curve. In the experiment, CT images of 217 NSCLC patients at stages I–IIIA who underwent surgical resection at the Veterans Health Service Medical Center (VHSMC) were used. Regarding classification performance on whole tumors, the combined radiomic features for intratumoral and peritumoral regions of 6 mm and 9 mm showed improved performance (AUC 0.66, 0.66) compared to T stage and N stage (AUC 0.60), the intratumoral classifier (AUC 0.64), and the peritumoral 6 mm and 9 mm classifiers (AUC 0.59, 0.62). In the assessment of classification performance according to tumor size, the combined regions of 21 mm and 3 mm were significant when predicting outcomes compared to other regions for tumors under 3 cm (AUC 0.70) and tumors of 3–5 cm (AUC 0.75), respectively. For tumors larger than 5 cm, the combined 3 mm region was significant compared to the other features (AUC 0.71). The experiment confirmed that peritumoral and combined regions outperformed the intratumoral region for tumors smaller than 5 cm, and that intratumoral and combined regions performed more stably than the peritumoral region for tumors larger than 5 cm.
(This article belongs to the Special Issue Artificial Intelligence Approaches for Medical Diagnostics in Korea)
Figures

Figure 1. Examples of lung tumors with similar appearance between recurrence and non-recurrence. Non-recurrence tumors are in the first row, and recurrence tumors are in the second row.
Figure 2. The pipeline of the radiomic-based prediction model for 2-year recurrence-free survival prediction.
Figure 3. The criteria of patient selection.
Figure 4. Illustration of data preparation. (a) Preprocessing and (b–d) region definitions for CT images.
Figure 5. ROC curves for each classifier. The blue curve indicates the mean value of the 5-fold cross-validation results, and the gray band indicates the AUC variance of the 5-fold cross-validation: (a) T and N stages; (b) intratumoral radiomic classifier; (c) peritumoral 3 mm radiomic classifier; (d) peritumoral 12 mm radiomic classifier; (e) combined 6 mm radiomic classifier; (f) combined 9 mm radiomic classifier.
Figure 6. Kaplan–Meier curves for 2-year recurrence-free survival: (a) real curve; (b) T stage and N stage; (c) intratumoral radiomic classifier; (d) peritumoral 3 mm radiomic classifier; (e) peritumoral 12 mm radiomic classifier; (f) combined 6 mm radiomic classifier; (g) combined 9 mm radiomic classifier.
Figure 7. Appearance of lung tumors according to tumor size groups on CT images: (a) Group 1; (b) Group 2; (c) Group 3.
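The survival estimates mentioned in the abstract above come from the Kaplan–Meier estimator. As a rough illustration of how such a curve is computed (a generic sketch, not the authors' code), in pure Python:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier recurrence-free survival estimate.

    times:  follow-up time for each patient
    events: 1 if recurrence was observed at that time, 0 if censored
    Returns a list of (time, survival probability) points.
    """
    curve = []
    surv = 1.0
    for t in sorted(set(times)):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)  # events at t
        n = sum(1 for ti in times if ti >= t)                               # at risk at t
        if d:
            surv *= 1.0 - d / n
        curve.append((t, surv))
    return curve
```

With `times = [12, 20, 24]` (months) and `events = [1, 1, 0]`, the estimate drops to 2/3 at 12 months and 1/3 at 20 months; the censored patient at 24 months leaves it unchanged.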
20 pages, 27058 KiB  
Article
Deep Learning Segmentation in 2D X-ray Images and Non-Rigid Registration in Multi-Modality Images of Coronary Arteries
by Taeyong Park, Seungwoo Khang, Heeryeol Jeong, Kyoyeong Koo, Jeongjin Lee, Juneseuk Shin and Ho Chul Kang
Diagnostics 2022, 12(4), 778; https://doi.org/10.3390/diagnostics12040778 - 22 Mar 2022
Cited by 12 | Viewed by 3406
Abstract
X-ray angiography is commonly used in the diagnosis and treatment of coronary artery disease, with the advantage of visualizing the inside of blood vessels in real time. However, it has several disadvantages arising in the acquisition process that cause inconvenience and difficulty. Here, we propose a novel segmentation and nonrigid registration method to provide useful real-time assistive images and information. A convolutional neural network is used for the segmentation of coronary arteries in 2D X-ray angiography acquired from various angles in real time. To compensate for errors that occur during 2D X-ray angiography acquisition, 3D CT angiography is used to analyze the topological structure. A novel energy-function-based 3D deformation and optimization is utilized to implement real-time registration. We evaluated the proposed method on 50 series from 38 patients by comparison with the ground truth. The proposed segmentation method achieved Precision, Recall, and F1 scores of 0.7563, 0.6922, and 0.7176 for all vessels, 0.8542, 0.6003, and 0.7035 for markers, and 0.8897, 0.6389, and 0.7386 for bifurcation points, respectively. The nonrigid registration method achieved average distances of 0.8705, 1.06, and 1.5706 mm for all vessels, markers, and bifurcation points. The overall process execution time was 0.179 s.
(This article belongs to the Special Issue Artificial Intelligence Approaches for Medical Diagnostics in Korea)
Figures

Figure 1. Process of the proposed method.
Figure 2. Overall network architecture of the proposed method.
Figure 3. Creation of the graph structure for the vascular structure.
Figure 4. Creation of all connected structures that can be combined.
Figure 5. Comparison of point matching methods. (a) Distance-based method. (b) Gradient-based method. (c) Proposed method.
Figure 6. XA images with different angles for the same patient. (a) Primary angle: −29.3, secondary angle: −18.7. (b) Primary angle: 42.6, secondary angle: 15.8.
Figure 7. Creation of simulation data.
Figure 8. Error after nonrigid registration. (a) Parameter μ. (b) Parameter τ. (c) Parameter φ.
Figure 9. Result of segmentation of simulation data. (a) Error measurement of centerline. (b) Error measurement of segment.
Figure 10. Result of nonrigid registration of simulation data. (a) Error measurement of 2D data. (b) Error measurement of 3D data.
Figure 11. Results of segmentation. (a) Left coronary artery segmentation. (b) Right coronary artery segmentation.
Figure 12. Results of nonrigid registration. (a) Left coronary artery registration. (b) Right coronary artery registration.
Figure 13. Color representation according to the depth of blood vessels [21].
Figure 14. Results of measurement of the nonrigid registration of clinical data. (a) ADD. (b) Marker. (c) Bifurcation.
Figure 15. Comparison of the nonrigid registration results. (a) Proposed method. (b) Previous study [16].
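The Precision, Recall, and F1 figures quoted in the abstract above are standard count-based metrics. A minimal sketch of how they follow from true-positive, false-positive, and false-negative counts (illustrative only, not the authors' evaluation code):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall and F1 score from match counts
    (e.g. correctly/incorrectly detected vessel points)."""
    precision = tp / (tp + fp)          # fraction of detections that are correct
    recall = tp / (tp + fn)             # fraction of ground truth that is found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1
```

For example, 8 correct detections with 2 spurious ones and 2 misses gives precision, recall, and F1 all equal to 0.8.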
14 pages, 1897 KiB  
Article
Influence of the COVID-19 Pandemic on the Subjective Life Satisfaction of South Korean Adults: Bayesian Nomogram Approach
by Haewon Byeon
Diagnostics 2022, 12(3), 761; https://doi.org/10.3390/diagnostics12030761 - 21 Mar 2022
Cited by 6 | Viewed by 2665
Abstract
To understand the changes in the lives of adults living in local communities due to the COVID-19 pandemic, it is necessary to identify subjective life satisfaction and the key factors affecting it. This study identified the effect of COVID-19 on life satisfaction using epidemiological data representing adults in South Korean communities and developed a model for predicting the factors adversely affecting life satisfaction by applying a Bayesian nomogram. The subjects of this study were 227,808 adults aged 19 years or older. Life satisfaction was measured in units of 10 points from 0 to 100: a score of 30 or less, corresponding to −1 standard deviation, was reclassified as dissatisfied, and a score of 40 or more was reclassified as satisfied. The nomogram developed in this study showed that "females who were between 30 and 39 years old, living in urban areas, with fewer meetings and sleeping hours, concerned about infection for themselves and the weak in the family due to the COVID-19 pandemic, concerned about death, with a mean household monthly income of KRW 3–5 million, who were non-smokers, with poor subjective health, and an education level of college graduation or above" would have a 66% chance of life dissatisfaction due to the COVID-19 pandemic. The results suggest that the government needs to provide not only economic support but also education on infectious diseases and customized psychological counseling programs for those at high risk of life dissatisfaction after the COVID-19 pandemic.
(This article belongs to the Special Issue Artificial Intelligence Approaches for Medical Diagnostics in Korea)
Figures

Figure 1. Example of a Bayesian nomogram [11].
Figure 2. A Bayesian nomogram for predicting the subjective life dissatisfaction of Korean adults in the COVID-19 pandemic in Korea. covid_phy = number of meetings with friends or neighbors after the outbreak of COVID-19 (1 increased, 2 similar, or 3 decreased); covid_sleep = changes in sleeping hours after the COVID-19 pandemic (1 increased, 2 similar, or 3 decreased); age_N = age group (1 = 19–29, 2 = 30–39, 3 = 40–49, 4 = 50–59, 5 = 60+ years); edu_N = education level (1 elementary school graduation or below, 2 middle school graduation, 3 high school graduation, 4 college graduation or above); anx_dis = concerns about COVID-19 infection (1 concerned, 2 indifferent, 3 not concerned); anx_econ = concerns about economic damage due to COVID-19 (1 concerned, 2 indifferent, 3 not concerned); dong_by_code = residential area type (1 urban, 2 rural); anx_sham = concerns about criticism from others due to COVID-19 infection (1 concerned, 2 indifferent, 3 not concerned); income_N = mean monthly household income (1 = less than KRW 1 million, 2 = KRW 1 to 3 million, 3 = KRW 3 to 5 million, 4 = KRW 5 million or more); sex = (1 male, 2 female); anx_death = fear of death due to COVID-19 infection (1 concerned, 2 indifferent, 3 not concerned); smoke_N = smoking (1 current smoker, 2 past smoker, 3 non-smoker); Sub_health_N = subjective health level (1 good, 2 average, 3 bad).
Figure 3. General accuracy (10-fold validation) of the Bayesian nomogram for predicting the subjective life dissatisfaction of Korean adults in the COVID-19 pandemic in Korea.
Figure 4. Calibration plot (10-fold validation) of the Bayesian nomogram for predicting the subjective life dissatisfaction of Korean adults in the COVID-19 pandemic in Korea.
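A Bayesian nomogram of the kind described above scores each risk factor by its contribution to the log-odds of the outcome and sums those contributions. A hypothetical sketch of that combination step (the prior and likelihood ratios below are made-up illustrative numbers, not values from the study):

```python
import math

def nomogram_probability(prior, likelihood_ratios):
    """Combine a prior probability with per-factor likelihood ratios
    (naive-Bayes style): multiply the prior odds by each factor's
    ratio, i.e. add log-odds, then convert back to a probability."""
    log_odds = math.log(prior / (1.0 - prior))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    return 1.0 / (1.0 + math.exp(-log_odds))
```

For instance, a prior of 0.3 combined with two factors whose likelihood ratios are 2.0 and 1.5 yields a posterior probability of 9/16 = 0.5625.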
26 pages, 3395 KiB  
Article
Automated Differentiation of Atypical Parkinsonian Syndromes Using Brain Iron Patterns in Susceptibility Weighted Imaging
by Yun Soo Kim, Jae-Hyeok Lee and Jin Kyu Gahm
Diagnostics 2022, 12(3), 637; https://doi.org/10.3390/diagnostics12030637 - 5 Mar 2022
Cited by 4 | Viewed by 3541
Abstract
In recent studies, iron overload has been reported in atypical parkinsonian syndromes. The topographic pattern of iron distribution in the deep brain nuclei varies by subtype of parkinsonian syndrome, reflecting the underlying disease pathologies. In this study, we developed a novel framework that automatically analyzes the disease-specific patterns of iron accumulation using susceptibility-weighted imaging (SWI). We constructed various machine learning models that classify diseases using radiomic features extracted from SWI, representing distinctive iron distribution patterns for each disorder. Since radiomic features are sensitive to the region of interest, we used a combination of T1-weighted MRI and SWI to improve the segmentation of the deep brain nuclei. Radiomics was applied to SWI from 34 patients with the parkinsonian variant of multiple system atrophy, 21 patients with the cerebellar variant of multiple system atrophy, 17 patients with progressive supranuclear palsy, and 56 patients with Parkinson's disease. The machine learning classifiers trained on radiomic features extracted from the iron-reflected segmentation results produced an average area under the receiver operating characteristic curve (AUC) of 0.8607 on the training data and 0.8489 on the testing data, superior to the conventional classifier with segmentation using only T1-weighted images. Our radiomic model based on the hybrid images is a promising tool for automatically differentiating atypical parkinsonian syndromes.
(This article belongs to the Special Issue Artificial Intelligence Approaches for Medical Diagnostics in Korea)
Figures

Figure 1. SWI axial view of parkinsonian syndrome patients: parkinsonian variant multiple system atrophy (MSA-P), cerebellar variant multiple system atrophy (MSA-C), progressive supranuclear palsy (PSP), and Parkinson's disease (PD). An increased iron-related signal in the anterior and medial aspects of the globus pallidus (open arrow) on SWI is a highly specific sign of PSP. For MSA-P, significant accumulation of iron in the lateral aspect of the globus pallidus adjacent to the putamen, posterolateral putaminal hypointensity (closed arrow), and a lateral-to-medial gradient appear consistently.
Figure 2. Overall flowchart of combining T1w and SWI, SWI segmentation, feature extraction and selection, and disease classification. We create a hybrid image combining T1w and SWI for iron-reflected DGM segmentation, extract texture-representative features, and classify parkinsonian disorders with the significant features selected using various machine learning algorithms.
Figure 3. Flowchart of making a deep gray matter (DGM) mask using the T1w and SWI images. T1w and SWI were preprocessed through normalization, bias correction, and registration. The merging weight coefficients were calculated from the initial DGM mask obtained using only T1w segmentation, and a hybrid contrast image (HC) was created as a result. The DGM mask was obtained by registering the HC to the MNI atlas space using non-linear registration. The final mask was obtained by applying inverse warping to the original coordinates.
Figure 4. Deep gray matter (DGM) axial slice in T1w, SWI, and HC images. HC has both a high-contrast cortex, which is the advantage of T1w, and a more prominent DGM boundary, which is visible in the SWI.
Figure 5. Putamen mask from segmentation using only the T1-weighted image (FreeSurfer) and using T1w and SWI (proposed method), with SWI overlaid. The segmentation result using only T1w includes the part without iron accumulation when overlaid with the SWI (yellow). The proposed method reflects more of the iron deposition (red).
Figure A1. Receiver operating characteristic (ROC) curves of the RBF SVM classifier for MSA-P vs. MSA-C.
Figure A2. Receiver operating characteristic (ROC) curves of the RBF SVM classifier for MSA-P vs. PD.
Figure A3. Receiver operating characteristic (ROC) curves of the RBF SVM classifier for MSA-P vs. PSP.
Figure A4. Receiver operating characteristic (ROC) curves of the RBF SVM classifier for MSA-C vs. PD.
Figure A5. Receiver operating characteristic (ROC) curves of the RBF SVM classifier for MSA-C vs. PSP.
Figure A6. Receiver operating characteristic (ROC) curves of the RBF SVM classifier for PD vs. PSP.
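The AUC values reported above can be computed directly from classifier scores via the rank-based (Mann–Whitney) formulation. A small generic sketch, unrelated to the authors' implementation:

```python
def roc_auc(scores, labels):
    """AUC as the probability that a randomly chosen positive case
    receives a higher score than a randomly chosen negative case
    (ties count as half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Perfectly separated scores give an AUC of 1.0, and a classifier whose scores carry no information about the labels sits at 0.5.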
15 pages, 2133 KiB  
Article
Automated Diagnosis of Cervical Intraepithelial Neoplasia in Histology Images via Deep Learning
by Bum-Joo Cho, Jeong-Won Kim, Jungkap Park, Gui-Young Kwon, Mineui Hong, Si-Hyong Jang, Heejin Bang, Gilhyang Kim and Sung-Taek Park
Diagnostics 2022, 12(2), 548; https://doi.org/10.3390/diagnostics12020548 - 21 Feb 2022
Cited by 14 | Viewed by 3435
Abstract
Artificial intelligence has enabled the automated diagnosis of several cancer types. We aimed to develop and validate deep learning models that automatically classify cervical intraepithelial neoplasia (CIN) based on histological images. Microscopic images of CIN3, CIN2, CIN1, and non-neoplasm were obtained. The performances of two pre-trained convolutional neural network (CNN) models adopting the DenseNet-161 and EfficientNet-B7 architectures were evaluated and compared with those of pathologists. The dataset comprised 1106 images from 588 patients; images from 10% of patients were included in the test dataset. The mean accuracies for the four-class classification were 88.5% (95% confidence interval [CI], 86.3–90.6%) by DenseNet-161 and 89.5% (95% CI, 83.3–95.7%) by EfficientNet-B7, similar to human performance (93.2% and 89.7%). The mean per-class area under the receiver operating characteristic curve values by EfficientNet-B7 were 0.996, 0.990, 0.971, and 0.956 in the non-neoplasm, CIN3, CIN1, and CIN2 groups, respectively. The class activation map detected the diagnostic area for CIN lesions. In the three-class classification, with CIN2 and CIN3 as one group, the mean accuracies of DenseNet-161 and EfficientNet-B7 increased to 91.4% (95% CI, 88.8–94.0%) and 92.6% (95% CI, 90.4–94.9%), respectively. CNN-based deep learning is a promising tool for diagnosing CIN lesions on digital histological images.
(This article belongs to the Special Issue Artificial Intelligence Approaches for Medical Diagnostics in Korea)
Figures

Figure 1. A training curve for training and validation accuracies. The validation accuracy reached a plateau within 20 epochs during model training.
Figure 2. Heatmaps of the confusion matrices of the best-performing CNN models and human pathologists in the four-class classification. There were three false-negative cases with the best-performing DenseNet-161 model (a), and no false-negative or false-positive cases with the best-performing EfficientNet-B7 (b). Pathologist 1 (c) classified CIN2 with higher sensitivity than pathologist 2 (d).
Figure 3. Per-class ROC curves for the four-class classification by the best-performing CNN models. For the best-performing DenseNet-161 (a) and EfficientNet-B7 (b), the AUC was higher for discriminating non-neoplasm and CIN3 than for classifying CIN2 and CIN1.
Figure 4. Heatmaps of the confusion matrices of the best-performing CNN models and human pathologists in the three-class classification. The overall accuracies increased up to 94.0% by DenseNet-161 (a) and 94.9% by EfficientNet-B7 (b), similar to those of human pathologists 1 and 2, 95.7% (c) and 92.3% (d), respectively.
Figure 5. Grad-CAM images by EfficientNet-B7. Normal squamous epithelium was highlighted in Grad-CAM images (a–d). Images from the cervix interpreted as non-neoplasm by EfficientNet-B7 include exocervix (a), metaplastic mucosa from the transformation zone (b), cervicitis and erosion (c), and atrophic mucosa (d). In CIN1, layers with koilocytotic cells were mainly highlighted (e). The highlighted areas extended to the upper two-thirds of the epithelium in CIN2 (f) and the full thickness of the epithelium in CIN3 (g). Normal endocervical glands ((g), black arrows) were not highlighted.
Figure A1. Histology of cases misclassified by the CNN models. A case with scarce koilocytotic cells but basal atypia was false-negative (a). CIN3 showing basal/parabasal-type atypia throughout most but not all of the epithelium was downgraded to CIN2 (b). CIN2 (c) downgraded to CIN1 showed koilocytotic changes in the upper half and maturation in the uppermost layers but had atypia focally extending to the lower half of the epithelium (black arrow). In CIN1 upgraded to CIN2, the epithelium was disoriented (d). CIN2 with koilocytosis (e) and atrophic CIN2 (f) were upgraded to CIN3.
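The accuracies above are reported with 95% confidence intervals. One common way to obtain such an interval is the normal approximation to the binomial; the sketch below shows that generic formula, which is not necessarily the method the authors used:

```python
def accuracy_with_ci(correct, total, z=1.96):
    """Classification accuracy with a normal-approximation confidence
    interval; z = 1.96 gives the usual 95% level."""
    p = correct / total
    half_width = z * (p * (1.0 - p) / total) ** 0.5
    return p, (p - half_width, p + half_width)
```

For example, 90 correct predictions out of 100 gives an accuracy of 0.900 with an approximate 95% CI of (0.841, 0.959); larger test sets shrink the interval in proportion to the square root of the sample size.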
18 pages, 2481 KiB  
Article
DWT-EMD Feature Level Fusion Based Approach over Multi and Single Channel EEG Signals for Seizure Detection
by Gopal Chandra Jana, Anupam Agrawal, Prasant Kumar Pattnaik and Mangal Sain
Diagnostics 2022, 12(2), 324; https://doi.org/10.3390/diagnostics12020324 - 27 Jan 2022
Cited by 17 | Viewed by 3359
Abstract
Brain-computer interface technology enables a pathway for analyzing EEG signals for seizure detection. EEG signal decomposition, feature extraction, and machine learning techniques are well established in seizure detection. However, the choice of decomposition technique and the concatenation of their features for seizure detection remain open questions. This work proposes a DWT-EMD feature-level fusion-based seizure detection approach over multi- and single-channel EEG signals, and studies the usefulness of fused discrete wavelet transform (DWT) and empirical mode decomposition (EMD) features, relative to individual DWT and EMD features, over the classifiers SVM, SVM with RBF kernel, decision tree, and bagging classifier. All classifiers achieved improved performance with DWT-EMD feature-level fusion on two benchmark seizure detection EEG datasets. Detailed quantitative results are given in the Results section.
(This article belongs to the Special Issue Artificial Intelligence Approaches for Medical Diagnostics in Korea)
Figures

Figure 1. All coefficients of five levels of DWT over the experimental EEG signals: (a) the approximation coefficients, and (b–f) the detail coefficients of DWT level 5.
Figure 2. Five IMFs of EMD applied to chb01_01 of Dataset 2. IMF0, IMF1, IMF2, IMF3, and IMF4 are shown in (a–e), respectively.
Figure 3. Illustrative diagram of the proposed approach.
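Feature-level fusion in the sense above means concatenating statistics computed from DWT sub-bands with statistics computed from EMD intrinsic mode functions (IMFs) into one feature vector for the classifier. A toy sketch using a one-level Haar DWT (the EMD step is omitted and its IMFs are assumed given; this is not the paper's pipeline):

```python
def haar_dwt(signal):
    """One level of the Haar DWT: pairwise sums (approximation)
    and differences (detail), scaled by 1/sqrt(2)."""
    s = 2 ** -0.5
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def band_stats(xs):
    """Mean and variance of one sub-band or IMF."""
    m = sum(xs) / len(xs)
    return [m, sum((x - m) ** 2 for x in xs) / len(xs)]

def fused_features(signal, imfs):
    """Feature-level fusion: concatenate DWT-band statistics with
    per-IMF statistics into a single vector for a classifier."""
    approx, detail = haar_dwt(signal)
    feats = band_stats(approx) + band_stats(detail)
    for imf in imfs:
        feats += band_stats(imf)
    return feats
```

The resulting vector simply grows with each added decomposition, which is what lets the same downstream classifiers (SVM, decision tree, bagging) consume DWT-only, EMD-only, or fused features interchangeably.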
12 pages, 1877 KiB  
Article
Diagnosis of Depressive Disorder Model on Facial Expression Based on Fast R-CNN
by Young-Shin Lee and Won-Hyung Park
Diagnostics 2022, 12(2), 317; https://doi.org/10.3390/diagnostics12020317 - 27 Jan 2022
Cited by 34 | Viewed by 5836
Abstract
This study examines related literature to propose a model based on artificial intelligence (AI) that can assist in the diagnosis of depressive disorder. Depressive disorder can be diagnosed through a self-report questionnaire, but it is necessary to check the mood and confirm the consistency of subjective and objective descriptions. Smartphone-based assistance in diagnosing depressive disorders can quickly lead to their identification and provide data for intervention. Through fast region-based convolutional neural networks (Fast R-CNN), a deep learning method that recognizes vector-based information, a model to assist in the diagnosis of depressive disorder can be devised by tracking position changes of the eyes and lips and inferring emotions from accumulated photos of participants who repeatedly take part in the diagnosis of depressive disorder.
(This article belongs to the Special Issue Artificial Intelligence Approaches for Medical Diagnostics in Korea)
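As an illustration of the kind of signal such a model could draw on, the sketch below computes the mean displacement of facial landmarks (e.g., eye and lip corners) between a reference photograph and a new one. The coordinates and function name are hypothetical, not taken from the paper:

```python
import numpy as np

def landmark_displacement(ref_pts, new_pts):
    """Mean Euclidean displacement between matched (x, y) landmarks."""
    ref = np.asarray(ref_pts, dtype=float)
    new = np.asarray(new_pts, dtype=float)
    return float(np.linalg.norm(new - ref, axis=1).mean())

# Hypothetical eye-corner and lip-corner coordinates (pixels)
reference = [(30, 40), (70, 40), (40, 80), (60, 80)]
current = [(30, 42), (70, 42), (38, 84), (62, 84)]
change = landmark_displacement(reference, current)
```

A real system would obtain the landmark coordinates from a detector such as Fast R-CNN, as the paper proposes; the displacement summary above is only one simple way to quantify "position change of the eyes and lips."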
Figure 1
<p>DSM-5 [<a href="#B44-diagnostics-12-00317" class="html-bibr">44</a>] and PHQ-9 [<a href="#B43-diagnostics-12-00317" class="html-bibr">43</a>].</p>
Figure 2
<p>Examples of applications that provide self-diagnosis services for depression through the PHQ-9: (<b>a</b>) National Mental Health Center [<a href="#B47-diagnostics-12-00317" class="html-bibr">47</a>], (<b>b</b>) Inquiry Health LLC [<a href="#B48-diagnostics-12-00317" class="html-bibr">48</a>].</p>
Figure 3
<p>Wong–Baker facial pain measurement tool [<a href="#B49-diagnostics-12-00317" class="html-bibr">49</a>].</p>
Figure 4
<p>Example of facial expressions from which the user selects the one closest to his or her emotions [<a href="#B50-diagnostics-12-00317" class="html-bibr">50</a>].</p>
Figure 5
<p>Conceptual diagram of the service of the proposed system.</p>
Figure 6
<p>Facial emotion recognition device and method for identifying emotions [<a href="#B33-diagnostics-12-00317" class="html-bibr">33</a>].</p>
Figure 7
<p>Block diagram for detecting the positions of the eyes and lips proposed in the study of Lee Jeong-hwan (2018) [<a href="#B51-diagnostics-12-00317" class="html-bibr">51</a>].</p>
Figure 8
<p>Suggestions to assist in diagnosing depression using a chatbot.</p>
19 pages, 5080 KiB  
Article
Design of a Diagnostic System for Patient Recovery Based on Deep Learning Image Processing: For the Prevention of Bedsores and Leg Rehabilitation
by Donggyu Choi and Jongwook Jang
Diagnostics 2022, 12(2), 273; https://doi.org/10.3390/diagnostics12020273 - 21 Jan 2022
Cited by 3 | Viewed by 3576
Abstract
Worldwide COVID-19 infections have caused various problems in different countries. In Korea, the demand for medical care with respect to wards and doctors is a serious problem that was already slowly worsening before the COVID-19 pandemic. In this paper, we propose a direction for developing a system that combines artificial intelligence technology with limited areas of the rehabilitation medical field that do not require high expertise and that should be improved in Korea, namely the prevention of bedsores and leg rehabilitation methods. Because medical and related laws and regulations on the introduction of artificial intelligence technology were quite limited, the actual needs of domestic rehabilitation doctors and advice on the hospital environment were obtained. Satisfaction with the test content was high, the degree of provision of important medical data was 95%, and the angular error was within 5 degrees, which is suitable for confirming recovery. Full article
(This article belongs to the Special Issue Artificial Intelligence Approaches for Medical Diagnostics in Korea)
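The joint-angle measurement the system relies on can be sketched as the angle at the middle keypoint of three pose-estimation landmarks (e.g., hip–knee–ankle). This is a generic numpy illustration under our own assumptions, not the authors' implementation:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c, e.g. hip-knee-ankle."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clip guards against floating-point values just outside [-1, 1]
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical 2D keypoints from a pose estimator: hip, knee, ankle
angle = joint_angle((0, 0), (0, 1), (1, 1))  # a right angle at the knee
```

In the paper's setting, the keypoints would come from a human pose estimation model and the resulting angle sequence would be compared against the 5-degree error threshold reported above.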
Figure 1
<p>The image of VUNO’s artificial intelligence diagnosis program.</p>
Figure 2
<p>Various human pose estimation technology images.</p>
Figure 3
<p>Accuracy by deep learning model using the MPII human pose dataset [<a href="#B30-diagnostics-12-00273" class="html-bibr">30</a>].</p>
Figure 4
<p>Image of how to measure joint angles and general methods.</p>
Figure 5
<p>Flow chart of the system to prevent bedsores.</p>
Figure 6
<p>An image indicating a method of detecting the rotation direction of the hip joint based on the spine.</p>
Figure 7
<p>Flow chart of the process performed to check the joint angle of the leg.</p>
Figure 8
<p>A flowchart of a method of stabilizing data when detecting an object.</p>
Figure 9
<p>The image of the hospital room and patient in various positions from different directions.</p>
Figure 10
<p>Measurement image of the sitting motion.</p>
Figure 11
<p>The knee angle data of a normal person’s sitting motion.</p>
Figure 12
<p>The hip bone angle data of a normal person’s sitting motion.</p>
Figure 13
<p>The angle data of patient A’s sitting motion.</p>
Figure 14
<p>The angle data of patient B’s sitting motion.</p>
Figure 15
<p>Measurement image of the motion of rotating the body while lying down.</p>
Figure 16
<p>The hip joint angle data of the body-rotation motion of an average healthy person.</p>
Figure 17
<p>The hip joint angle data of the body-rotation motion of a patient.</p>
Figure 18
<p>ROC curves and joint-data learning loss curves for 50 joint angle data based on error values exceeding 5 degrees.</p>
19 pages, 1830 KiB  
Article
Development and Validation of an Insulin Resistance Predicting Model Using a Machine-Learning Approach in a Population-Based Cohort in Korea
by Sunmin Park, Chaeyeon Kim and Xuangao Wu
Diagnostics 2022, 12(1), 212; https://doi.org/10.3390/diagnostics12010212 - 16 Jan 2022
Cited by 24 | Viewed by 3523
Abstract
Background: Insulin resistance is a common etiology of metabolic syndrome, but receiver operating characteristic (ROC) curve analysis shows a weak association in Koreans. Using a machine learning (ML) approach, we aimed to generate the best model for predicting insulin resistance in Korean adults aged over 40 years in the Ansan/Ansung cohort. Methods: The demographic, anthropometric, biochemical, genetic, nutrient, and lifestyle variables of 8842 participants were included. Polygenetic risk scores (PRS) generated by a genome-wide association study were added to represent the genetic impact on insulin resistance. The participants were divided randomly into the training (n = 7037) and test (n = 1769) sets. Potentially important features were selected from 99 features by the highest area under the curve (AUC) of the ROC curve using seven different ML algorithms. The AUC target was ≥0.85 for the best prediction of insulin resistance with the lowest number of features. Results: The cutoff for insulin resistance defined with HOMA-IR was 2.31, determined using logistic regression before conducting ML. The XGBoost and logistic regression algorithms generated the highest AUC (0.86) of the prediction models using 99 features, while the random forest algorithm generated a model with an AUC of 0.82. These models showed high accuracy and k-fold values (>0.85). The prediction model containing 15 features had the highest AUC of the ROC curve in the XGBoost and random forest algorithms. PRS was one of the 15 features. The final prediction models for insulin resistance were generated with the same nine features in the XGBoost (AUC = 0.86), random forest (AUC = 0.84), and artificial neural network (AUC = 0.86) algorithms. The model included fasting serum glucose, ALT, total bilirubin, and HDL concentrations, waist circumference, body fat, pulse, season of enrollment in the study, and gender. Conclusion: Liver function, regular pulse checking, and seasonal variation, in addition to metabolic syndrome components, should be considered to predict insulin resistance in Koreans aged over 40 years. Full article
(This article belongs to the Special Issue Artificial Intelligence Approaches for Medical Diagnostics in Korea)
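HOMA-IR, the insulin resistance index used in this study, is conventionally computed as fasting glucose (mg/dL) × fasting insulin (µU/mL) / 405. A minimal sketch applying the study's reported 2.31 cutoff (the function names are ours, not the authors'):

```python
def homa_ir(glucose_mg_dl, insulin_uU_ml):
    """Homeostasis model assessment of insulin resistance (conventional units)."""
    return glucose_mg_dl * insulin_uU_ml / 405.0

CUTOFF = 2.31  # cutoff for insulin resistance reported in the study

def is_insulin_resistant(glucose_mg_dl, insulin_uU_ml):
    """Binary label used as the prediction target for the ML models."""
    return homa_ir(glucose_mg_dl, insulin_uU_ml) >= CUTOFF

value = homa_ir(100, 10)  # fasting glucose 100 mg/dL, insulin 10 µU/mL
```

In the paper's pipeline, this binary label would then be predicted from the nine selected features (glucose, ALT, bilirubin, HDL, waist circumference, body fat, pulse, season, and gender).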
Figure 1
<p>Analysis process to generate a prediction model in the participants. (<b>A</b>) A total of 8842 adults participated, and 99 features were selected manually from 1411 in the Ansan/Ansung cohort to predict the insulin resistance model using seven machine learning (ML) algorithms. Missing data were filled with the mean values for continuous variables and the mode values for categorical variables. Data were normalized using the z-score. HOMA-IR was used as an indirect insulin resistance index, and 2.31 was used as the cutoff for participants of both genders. The prediction models for insulin resistance were generated using seven ML algorithms. (<b>B</b>) The Ansan/Ansung cohort participants were randomly divided into a training set of 80% and a test set of 20% of participants. The best model was selected with a random grid search after 1000 repetitions in seven different ML algorithms, including linear regression, support vector machines (SVM), XGBoost (XGB), decision tree, random forest, K-nearest neighbor (KNN), and artificial neural network (ANN). The best prediction model was selected using the AUC of the ROC. The accuracy and k-fold cross-validation of the predicted models were assessed in the test set.</p>
Figure 2
<p>Receiver operating characteristic (ROC) curve with insulin resistance and metabolic syndrome components for the metabolic syndrome risk.</p>
Figure 3
<p>The relative importance of the top 15 features for predicting insulin resistance (IR), as determined by the XGBoost and random forest algorithms. (<b>a</b>) IR prediction model by the XGBoost algorithm. (<b>b</b>) IR prediction model by the random forest algorithm. ALT, alanine aminotransferase; HbA1c, hemoglobin A1c; γ-GTP, γ-glutamyl transpeptidase; HDL, high-density lipoprotein; CRP, high-sensitivity C-reactive protein; PRS, polygenetic risk scores.</p>
Figure 4
<p>Positive and negative impact explanation of the top 15 features for predicting insulin resistance (IR) using SHAP values. (<b>a</b>) Impact of each feature on the IR prediction model by SHAP values in the XGBoost algorithm. (<b>b</b>) Impact of each feature on the IR prediction model by SHAP values in the random forest algorithm. ALT, alanine aminotransferase; HbA1c, hemoglobin A1c; γ-GTP, γ-glutamyl transpeptidase; HDL, high-density lipoprotein; CRP, high-sensitivity C-reactive protein; PRS, polygenetic risk scores.</p>
Figure 5
<p>The relative importance of the top nine features for insulin resistance (IR) prediction, as determined by the XGBoost and random forest algorithms. (<b>a</b>) IR prediction model with the top nine features using the XGBoost algorithm. (<b>b</b>) IR prediction model with the top nine features using the random forest algorithm. (<b>c</b>) Impact of each feature on the IR prediction model by SHAP values using the XGBoost algorithm.</p>
Figure 6
<p>Summary of the main findings exploring the prediction model for insulin resistance. HOMA-IR, homeostasis model assessment of insulin resistance.</p>
14 pages, 3220 KiB  
Article
A Novel Multistage Transfer Learning for Ultrasound Breast Cancer Image Classification
by Gelan Ayana, Jinhyung Park, Jin-Woo Jeong and Se-woon Choe
Diagnostics 2022, 12(1), 135; https://doi.org/10.3390/diagnostics12010135 - 6 Jan 2022
Cited by 78 | Viewed by 6820
Abstract
Breast cancer diagnosis is one of the many areas that have taken advantage of artificial intelligence to achieve better performance, despite the fact that the availability of a large medical image dataset remains a challenge. Transfer learning (TL) is a technique that enables deep learning algorithms to overcome the shortage of training data in constructing an efficient model by transferring knowledge from a given source task to a target task. However, in most cases, ImageNet (natural image) pre-trained models, which do not include medical images, are utilized for transfer learning to medical images. Considering that microscopic cancer cell line images can be acquired in large amounts, we argue that learning from both natural and medical datasets improves performance in ultrasound breast cancer image classification. The proposed multistage transfer learning (MSTL) algorithm was implemented using three pre-trained models (EfficientNetB2, InceptionV3, and ResNet50) with three optimizers (Adam, Adagrad, and stochastic gradient descent (SGD)). Datasets of 20,400 cancer cell images, 200 ultrasound images from Mendeley, and 400 ultrasound images from the MT-Small-Dataset were used. ResNet50-Adagrad-based MSTL achieved a test accuracy of 99 ± 0.612% on the Mendeley dataset and 98.7 ± 1.1% on the MT-Small-Dataset, averaged over 5-fold cross-validation. A p-value of 0.01191 was achieved when comparing MSTL against ImageNet-based TL on the Mendeley dataset. The result is a significant improvement in the performance of artificial intelligence methods for ultrasound breast cancer classification compared to state-of-the-art methods and could remarkably improve the early diagnosis of breast cancer in young women. Full article
(This article belongs to the Special Issue Artificial Intelligence Approaches for Medical Diagnostics in Korea)
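The core idea of warm-starting a target model from source-task weights can be illustrated in miniature with logistic regression: train on a plentiful "source" task, then continue training from those weights on a scarce "target" task. This is a conceptual sketch with synthetic data, not the paper's CNN pipeline:

```python
import numpy as np

def train_logreg(X, y, w=None, lr=0.5, epochs=200):
    """Gradient-descent logistic regression; pass w to warm-start (transfer)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # add bias column
    if w is None:
        w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30)))
        w = w - lr * Xb.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return float((((Xb @ w) > 0).astype(int) == y).mean())

rng = np.random.default_rng(0)
# "Source" task: plentiful data (stands in for ImageNet / cell-line images)
ys = rng.integers(0, 2, 400)
Xs = rng.normal(size=(400, 2)) + 2.0 * ys[:, None]
# "Target" task: scarce data (stands in for the small ultrasound datasets)
yt = rng.integers(0, 2, 20)
Xt = rng.normal(size=(20, 2)) + 2.0 * yt[:, None]

w_src = train_logreg(Xs, ys)                       # stage 1: pre-train on source
w_tl = train_logreg(Xt, yt, w=w_src, epochs=50)    # stage 2: fine-tune on target
```

The paper's multistage variant adds an intermediate stage (ImageNet → cell-line images → ultrasound), but the warm-start mechanic is the same: each stage initializes from the previous stage's weights instead of from scratch.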
Figure 1
<p>Multistage transfer learning for early diagnosis of breast cancer using ultrasound.</p>
Figure 2
<p>(<b>a</b>) Cancer cell image acquisition and pre-processing: (<b>i</b>) acquired HeLa cell image, (<b>ii</b>) binary image, (<b>iii</b>) image segmentation, and (<b>iv</b>) extracted image for training. (<b>b</b>) Representative Mendeley breast ultrasound images.</p>
Figure 3
<p>CNN models at each stage of transfer learning. (<b>a</b>) Original ImageNet pre-trained model. (<b>b</b>) ImageNet pre-trained model transfer-learned to cell line images. (<b>c</b>) ImageNet- and cell-line-pre-trained model transfer-learned to ultrasound images. Conv: Convolution; TL: Transfer Learning; Norm: Normalization.</p>
Figure 4
<p>ROC curve comparison. (<b>a</b>) Multistage transfer learning. (<b>b</b>) Conventional transfer learning. SGD: Stochastic gradient descent.</p>
Figure 5
<p>(<b>Left</b>) The effect of optimizer choice on the performance of multistage transfer learning. (<b>Right</b>) The effect of CNN model choice on the performance of multistage transfer learning. SGD: stochastic gradient descent.</p>
Figure 6
<p>Feature extraction comparison via feature visualization of the five convolution layers of ResNet50 with the Adagrad optimizer for MSTL and CTL. Conv: convolution; MSTL: multistage transfer learning; CTL: conventional transfer learning.</p>
13 pages, 1234 KiB  
Article
Prediction of Bacteremia Based on 12-Year Medical Data Using a Machine Learning Approach: Effect of Medical Data by Extraction Time
by Kyoung Hwa Lee, Jae June Dong, Subin Kim, Dayeong Kim, Jong Hoon Hyun, Myeong-Hun Chae, Byeong Soo Lee and Young Goo Song
Diagnostics 2022, 12(1), 102; https://doi.org/10.3390/diagnostics12010102 - 3 Jan 2022
Cited by 10 | Viewed by 2337
Abstract
Early detection of bacteremia is important to prevent antibiotic abuse. Therefore, we aimed to develop a clinically applicable bacteremia prediction model using machine learning technology. Data from two tertiary medical centers’ electronic medical records over a 12-year period were extracted. Multi-layer perceptron (MLP), random forest, and gradient boosting algorithms were applied for the machine learning analysis. Clinical data within 12 and 24 h of blood culture were analyzed and compared. Out of 622,771 blood cultures, 38,752 episodes of bacteremia were identified. In the MLP with 128 hidden-layer nodes, the area under the receiver operating characteristic curve (AUROC) of the prediction performance in the 12- and 24-h data models was 0.762 (95% confidence interval (CI), 0.7617–0.7623) and 0.753 (95% CI, 0.7520–0.7529), respectively. In the causative-pathogen subgroup analysis, the AUROC was highest for Acinetobacter baumannii bacteremia, at 0.839 (95% CI, 0.8388–0.8394). Compared to primary bacteremia, the AUROC of sepsis caused by pneumonia was the highest. The predictive performance for bacteremia was superior in younger age groups. Bacteremia prediction using machine learning technology appeared possible for acute infectious diseases. This model was especially suitable for pneumonia caused by Acinetobacter baumannii. From the 24-h blood culture data, bacteremia was predictable by substituting only the continuously variable values. Full article
(This article belongs to the Special Issue Artificial Intelligence Approaches for Medical Diagnostics in Korea)
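The AUROC values reported above can be computed directly from prediction scores via the Mann–Whitney U statistic: the probability that a randomly chosen positive case outranks a randomly chosen negative one. A minimal numpy sketch (illustrative scores, not the study's data):

```python
import numpy as np

def auroc(scores, labels):
    """AUROC as the Mann-Whitney U statistic: the probability that a
    positive case outranks a negative case, counting ties as half."""
    s = np.asarray(scores, dtype=float)
    y = np.asarray(labels, dtype=int)
    pos, neg = s[y == 1], s[y == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return float(wins / (len(pos) * len(neg)))

# Toy example: model scores for four blood cultures, 1 = bacteremia
value = auroc([0.9, 0.6, 0.4, 0.2], [1, 0, 1, 0])
```

The pairwise comparison here is O(n²); production code on 600k cultures would use a rank-based O(n log n) formulation, but the result is identical.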
Figure 1
<p>Flow chart of the study population.</p>
Figure 2
<p>Area under the receiver operating characteristic curve of the bacteremia prediction. (<b>A</b>) Type of pathogen (12-h vs. 24-h model); (<b>B</b>) Site of infection (12-h vs. 24-h model); (<b>C</b>) Age (12-h vs. 24-h model); (<b>D</b>) Sex (12-h vs. 24-h model); (<b>E</b>) Merge hour (24-h model).</p>
11 pages, 2058 KiB  
Article
Deep Learning-Based Four-Region Lung Segmentation in Chest Radiography for COVID-19 Diagnosis
by Young-Gon Kim, Kyungsang Kim, Dufan Wu, Hui Ren, Won Young Tak, Soo Young Park, Yu Rim Lee, Min Kyu Kang, Jung Gil Park, Byung Seok Kim, Woo Jin Chung, Mannudeep K. Kalra and Quanzheng Li
Diagnostics 2022, 12(1), 101; https://doi.org/10.3390/diagnostics12010101 - 3 Jan 2022
Cited by 15 | Viewed by 3659
Abstract
Imaging plays an important role in assessing the severity of COVID-19 pneumonia. Recent COVID-19 research indicates that disease progression propagates from the bottom of the lungs to the top. However, chest radiography (CXR) cannot directly provide a quantitative metric of radiographic opacities, and existing AI-assisted CXR analysis methods do not quantify regional severity. In this paper, to assist regional analysis, we developed a fully automated framework using deep learning-based four-region segmentation and detection models to assist the quantification of COVID-19 pneumonia. Specifically, a segmentation model is first applied to separate the left and right lungs, and then a detection network for the carina and left hilum is used to separate the upper and lower lungs. To improve the segmentation performance, an ensemble strategy with five models is exploited. We evaluated the clinical relevance of the proposed method against the radiographic assessment of lung edema (RALE) annotated by physicians. The mean intensities of the four segmented regions indicate a positive correlation to the regional extent and density scores of pulmonary opacities based on the RALE. Therefore, the proposed method can accurately assist the quantification of regional pulmonary opacities in COVID-19 pneumonia patients. Full article
(This article belongs to the Special Issue Artificial Intelligence Approaches for Medical Diagnostics in Korea)
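The five-model ensemble by majority voting described above can be sketched as a pixel-wise vote over binary lung masks. The tiny 2×2 masks below are illustrative, not the paper's data:

```python
import numpy as np

def majority_vote(masks):
    """Pixel-wise majority vote over an odd number of binary lung masks."""
    stack = np.stack([np.asarray(m, dtype=int) for m in masks])
    # A pixel is foreground if more than half of the models agree
    return (stack.sum(axis=0) > len(masks) // 2).astype(np.uint8)

masks = [np.array([[1, 1], [0, 1]]),
         np.array([[1, 0], [0, 1]]),
         np.array([[1, 1], [0, 0]]),
         np.array([[0, 1], [1, 1]]),
         np.array([[1, 1], [0, 1]])]
fused = majority_vote(masks)
```

Voting over an odd number of models guarantees no ties; the paper then splits the fused mask into the four regions using the detected carina and left hilum.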
Figure 1
<p>A flowchart of the proposed algorithm for segmentation of lung zones in the CXR of a COVID-19 patient. Right (R) and left (L) lung masks are generated by an ensemble method based on majority voting over five lung masks predicted by models trained under different conditions. Then, the left hilum and carina are detected and used to find a central point to split the whole lung into upper and lower regions. Finally, the right upper lung (RUR), right lower lung (RLR), left upper lung (LUR), and left lower lung (LLR) are obtained.</p>
Figure 2
<p>With the same density score of zero annotated by a physician, the mean intensities of the lungs in the CXRs are (<b>a</b>) 39.8, (<b>b</b>) 34.4, (<b>c</b>) 16.6, and (<b>d</b>) 13.2, respectively.</p>
Figure 3
<p>An example of the advantages of the ensemble method for CXRs of different quality. The first to last rows in each column show an input CXR (<b>a</b>,<b>b</b>,<b>c-1</b>), a ground truth mask (<b>a</b>,<b>b</b>,<b>c-2</b>), an ensemble result (<b>a</b>,<b>b</b>,<b>c-3</b>), and the five results predicted by the first to fifth models. (<b>a-1</b>) A clear CXR showing neither severe noise from a portable device nor obstacles such as medical devices, (<b>a-2</b>) a lung mask of (<b>a-1</b>), (<b>a-3</b>) an ensemble mask from the first to the fifth masks (<b>a-4</b>–<b>a-8</b>). Dice coefficients of (<b>a-3</b>–<b>a-8</b>) are 0.955, 0.928, 0.912, 0.948, 0.948, and 0.948, respectively. (<b>b-1</b>) A CXR showing severe blurring within both lung regions due to lung opacity, (<b>b-2</b>) a lung mask of (<b>b-1</b>), (<b>b-3</b>) an ensemble mask from the first to the fifth masks (<b>b-4</b>–<b>b-8</b>). Dice coefficients of (<b>b-3</b>–<b>b-8</b>) are 0.955, 0.928, 0.912, 0.948, 0.948, and 0.948, respectively. (<b>c-1</b>) A CXR showing severe noise generated by a portable device, (<b>c-2</b>) a lung mask of (<b>c-1</b>), (<b>c-3</b>) an ensemble mask from the first to the fifth masks (<b>c-4</b>–<b>c-8</b>). Dice coefficients of (<b>c-3</b>–<b>c-8</b>) are 0.899, 0.783, 0.885, 0.883, 0.879, and 0.903, respectively.</p>
Figure 4
<p>An example of detection results for the left hilum (colored in red) and carina (colored in green), dividing the segmented lung mask into upper and lower lungs, i.e., RUR, LUR, RLR, and LLR, with a reference point colored in white. (<b>a</b>) Detection results for the left hilum (confidence: 0.94) and the carina (0.98). (<b>b</b>) The center point of the detection box for the left hilum is used as the reference point to divide the upper and lower lungs. (<b>c</b>) Detection results for the left hilum (0.56) and carina (0.95). (<b>d</b>) A location approximately 2 cm vertically below the center point of the detection box for the carina is used as the reference point to divide the upper and lower lungs.</p>
Figure 5
<p>Boxplots of mean intensities with extent scores (0–4) and density scores (0–3) of pulmonary opacities for the four regions. (<b>a</b>,<b>e</b>) RUR, (<b>b</b>,<b>f</b>) LUR, (<b>c</b>,<b>g</b>) RLR, (<b>d</b>,<b>h</b>) LLR. For each region, the mean intensity increased as the extent and density scores increased.</p>
Figure 6
<p>Boxplots of mean intensities for the four regions. The mean intensity of the LLR, where the heart is not included in the segmentation algorithm, is lower than the intensities of the other regions.</p>
23 pages, 9887 KiB  
Article
Cluster Analysis of Cell Nuclei in H&E-Stained Histological Sections of Prostate Cancer and Classification Based on Traditional and Modern Artificial Intelligence Techniques
by Subrata Bhattacharjee, Kobiljon Ikromjanov, Kouayep Sonia Carole, Nuwan Madusanka, Nam-Hoon Cho, Yeong-Byn Hwang, Rashadul Islam Sumon, Hee-Cheol Kim and Heung-Kook Choi
Diagnostics 2022, 12(1), 15; https://doi.org/10.3390/diagnostics12010015 - 22 Dec 2021
Cited by 5 | Viewed by 4784
Abstract
Biomarker identification is very important to differentiate the grade groups in histopathological sections of prostate cancer (PCa). Assessing the clustering of cell nuclei is essential for pathological investigation. In this study, we present a computer-based method for cluster analyses of cell nuclei and apply traditional (i.e., unsupervised) and modern (i.e., supervised) artificial intelligence (AI) techniques for distinguishing the grade groups of PCa. Two datasets on PCa were collected to carry out this research. Histopathology samples were obtained from whole slides stained with hematoxylin and eosin (H&E). In this research, state-of-the-art approaches were proposed for color normalization, cell nuclei segmentation, feature selection, and classification. A traditional minimum spanning tree (MST) algorithm was employed to identify the clusters and better capture the proliferation and community structure of cell nuclei. K-medoids clustering and stacked ensemble machine learning (ML) approaches were used to perform traditional and modern AI-based classification. Binary and multiclass classifications were derived to compare the model quality and results between the grades of PCa. Furthermore, a comparative analysis was carried out between traditional and modern AI techniques using different performance metrics (i.e., statistical parameters). Cluster features of the cell nuclei can be useful information for cancer grading. However, further validation of the cluster analysis is required to accomplish outstanding classification results. Full article
(This article belongs to the Special Issue Artificial Intelligence Approaches for Medical Diagnostics in Korea)
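The MST-based clustering the authors describe, building a minimum spanning tree over nuclei centroids and then cutting its longest ("inconsistent") edges, can be sketched as follows. The coordinates are toy values and the code is our own illustration, not the paper's implementation:

```python
import numpy as np

def mst_edges(points):
    """Prim's algorithm on the complete Euclidean graph of nuclei centroids."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    in_tree, edges = [0], []
    while len(in_tree) < n:
        best = None
        for u in in_tree:
            for v in range(n):
                if v not in in_tree and (best is None or dist[u, v] < best[2]):
                    best = (u, v, dist[u, v])
        edges.append(best)
        in_tree.append(best[1])
    return edges

def mst_clusters(points, k):
    """Split the MST into k clusters by dropping its k-1 longest edges."""
    kept = sorted(mst_edges(points), key=lambda e: e[2])[: len(points) - k]
    # Connected components over the kept edges (union-find)
    parent = list(range(len(points)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v, _ in kept:
        parent[find(u)] = find(v)
    return [find(i) for i in range(len(points))]

# Two well-separated groups of toy nuclei centroids
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11)]
labels = mst_clusters(pts, 2)
```

Dropping the k-1 longest MST edges is the classical Zahn-style clustering criterion; the paper's "removal of inconsistent edges" (Figure 6) follows the same idea.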
Figure 1
<p>Histologic findings for each grade of prostate cancer. (<b>a</b>–<b>c</b>) Dataset 1: grade 3, grade 4, and grade 5, respectively. (<b>d</b>–<b>f</b>) Dataset 2: grade 3, grade 4, and grade 5, respectively.</p>
Figure 2
<p>Analytical pipeline for the cluster analysis and AI classification of cancer grades observed in histological sections.</p>
Figure 3
<p>Stain normalization. (<b>a</b>) Raw image. (<b>b</b>) Reference image. (<b>c</b>) Normalized image.</p>
Figure 4
<p>Stain deconvolution. (<b>a</b>) Normalized image. (<b>b</b>) Hematoxylin channel. (<b>c</b>) Eosin channel.</p>
Figure 5
<p>The complete process for nuclear segmentation of cancer cells. (<b>a</b>) Hematoxylin channel extracted after performing stain deconvolution. (<b>b</b>) HSI color space converted from (<b>a</b>). (<b>c</b>) Saturation channel extracted from (<b>b</b>). (<b>d</b>) Contrast-adjusted image extracted from (<b>c</b>). (<b>e</b>) Binary image after applying global thresholding to (<b>d</b>). (<b>f</b>) Nuclei segmentation after applying the watershed algorithm to (<b>e</b>). Some small objects and artifacts were removed before and after applying the watershed algorithm.</p>
Figure 6
<p>Examples of MST cluster analysis. (<b>a</b>) An MST based on the minimum distances between vertex coordinates. The red dashed lines indicate the removal of inconsistent edges. (<b>b</b>) An intra-cluster MST obtained after removal of the nine longest edges from (<b>a</b>); the red circles indicate inter- and intra-cluster similarity. (<b>c</b>) The inter-cluster MST obtained from (<b>b</b>).</p>
Figure 7
<p>Flow chart of MST construction.</p>
Figure 8
<p>Machine learning stacking-based ensemble classification. The data were scaled before training and testing. The classification was carried out in two steps: initial and final predictions using base and meta classifiers, respectively.</p>
Figure 9
<p>Confusion matrices of the supervised and unsupervised classification using the test and whole datasets, respectively. (<b>a</b>,<b>b</b>) Confusion matrices of multiclass and binary classification using the supervised ensemble technique based upon test splits 1 and 2 in <a href="#diagnostics-12-00015-t003" class="html-table">Table 3</a>A, respectively. (<b>c</b>,<b>d</b>) Confusion matrices of multiclass and binary classification using the unsupervised technique based upon data split 2, respectively.</p>
Figure 10
<p>Bar charts of the accuracy scores of unsupervised and supervised classifications. (<b>a</b>) Multiclass classification. (<b>b</b>) Binary classification. The performance of each PCa grade was obtained from the confusion matrices.</p>
Figure 11
<p>Bar chart of the overall performance scores of supervised and unsupervised classifications.</p>
Figure 12
<p>Visualization of intra- and inter-cluster MST graphs. (<b>a</b>–<b>c</b>) The intra-cluster MSTs of grade 3, grade 4, and grade 5, respectively. (<b>d</b>–<b>f</b>) The inter-cluster MSTs generated from (<b>a</b>–<b>c</b>), respectively. The dotted red circles indicate clusters of cell nuclei. Differently colored lines in (<b>a</b>–<b>c</b>) and (<b>d</b>–<b>f</b>) indicate intra- and inter-clusters, respectively.</p>
Figure A1
<p>Prostate adenocarcinoma with Gleason scores 4 and 3 annotated in red and blue, respectively.</p>
Figure A2
<p>Prostate adenocarcinoma with Gleason score 4 annotated in red.</p>
Figure A3
<p>Prostate adenocarcinoma with Gleason scores 5 and 4 annotated in orange and red, respectively.</p>
Figure A4
<p>The proliferation and community structure of cell nuclei in the annotated region of grade 3.</p>
Figure A5
<p>The proliferation and community structure of cell nuclei in the annotated region of grade 4.</p>
Figure A6
<p>The proliferation and community structure of cell nuclei in the annotated region of grade 5.</p>
11 pages, 951 KiB  
Article
Development of a Machine Learning Model to Distinguish between Ulcerative Colitis and Crohn’s Disease Using RNA Sequencing Data
by Soo-Kyung Park, Sangsoo Kim, Gi-Young Lee, Sung-Yoon Kim, Wan Kim, Chil-Woo Lee, Jong-Lyul Park, Chang-Hwan Choi, Sang-Bum Kang, Tae-Oh Kim, Ki-Bae Bang, Jaeyoung Chun, Jae-Myung Cha, Jong-Pil Im, Kwang-Sung Ahn, Seon-Young Kim and Dong-Il Park
Diagnostics 2021, 11(12), 2365; https://doi.org/10.3390/diagnostics11122365 - 15 Dec 2021
Cited by 13 | Viewed by 3361
Abstract
Crohn’s disease (CD) and ulcerative colitis (UC) can be difficult to differentiate. As differential diagnosis is important in establishing a long-term treatment plan for patients, we aimed to develop a machine learning model for the differential diagnosis of the two diseases using RNA [...] Read more.
Crohn’s disease (CD) and ulcerative colitis (UC) can be difficult to differentiate. As differential diagnosis is important in establishing a long-term treatment plan for patients, we aimed to develop a machine learning model for the differential diagnosis of the two diseases using RNA sequencing (RNA-seq) data from endoscopic biopsy tissue from patients with inflammatory bowel disease (n = 127; CD, 94; UC, 33). Biopsy samples were taken from inflammatory lesions or normal tissues. The RNA-seq dataset was processed via mapping to the human reference genome (GRCh38) and quantifying the corresponding gene models, which comprised 19,596 protein-coding genes. An unsupervised learning model showed distinct clusters of four classes: CD inflammatory, CD normal, UC inflammatory, and UC normal. A supervised learning model based on partial least squares discriminant analysis was able to distinguish inflammatory CD from inflammatory UC after pruning the strong classifiers of normal CD vs. normal UC. The error rate was minimized with a two-component model comprising 20 and 50 genes in the first and second components, respectively; the corresponding overall error rate was 0.147. RNA-seq analysis of biopsy tissue, together with the two components identified in this study, may be helpful for distinguishing CD from UC. Full article
(This article belongs to the Special Issue Artificial Intelligence Approaches for Medical Diagnostics in Korea)
Figure 1
<p>Principal component analysis plot of IBD samples. Normalized log-transformed expression values from edgeR were used in the PCA up to 10 components. The first two components that explained 48% of the total variance are shown in the plot drawn with the Bioconductor R package <span class="html-italic">mixOmics</span>.</p>
Full article ">Figure 2
<p>KEGG pathways enriched in the lists of differentially expressed genes (DEGs). The four panels correspond to the DEGs of inflammatory CD vs. normal CD (labeled “CD”), inflammatory UC vs. normal UC (labeled “UC”), inflammatory CD vs. inflammatory UC (labeled “Inflamed”), and normal CD vs. normal UC (labeled “Normal”). The horizontal bars represent the -log10(FDR) values from the pathway enrichment analysis calculated with the DAVID web service (red for upregulated DEGs, and blue for downregulated DEGs). Note that the lists are shown for FDR &lt; 0.05. There were no significantly enriched pathways for the downregulated genes in either the “Inflamed” or the “Normal” panel. See <a href="#app1-diagnostics-11-02365" class="html-app">Supplementary Table S3</a> for details of DEGs categorized into each pathway.</p>
Full article ">Figure 3
<p>The clustering diagrams of the final sparse partial least-squares discriminant analysis (sPLS-DA) model that classifies inflammatory CD vs. inflammatory UC. (<b>a</b>) The PLS projection onto the subspace spanned by the two components. The ellipses for each class represent 95% confidence level of discrimination. (<b>b</b>) The heatmap hierarchical clustering of 70 genes (rows) and 49 samples (columns). The bottom 20 and top 30 genes belong to component 2, while the middle 20 genes belong to component 1. All plots were drawn with the Bioconductor R package <span class="html-italic">mixOmics</span>.</p>
Figure 3 Cont.">
Full article ">
12 pages, 2323 KiB  
Article
Intelligent Automatic Segmentation of Wrist Ganglion Cysts Using DBSCAN and Fuzzy C-Means
by Kwang Baek Kim, Doo Heon Song and Hyun Jun Park
Diagnostics 2021, 11(12), 2329; https://doi.org/10.3390/diagnostics11122329 - 10 Dec 2021
Cited by 4 | Viewed by 2322
Abstract
Ganglion cysts are common soft tissue masses of the hand and wrist, and small cysts are often hypoechoic. Thus, identifying them in ultrasonography is not an easy problem. In this paper, we propose an automatic segmentation method using two artificial intelligence algorithms [...] Read more.
Ganglion cysts are common soft tissue masses of the hand and wrist, and small cysts are often hypoechoic. Thus, identifying them in ultrasonography is not an easy problem. In this paper, we propose an automatic segmentation method using two artificial intelligence algorithms in sequence. A density-based unsupervised learning algorithm called DBSCAN is performed as a front end, and its result determines the number of clusters used in the Fuzzy C-Means (FCM) clustering algorithm for quantification of the ganglion cyst object. In an experiment using 120 images, the proposed method shows a higher extraction rate (89.2%) and a lower false positive rate compared with FCM when the ground truth is set as the human expert’s decision. Such human-like behavior is more apparent when the ganglion cyst is small and the quality of the ultrasonography is often not very high. With this fully automatic segmentation method, the operator subjectivity that is highly dependent on the experience of the ultrasound examiner can be mitigated with high reliability. Full article
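The two-stage DBSCAN-then-FCM pipeline can be sketched as follows. This is a minimal illustration on synthetic one-dimensional intensity data, with a small textbook FCM implementation rather than the authors' code:

```python
# Minimal sketch of the two-stage pipeline described above: DBSCAN
# first estimates the number of clusters, which then fixes C for
# Fuzzy C-Means (fuzzifier m = 2). Data are synthetic intensities.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
# Synthetic pixel intensities: three well-separated intensity groups.
x = np.concatenate([rng.normal(30, 2, 100),
                    rng.normal(120, 2, 100),
                    rng.normal(200, 2, 100)]).reshape(-1, 1)

# Stage 1: DBSCAN decides how many clusters exist (noise label -1 excluded).
labels = DBSCAN(eps=5.0, min_samples=5).fit_predict(x)
n_clusters = len(set(labels) - {-1})

# Stage 2: Fuzzy C-Means with C taken from DBSCAN.
def fcm(data, c, m=2.0, n_iter=100):
    # Initialize centers at evenly spaced quantiles of the data.
    centers = np.quantile(data, (np.arange(c) + 0.5) / c)[:, None]
    for _ in range(n_iter):
        d = np.abs(data - centers.T) + 1e-12      # distances to centers
        u = 1.0 / (d ** (2 / (m - 1)))            # membership update
        u /= u.sum(axis=1, keepdims=True)
        um = u ** m
        centers = (um.T @ data) / um.sum(axis=0)[:, None]
    return centers.ravel(), u

centers, memberships = fcm(x, n_clusters)
print(n_clusters, np.sort(centers).round(1))
```

The key design point from the abstract is that FCM never has to guess its cluster count: DBSCAN's density-based result supplies it.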
(This article belongs to the Special Issue Artificial Intelligence Approaches for Medical Diagnostics in Korea)
Figure 1
<p>Trapezoidal membership function.</p>
Full article ">Figure 2
<p>The effect of fuzzy stretching and noise reduction. (<b>a</b>) Input image, (<b>b</b>) after stretching, (<b>c</b>) noise reduction (yellow).</p>
Full article ">Figure 3
<p>Histogram analysis for DBSCAN algorithm.</p>
Full article ">Figure 4
<p>Cases of DBSCAN quantifications. (<b>a</b>) Input image 1; (<b>b</b>) input image 2; (<b>c</b>) input image 3; (<b>d</b>) #-of-clusters = 5; (<b>e</b>) dark cluster; (<b>f</b>) one cluster.</p>
Full article ">Figure 5
<p>Overall clustering and quantization process.</p>
Full article ">Figure 6
<p>Ganglion cyst extraction process. (<b>a</b>) Input image 1; (<b>b</b>) <span class="html-italic">Cluster</span> = 1 after DBSCAN; (<b>c</b>) FCM quantification; (<b>d</b>) noise by fuzzy stretching; (<b>e</b>) extraction by labeling.</p>
Full article ">
13 pages, 20951 KiB  
Article
Prediction of Neoadjuvant Chemotherapy Response in Osteosarcoma Using Convolutional Neural Network of Tumor Center 18F-FDG PET Images
by Jingyu Kim, Su Young Jeong, Byung-Chul Kim, Byung-Hyun Byun, Ilhan Lim, Chang-Bae Kong, Won Seok Song, Sang Moo Lim and Sang-Keun Woo
Diagnostics 2021, 11(11), 1976; https://doi.org/10.3390/diagnostics11111976 - 25 Oct 2021
Cited by 11 | Viewed by 2356
Abstract
We compared the accuracy of prediction of the response to neoadjuvant chemotherapy (NAC) in osteosarcoma patients between machine learning approaches of whole tumor utilizing fluorine-18 fluorodeoxyglucose (18F-FDG) uptake heterogeneity features and a convolutional neural network of the intratumor image region. [...] Read more.
We compared the accuracy of prediction of the response to neoadjuvant chemotherapy (NAC) in osteosarcoma patients between machine learning approaches utilizing fluorine-18 fluorodeoxyglucose (18F-FDG) uptake heterogeneity features of the whole tumor and a convolutional neural network of the intratumor image region. In 105 patients with osteosarcoma, 18F-FDG positron emission tomography/computed tomography (PET/CT) images were acquired before (baseline PET0) and after NAC (PET1). Patients were divided into responders and non-responders to neoadjuvant chemotherapy. Quantitative 18F-FDG heterogeneity features were calculated using LIFEX version 4.0. Receiver operating characteristic (ROC) curve analysis of 18F-FDG uptake heterogeneity features was used to predict the response to NAC. Machine learning algorithms and 2-dimensional convolutional neural network (2D CNN) deep learning networks were evaluated for predicting NAC response with the baseline PET0 images of the 105 patients. ML was performed using the entire tumor image. The accuracy of the 2D CNN prediction model was evaluated using all tumor slices, the center 20 slices, the center 10 slices, and the center slice. A total of 80 patients were used for k-fold validation in five groups of 16 patients each. The CNN network test accuracy was estimated using the remaining 25 patients. The areas under the ROC curves (AUCs) for baseline PET maximum standardized uptake value (SUVmax), total lesion glycolysis (TLG), metabolic tumor volume (MTV), and gray-level size zone matrix (GLSZM) were 0.532, 0.507, 0.510, and 0.626, respectively. The test accuracies of the texture-feature machine learning models, random forest and support vector machine, were 0.55 and 0.54, respectively. The k-fold training and validation accuracies were 0.968 ± 0.01 and 0.610 ± 0.04, respectively. The test accuracies for all tumor slices, the center 20 slices, the center 10 slices, and the center slice were 0.625, 0.616, 0.628, and 0.760, respectively. The prediction model based on baseline PET0 texture features and machine learning performed poorly, but the 2D CNN network using baseline 18F-FDG PET0 images could predict the treatment response before chemotherapy in osteosarcoma. Additionally, the 2D CNN prediction model using the tumor center slice of 18F-FDG PET images before NAC can help decide whether to perform NAC to treat osteosarcoma patients. Full article
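The per-feature ROC analysis reported in this abstract (e.g., the AUC of 0.626 for GLSZM) follows a standard pattern that can be sketched with scikit-learn; the feature values below are synthetic, so the resulting AUC is illustrative only:

```python
# Sketch of single-feature ROC/AUC analysis as described above.
# Synthetic stand-in: a baseline feature for 60 non-responders (0)
# and 45 responders (1), with only weak separation (as with SUVmax).
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
y_true = np.array([0] * 60 + [1] * 45)
feature = np.concatenate([rng.normal(5.0, 2.0, 60),
                          rng.normal(5.5, 2.0, 45)])

# AUC uses the raw feature as the classification score; roc_curve
# returns the operating points at every threshold.
auc = roc_auc_score(y_true, feature)
fpr, tpr, thresholds = roc_curve(y_true, feature)
print(f"AUC = {auc:.3f}")
```

An AUC near 0.5, as the study found for SUVmax, TLG, and MTV at baseline, means the feature alone barely separates responders from non-responders.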
(This article belongs to the Special Issue Artificial Intelligence Approaches for Medical Diagnostics in Korea)
Figure 1
<p>The <sup>18</sup>F-FDG 2D CNN model for predicting the response to neoadjuvant chemotherapy. The 2D CNN model consisted of two convolution layers and two fully connected layers.</p>
Full article ">Figure 2
<p>Representative <sup>18</sup>F-FDG PET images of osteosarcoma in a responder and a non-responder to neoadjuvant chemotherapy. The responder had SUVmax values of 11.33 and 4.43 at baseline (PET0) and after neoadjuvant chemotherapy (PET1), respectively. The non-responder had SUVmax values of 5.62 and 3.21 at baseline (PET0) and after neoadjuvant chemotherapy (PET1), respectively.</p>
Full article ">Figure 3
<p>T-SNE plot using image texture features of osteosarcoma patients. In the plot, 0 represents the chemotherapy non-responder and 1 represents the chemotherapy responder.</p>
Full article ">Figure 4
<p>Area under the receiver operating characteristic curves (AUC) for <sup>18</sup>F-FDG heterogeneity features in baseline PET0. Conventional parameters (i.e., maximum standardized uptake value (SUVmax), total lesion glycolysis (TLG), and metabolic tumor volume (MTV)), cannot predict the response to neoadjuvant chemotherapy before treatment. In contrast, the <sup>18</sup>F-FDG intensity size zone feature (gray-level size zone matrix: GLSZM) heterogeneity can predict this response.</p>
Full article ">Figure 5
<p>Area under the receiver operating characteristic curves (AUC) for <sup>18</sup>F-FDG heterogeneity features in PET1. Maximum standardized uptake value (SUVmax), total lesion glycolysis (TLG), and metabolic tumor volume (MTV) as well as <sup>18</sup>F-FDG uptake heterogeneity features such as image voxel alignment heterogeneity (GLRIM_HGHGE), image neighborhood intensity difference (NGLDM_SNE), and image intensity size zone (GLSZM) can predict the response to neoadjuvant chemotherapy.</p>
Full article ">Figure 6
<p>Deep features T-SNE plot using patients of osteosarcoma baseline PET0. In the plot, 0 represents the chemotherapy non-responder and 1 represents the chemotherapy responder.</p>
Full article ">
12 pages, 2189 KiB  
Article
Machine Learning-Based Three-Month Outcome Prediction in Acute Ischemic Stroke: A Single Cerebrovascular-Specialty Hospital Study in South Korea
by Dougho Park, Eunhwan Jeong, Haejong Kim, Hae Wook Pyun, Haemin Kim, Yeon-Ju Choi, Youngsoo Kim, Suntak Jin, Daeyoung Hong, Dong Woo Lee, Su Yun Lee and Mun-Chul Kim
Diagnostics 2021, 11(10), 1909; https://doi.org/10.3390/diagnostics11101909 - 15 Oct 2021
Cited by 19 | Viewed by 3126
Abstract
Background: Functional outcomes after acute ischemic stroke are of great concern to patients and their families, as well as physicians and surgeons who make the clinical decisions. We developed machine learning (ML)-based functional outcome prediction models in acute ischemic stroke. Methods: This retrospective [...] Read more.
Background: Functional outcomes after acute ischemic stroke are of great concern to patients and their families, as well as the physicians and surgeons who make the clinical decisions. We developed machine learning (ML)-based functional outcome prediction models for acute ischemic stroke. Methods: This retrospective study used a prospective cohort database. A total of 1066 patients with acute ischemic stroke between January 2019 and March 2021 were included. Variables such as demographic factors, stroke-related factors, laboratory findings, and comorbidities were utilized at the time of admission. Five ML algorithms were applied to predict a favorable functional outcome (modified Rankin Scale 0 or 1) at 3 months after stroke onset. Results: Regularized logistic regression showed the best performance, with an area under the receiver operating characteristic curve (AUC) of 0.86. Support vector machines showed the second-highest AUC of 0.85 with the highest F1-score of 0.86, and all ML models applied achieved an AUC > 0.8. The National Institute of Health Stroke Scale at admission and age were consistently the top two important variables for the regularized logistic regression, random forest, and extreme gradient boosting models. Conclusions: ML-based functional outcome prediction models for acute ischemic stroke were validated and proven to be readily applicable and useful. Full article
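A regularized logistic regression pipeline of the kind that performed best here can be sketched with scikit-learn; the predictors and outcome below are synthetic stand-ins for the admission variables and the 3-month mRS 0-1 label, not the study's cohort data:

```python
# Sketch of the best-performing model type above (regularized logistic
# regression) as a scikit-learn pipeline on synthetic admission data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 1066                                 # cohort size in the study
X = rng.normal(size=(n, 8))              # e.g. NIHSS, age, labs (synthetic)
logit = X[:, 0] * 1.2 + X[:, 1] * 0.8 + rng.normal(0, 1, n)
y = (logit > 0).astype(int)              # 1 = favorable outcome (mRS 0-1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0)
# L2 penalty = "regularized" logistic regression; scaling first so the
# penalty treats all admission variables comparably.
model = make_pipeline(StandardScaler(),
                      LogisticRegression(penalty="l2", C=1.0,
                                         max_iter=1000))
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"test AUC: {auc:.2f}")
```

The study additionally applied oversampling (SMOTE/ADASYN, per Figure 2) before fitting, which is omitted from this sketch.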
(This article belongs to the Special Issue Artificial Intelligence Approaches for Medical Diagnostics in Korea)
Figure 1
<p>Flow chart of patient inclusion and exclusion. LNT, last normal time; mRS, modified Rankin Scale.</p>
Full article ">Figure 2
<p>Entire machine learning modeling process for this study. SMOTE, synthetic minority oversampling technique; ADASYN, adaptive synthetic; RLR, regularized logistic regression; SVM, support vector machines; RF, random forest; KNN, k-nearest neighbors; XGB, extreme gradient boosting; AUC, area under the receiver operating characteristic curve; ACC, accuracy.</p>
Full article ">Figure 3
<p>Results of external validation of each machine learning algorithm. (<b>a</b>) Receiver operating characteristic curves and (<b>b</b>) calibration plots are represented. The regularized logistic regression model showed the best performance with an AUC of 0.86 (red line). Overall, all the ML models showed AUC &gt; 0.8. AUC, area under the receiver operating characteristic curve; RLR, regularized logistic regression; SVM, support vector machines; RF, random forest; KNN, k-nearest neighbors; XGB, extreme gradient boosting.</p>
Full article ">Figure 4
<p>Top ten important variables in (<b>a</b>) regularized logistic regression, (<b>b</b>) random forest, and (<b>c</b>) extreme gradient boosting models. The top two important features were consistent in all three models: NIHSS at admission, followed by age. Additionally, random glucose, hemoglobin, and triglyceride were also included in the top ten important variables in all three models. NIHSS, National Institute of Health Stroke Scale; IV, intravenous; IA, intraarterial.</p>
Figure 4 Cont.">
Full article ">
13 pages, 3568 KiB  
Article
Automatic Meniscus Segmentation Using Adversarial Learning-Based Segmentation Network with Object-Aware Map in Knee MR Images
by Uju Jeon, Hyeonjin Kim, Helen Hong and Joonho Wang
Diagnostics 2021, 11(9), 1612; https://doi.org/10.3390/diagnostics11091612 - 3 Sep 2021
Cited by 3 | Viewed by 2533
Abstract
Meniscus segmentation from knee MR images is an essential step when analyzing the length, width, height, cross-sectional area, and surface area for meniscus allograft transplantation using a 3D reconstruction model based on the patient’s normal meniscus. In this paper, we propose a two-stage DCNN [...] Read more.
Meniscus segmentation from knee MR images is an essential step when analyzing the length, width, height, cross-sectional area, and surface area for meniscus allograft transplantation using a 3D reconstruction model based on the patient’s normal meniscus. In this paper, we propose a two-stage DCNN that combines a 2D U-Net-based meniscus localization network with a conditional generative adversarial network-based segmentation network using an object-aware map. First, the 2D U-Net segments knee MR images into six classes, including bone and cartilage, on whole MR images at a resolution of 512 × 512 to localize the medial and lateral meniscus. Second, adversarial learning with a generator based on the 2D U-Net and a discriminator based on the 2D DCNN using an object-aware map segments the meniscus within localized regions of interest at a resolution of 64 × 64. The average Dice similarity coefficient of the meniscus was 85.18% at the medial meniscus and 84.33% at the lateral meniscus; these values were 10.79%p and 1.14%p higher at the medial meniscus, and 7.78%p and 1.12%p higher at the lateral meniscus, than those of the segmentation methods without adversarial learning and without the use of an object-aware map, respectively. The proposed automatic meniscus localization through multi-class segmentation can prevent the class imbalance problem by focusing on local regions. The proposed adversarial learning using an object-aware map can prevent under-segmentation, by repeatedly judging and improving the segmentation results, and over-segmentation, by considering information only from the meniscus regions. Our method can be used to identify and analyze the shape of the meniscus for allograft transplantation using a 3D reconstruction model of the patient’s unruptured meniscus. Full article
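The Dice similarity coefficient used here to score the meniscus masks can be computed with a few lines of NumPy; the binary masks below are synthetic, not MR segmentations:

```python
# The Dice similarity coefficient used above to score segmentation
# masks, as a small NumPy function on binary arrays.
import numpy as np

def dice(pred, gt):
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks, in percent."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 100.0 * 2.0 * inter / denom if denom else 100.0

gt = np.zeros((64, 64), dtype=np.uint8)
gt[20:40, 20:40] = 1                     # ground-truth meniscus region
pred = np.zeros_like(gt)
pred[22:42, 20:40] = 1                   # slightly shifted prediction
print(f"DSC = {dice(pred, gt):.2f}%")    # DSC = 90.00%
```

The reported "%p" differences in the abstract are absolute differences between such percentage scores (e.g., 85.18% vs. 74.39% is 10.79%p).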
(This article belongs to the Special Issue Artificial Intelligence Approaches for Medical Diagnostics in Korea)
Figure 1
<p>Characteristics of the meniscus in knee MR images: (<b>a</b>) Structures of the lateral and medial meniscus in the axial view; (<b>b</b>) Similar intensity of the medial meniscus and collateral ligaments in the coronal view; (<b>c</b>) Shape variances of anterior and posterior horns of the meniscus among patients; (<b>d</b>) Inhomogeneous intensity of lateral and medial meniscus (yellow arrow) in knee MR images.</p>
Full article ">Figure 2
<p>Pipeline of the proposed method.</p>
Full article ">Figure 3
<p>Automatic localization of the meniscus. (<b>a</b>) The medial meniscus localization. (<b>b</b>) Original image. (<b>c</b>) The lateral meniscus localization. (<b>d</b>) The six-class segmentation results (dark grey: femur, medium grey: femoral cartilage, grey: tibia, light grey: tibial cartilage, white: meniscus).</p>
Full article ">Figure 4
<p>Architecture of the discriminator of the conventional adversarial network and the proposed adversarial network using an object-aware map: (<b>a</b>) Conventional adversarial network; (<b>b</b>) Adversarial network using object-aware map.</p>
Full article ">Figure 5
<p>Segmentation results of before and after adversarial learning: (<b>a</b>) Original image; (<b>b</b>) Before adversarial learning; (<b>c</b>) After adversarial learning; (<b>d</b>) Ground truth.</p>
Full article ">Figure 6
<p>Segmentation results of conventional adversarial network and adversarial network using an object-aware map: (<b>a</b>) Original image; (<b>b</b>) Conventional adversarial network; (<b>c</b>) Adversarial network using object-aware map; (<b>d</b>) Ground truth.</p>
Figure 6 Cont.">
Full article ">Figure 7
<p>Qualitative evaluation of meniscus segmentation results: (<b>a</b>) Test images; (<b>b</b>) Results of Method A; (<b>c</b>) Results of Method B; (<b>d</b>) Results of Method C; (<b>e</b>) Results of Method D. (Red: overlay area, Green and Blue: under- and over-segmentation areas, respectively).</p>
Full article ">
15 pages, 1364 KiB  
Article
Artificial Intelligence Is Reshaping Healthcare amid COVID-19: A Review in the Context of Diagnosis & Prognosis
by Rajnandini Saha, Satyabrata Aich, Sushanta Tripathy and Hee-Cheol Kim
Diagnostics 2021, 11(9), 1604; https://doi.org/10.3390/diagnostics11091604 - 2 Sep 2021
Cited by 9 | Viewed by 5870
Abstract
Preventing respiratory failure is crucial in a large proportion of COVID-19 patients infected with SARS-CoV-2 virus pneumonia, termed Novel Coronavirus Pneumonia (NCP). Rapid diagnosis and detection of high-risk patients for effective interventions have been shown to be troublesome. Using a large computed [...] Read more.
Preventing respiratory failure is crucial in a large proportion of COVID-19 patients infected with SARS-CoV-2 virus pneumonia, termed Novel Coronavirus Pneumonia (NCP). Rapid diagnosis and detection of high-risk patients for effective interventions have been shown to be troublesome. Using a large computed tomography (CT) database, we developed an artificial intelligence (AI) parameter to diagnose NCP and distinguish it from other kinds of pneumonia and traditional controls. The literature was studied and analyzed from diverse sources, including Scopus, Nature Medicine, IEEE, Google Scholar, the Wiley Library, and PubMed. The search terms used were ‘COVID-19’, ‘AI’, ‘diagnosis’, and ‘prognosis’. To strengthen the overall performance of AI in COVID-19 diagnosis and prognosis, we segregated several components to perceive threats and opportunities, as well as their inter-dependencies, that affect the healthcare sector. This paper seeks to identify the critical success factors for AI within the healthcare sector in the Indian context. Using a critical literature review and expert opinion, a total of 11 factors affecting COVID-19 diagnosis and prognosis were detected, and we eventually used an interpretive structural model (ISM) to build a framework of interrelationships among the identified factors. Finally, the matrice d’impacts croisés multiplication appliquée à un classement (MICMAC) analysis yielded the driving and dependence powers of these identified factors. Our analysis will help healthcare stakeholders realize the requirements for successful implementation of AI. Full article
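The MICMAC step described here derives each factor's driving power (row sum of the final reachability matrix) and dependence power (column sum), where the final reachability matrix is the transitive closure of the binary influence matrix built in the ISM step. A minimal sketch, using a small illustrative 4-factor chain rather than the paper's 11 factors:

```python
# Sketch of the ISM/MICMAC computation described above: transitive
# closure of a binary influence matrix, then row/column sums as
# driving and dependence powers. The 4-factor matrix is illustrative.
import numpy as np

# adj[i, j] = True if factor i influences factor j (diagonal = self).
adj = np.array([[1, 1, 0, 0],
                [0, 1, 1, 0],
                [0, 0, 1, 1],
                [0, 0, 0, 1]], dtype=bool)

# Boolean transitive closure (Warshall's algorithm) gives the
# final reachability matrix.
reach = adj.copy()
for k in range(len(reach)):
    reach |= np.outer(reach[:, k], reach[k, :])

driving = reach.sum(axis=1)      # row sums: driving power
dependence = reach.sum(axis=0)   # column sums: dependence power
print("driving:", driving, "dependence:", dependence)
```

For the chain 1→2→3→4 above, factor 1 has the highest driving power and factor 4 the highest dependence, which is exactly the kind of quadrant placement a MICMAC driving-power/dependence diagram (Figure 4) visualizes.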
(This article belongs to the Special Issue Artificial Intelligence Approaches for Medical Diagnostics in Korea)
Figure 1
<p>Diagnostic and prognostic management.</p>
Full article ">Figure 2
<p>Framework of the paper.</p>
Full article ">Figure 3
<p>Flowchart of solution methodology.</p>
Full article ">Figure 4
<p>Driving power and dependence diagram.</p>
Full article ">Figure 5
<p>ISM Model.</p>
Full article ">
11 pages, 18315 KiB  
Article
Deep Learning-Based Prediction of Paresthesia after Third Molar Extraction: A Preliminary Study
by Byung Su Kim, Han Gyeol Yeom, Jong Hyun Lee, Woo Sang Shin, Jong Pil Yun, Seung Hyun Jeong, Jae Hyun Kang, See Woon Kim and Bong Chul Kim
Diagnostics 2021, 11(9), 1572; https://doi.org/10.3390/diagnostics11091572 - 30 Aug 2021
Cited by 26 | Viewed by 3368
Abstract
The purpose of this study was to determine whether convolutional neural networks (CNNs) can predict paresthesia of the inferior alveolar nerve using panoramic radiographic images before extraction of the mandibular third molar. The dataset consisted of a total of 300 preoperative panoramic radiographic [...] Read more.
The purpose of this study was to determine whether convolutional neural networks (CNNs) can predict paresthesia of the inferior alveolar nerve using panoramic radiographic images before extraction of the mandibular third molar. The dataset consisted of a total of 300 preoperative panoramic radiographic images of patients who had planned mandibular third molar extraction. A total of 100 images taken of patients who had paresthesia after tooth extraction were classified as Group 1, and 200 images taken of patients without paresthesia were classified as Group 2. The dataset was randomly divided into a training and validation set (n = 150 [50%]), and a test set (n = 150 [50%]). CNNs of SSD300 and ResNet-18 were used for deep learning. The average accuracy, sensitivity, specificity, and area under the curve were 0.827, 0.84, 0.82, and 0.917, respectively. This study revealed that CNNs can assist in the prediction of paresthesia of the inferior alveolar nerve after third molar extraction using panoramic radiographic images. Full article
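The reported summary metrics follow directly from a binary confusion matrix; the counts below are hypothetical but chosen so that the resulting accuracy (0.827), sensitivity (0.84), and specificity (0.82) match the averages reported above:

```python
# Accuracy, sensitivity, and specificity from a binary confusion
# matrix. Counts are hypothetical (42/8/82/18 over a 150-image test
# set), chosen to reproduce the averages reported in the abstract.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1] * 50 + [0] * 100)   # 1 = paresthesia (Group 1)
y_pred = np.array([1] * 42 + [0] * 8      # predictions for Group 1
                  + [0] * 82 + [1] * 18)  # predictions for Group 2

# For labels {0, 1}, ravel() yields tn, fp, fn, tp in that order.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(accuracy, sensitivity, specificity)
```

This is the same bookkeeping as the true-positive/false-negative/false-positive/true-negative cells labeled a-d in Figure 6.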
(This article belongs to the Special Issue Artificial Intelligence Approaches for Medical Diagnostics in Korea)
Figure 1
<p>Overall scheme of this study. ROI: region of interest. Group 1: paresthesia after mandibular third molar extraction. Group 2: without paresthesia.</p>
Full article ">Figure 2
<p>SSD300 architecture. The red box means ROI.</p>
Full article ">Figure 3
<p>ResNet-18 architecture.</p>
Full article ">Figure 4
<p>Five-fold cross-validation method.</p>
Full article ">Figure 5
<p>Classification results. The ROC curve was created by plotting sensitivity against (1-specificity) at various threshold settings. AUC is area under the curve. The blue line is the average ROC curve. The light blue lines are the ROC curves of each classification model.</p>
Full article ">Figure 6
<p>Classification results—confusion matrix. Prediction: paresthesia expected after extraction, as determined by the deep-learning model. Group 1: Paresthesia of the inferior alveolar nerve actually appeared after mandibular third molar extraction. Group 2: No paresthesia of the inferior alveolar nerve appeared after mandibular third molar extraction. <span class="html-italic"><sup>a</sup></span> True positive. <span class="html-italic"><sup>b</sup></span> False negative. <span class="html-italic"><sup>c</sup></span> False positive. <span class="html-italic"><sup>d</sup></span> True negative.</p>
Full article ">Figure 7
<p>CNN visualization results.</p>
Full article ">
10 pages, 3509 KiB  
Article
Automated Mesiodens Classification System Using Deep Learning on Panoramic Radiographs of Children
by Younghyun Ahn, Jae Joon Hwang, Yun-Hoa Jung, Taesung Jeong and Jonghyun Shin
Diagnostics 2021, 11(8), 1477; https://doi.org/10.3390/diagnostics11081477 - 15 Aug 2021
Cited by 33 | Viewed by 4375
Abstract
In this study, we aimed to develop and evaluate the performance of deep-learning models that automatically classify mesiodens in primary or mixed dentition panoramic radiographs. Panoramic radiographs of 550 patients with mesiodens and 550 patients without mesiodens were used. Primary or mixed dentition [...] Read more.
In this study, we aimed to develop and evaluate the performance of deep-learning models that automatically classify mesiodens in primary or mixed dentition panoramic radiographs. Panoramic radiographs of 550 patients with mesiodens and 550 patients without mesiodens were used. Patients with primary or mixed dentition were included. SqueezeNet, ResNet-18, ResNet-101, and Inception-ResNet-V2 were each used to create deep-learning models. The accuracy, precision, recall, and F1 score of ResNet-101 and Inception-ResNet-V2 were higher than 90%. SqueezeNet exhibited relatively inferior results. In addition, we attempted to visualize the models using class activation maps. In images with mesiodens, the deep-learning models focused on the actual locations of the mesiodens in many cases. Deep-learning technologies may help clinicians with limited clinical experience make more accurate and faster diagnoses. Full article
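The five-fold cross-validation scheme used here (per Figure 2: five groups of 200 images; four for training, one for validation, rotating across folds) can be sketched with scikit-learn's KFold; plain indices stand in for the radiographs:

```python
# Sketch of the five-fold cross-validation described above: 1000
# images split into 5 groups of 200; each fold trains on 800 and
# validates on 200, and performances are averaged across folds.
import numpy as np
from sklearn.model_selection import KFold

indices = np.arange(1000)            # 5 x 200 images, as in Figure 2
kf = KFold(n_splits=5, shuffle=True, random_state=0)

fold_sizes = []
for fold, (train_idx, val_idx) in enumerate(kf.split(indices)):
    # A model would be trained on train_idx and scored on val_idx here;
    # the per-fold scores are then averaged for the estimate.
    fold_sizes.append((len(train_idx), len(val_idx)))
print(fold_sizes)
```

Averaging the five per-fold scores is what the caption calls the "estimated performance" of the model.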
(This article belongs to the Special Issue Artificial Intelligence Approaches for Medical Diagnostics in Korea)
Figure 1
<p>Region of interest (ROI). Images were cropped, as shown in the blue box, based on the distal and uppermost points of both permanent canine tooth germs and the mandibular anterior alveolar bone level.</p>
Full article ">Figure 2
<p>Five-fold cross-validation. The data were randomly divided into 5 groups, each consisting of 200 images. Four of these groups were used as training data and the remaining group was used as validation data. The diagnostic performance for each cross-validation set was evaluated, and the average of the five models was regarded as the estimated performance.</p>
Full article ">Figure 3
<p>Receiver operating characteristic curves of deep-learning models on dataset 1. Numbers in parentheses show the area under the curve values.</p>
Full article ">Figure 4
<p>Example of the class activation maps of four deep-learning models.</p>
Full article ">Figure 5
<p>Boxplot of diagnostic performances among three groups. Kruskal–Wallis test was performed to analyze the statistical significance of specificity. GP: General Practitioners, PS: Pediatric Specialists, * <span class="html-italic">p</span> &lt; 0.05; Kruskal–Wallis test.</p>
Full article ">Figure 6
<p>Differences in answers between human evaluators and deep-learning models on dataset 2. Blue: true positive, Red: false negative, Yellow: true negative, and Green: false positive. GP: General Practitioners, PS: Pediatric Specialists.</p>
Full article ">

Review


11 pages, 223 KiB  
Review
Trends in the Approval and Quality Management of Artificial Intelligence Medical Devices in the Republic of Korea
by Kyoungtaek Lim, Tae-Young Heo and Jaesuk Yun
Diagnostics 2022, 12(2), 355; https://doi.org/10.3390/diagnostics12020355 - 30 Jan 2022
Cited by 5 | Viewed by 5091
Abstract
Artificial intelligence (AI) is being implemented in many areas of medicine, such as patient-customized diagnosis. Growth in the artificial intelligence medical device (AIMD) field is expected in the coming years. Major countries are currently establishing systems and policies to gain a leading position [...] Read more.
Artificial intelligence (AI) is being implemented in many areas of medicine, such as patient-customized diagnosis. Growth in the artificial intelligence medical device (AIMD) field is expected in the coming years. Major countries are currently establishing systems and policies to gain a leading position in the medical artificial intelligence market. The Republic of Korea has initiated the Act on Nurturing the Medical Devices Industry and Supporting Innovative Medical Devices for the development of AIMDs and is implementing it preemptively. As a result, the country has achieved an effective strategy for coping with the COVID-19 pandemic, an increase in the number of AIMD approvals (85 approved as of September 2021), and the creation of a document pertaining to internationally harmonized guidelines on AIMD-related terms and definitions. However, in order to develop and activate more AIMD products, it is necessary to improve post-market management, such as product change and quality control, in addition to approval. Here, we review the current regulatory status of AIMDs in the Republic of Korea and what needs to be improved for AIMDs to be further developed and adopted. Full article
(This article belongs to the Special Issue Artificial Intelligence Approaches for Medical Diagnostics in Korea)