Review

Predictive and Explainable Artificial Intelligence for Neuroimaging Applications

1 Department of Physical Medicine and Rehabilitation, Anam Hospital, Korea University College of Medicine, Seoul 02841, Republic of Korea
2 AI Center, Anam Hospital, Korea University College of Medicine, Seoul 02841, Republic of Korea
* Authors to whom correspondence should be addressed.
Diagnostics 2024, 14(21), 2394; https://doi.org/10.3390/diagnostics14212394
Submission received: 4 October 2024 / Revised: 24 October 2024 / Accepted: 25 October 2024 / Published: 27 October 2024

Abstract
Background: The aim of this review is to highlight recent advances in predictive and explainable artificial intelligence for neuroimaging applications. Methods: Data came from 30 original studies in PubMed retrieved with the following search terms: "neuroimaging" (title) together with "machine learning" (title) or "deep learning" (title). The 30 original studies were eligible according to the following criteria: participants with the dependent variable of brain image or associated disease; interventions/comparisons of artificial intelligence; outcomes of accuracy, the area under the curve (AUC), and/or variable importance; a publication year of 2019 or later; and publication in English. Results: The performance outcomes reported were within 58–96 for accuracy (%), 66–97 for sensitivity (%), 76–98 for specificity (%), and 70–98 for the AUC (%). The support vector machine and the convolutional neural network registered the best performance (AUC 98%) for the classification of low- vs. high-grade glioma and of brain conditions, respectively. Likewise, the random forest delivered the best performance (root mean square error 1) for the regression of brain conditions. The following factors were discovered to be major predictors of brain image or associated disease: (demographic) age, education, sex; (health-related) alpha desynchronization, Alzheimer's disease stage, CD4, depression, distress, mild behavioral impairment, RNA sequencing; (neuroimaging) abnormal amyloid-β, amplitude of low-frequency fluctuation, cortical thickness, functional connectivity, fractal dimension measure, gray matter volume, left amygdala activity, left hippocampal volume, plasma neurofilament light, right cerebellum, regional homogeneity, right middle occipital gyrus, surface area, sub-cortical volume. Conclusion: Predictive and explainable artificial intelligence provide an effective, non-invasive decision support system for neuroimaging applications.

1. Introduction

1.1. Brain Disease

Brain disease represents a significant contributor to the global disease burden [1,2,3]. In 2021, it was estimated that over three billion people globally were affected by neurological conditions [3]. Premature death and disability (disability-adjusted life years, DALYs) from neurological conditions have grown by 18% since 1990. More than 80% of this burden comes from low- and middle-income countries. Furthermore, there is considerable variation in access to treatment, i.e., there are almost 70 times more neurological professionals per 100,000 people in high-income countries than in low- and middle-income countries. Stroke, neonatal encephalopathy, migraine, dementia, and diabetic neuropathy, as well as meningitis, epilepsy, neurological complications from preterm birth, autism spectrum disorder, and nervous system cancers, were the top 10 neurological conditions in 2021. The burden of brain disease is generally greater in men than in women, although certain conditions, including migraine and dementia, are more common in women [3]. There are many types of brain disease, e.g., autoimmune brain diseases, epilepsy, infections, mental illness (i.e., anxiety, bipolar disorder, depression, post-traumatic stress disorder, schizophrenia), neurodegenerative brain diseases (Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis), neurodevelopmental disorders (attention deficit hyperactivity disorder, autism spectrum disorder, dyslexia), stroke, traumatic brain injuries, and tumors [4,5,6].
Autoimmune brain diseases are characterized by the body's immune system attacking a part of the brain, which it identifies as an invader. Epilepsy is defined as a tendency to experience seizures, which are characterized by electrical disturbances in the brain. These seizures typically disrupt consciousness and manifest as convulsions, which are uncontrolled muscle movements. Infections occur when various types of pathogens invade the brain or its protective coverings. Mental, behavioral, and emotional disorders have the potential to impair an individual's quality of life and their capacity to function effectively; the principal categories are anxiety, bipolar disorder, depression, post-traumatic stress disorder (PTSD), and schizophrenia. The accumulation of abnormal proteins in the brain is a common underlying cause of neurodegenerative disorders, which include conditions such as Alzheimer's disease, Parkinson's disease, and amyotrophic lateral sclerosis (ALS), among numerous others. Neurodevelopmental disorders impact the growth and development of the brain, with care typically provided by pediatric neurologists. Medical geneticists may ascertain the likelihood of an inherited disorder, and family counseling is provided when a genetic predisposition is identified. A considerable number of neurodevelopmental disorders exist, including attention deficit hyperactivity disorder (ADHD), autism spectrum disorder, and dyslexia. A stroke is defined as the obstruction or rupture of a cerebral blood vessel, which results in the interruption of cerebral blood flow and subsequent injury to the brain parenchyma. Traumatic brain injuries encompass a range of conditions, from mild concussions to more severe injuries such as gunshot wounds. Brain tumors may result from the metastasis of malignant cells from other regions of the body, including the lungs, breasts, and colon, or they may develop within the brain tissue itself or its coverings. Astrocytomas are a common type of tumor that originates from the brain itself, whereas a meningioma is a common tumor that develops from the coverings of the brain [6].

1.2. Neuroimaging and Artificial Intelligence

The concepts of neuroimaging and artificial intelligence have recently attracted global interest. A brain imaging method can be defined as any experimental technique that allows for the study of the structure or function of the human (or animal) brain, preferably in vivo in the context of the present study [7]. The optimal method should yield precise temporal and spatial localization of cerebral function, structure, or alterations in these properties. The optimal method should also be minimally invasive and readily repeatable for treatment monitoring and therapeutic development. Structural magnetic resonance imaging (MRI) meets these requirements for structural imaging. However, there is no single optimal technique for functional imaging, even though electroencephalography (EEG), positron emission tomography (PET), and functional magnetic resonance imaging (fMRI) are very popular. EEG and PET have been available for four decades or more, whereas fMRI is the newest widely used technique. Arguably, PET is the most invasive, given the administration of radioisotopes, and EEG has poor spatial mapping properties. Given these limitations, fMRI has become the most common functional brain-mapping approach [7].
On the other hand, artificial intelligence can be denoted as "the capability of a machine to imitate intelligent human behavior" (the Merriam–Webster dictionary). As a division of artificial intelligence, machine learning can be considered to be "extracting knowledge from large amounts of data" [8]. Popular machine learning approaches are the decision tree, naïve Bayes, the random forest, the support vector machine, and the neural network (see [8,9] for more detailed explanations). In particular, a random forest is a group of decision trees that collectively makes a majority decision regarding the dependent variable, a process known as "bootstrap aggregation". For the purposes of this discussion, consider a random forest comprising 1000 decision trees built from an original data set of 10,000 participants. The training and testing of this random forest are conducted in two stages. Initially, a new data set of 10,000 participants is generated through random sampling with replacement, upon which a decision tree is constructed. In this process, some participants from the original data set are excluded from the new data set, and these remaining participants are referred to as the "out-of-bag" data set. This process is repeated 1000 times, resulting in 1000 new data sets, 1000 decision trees, and 1000 out-of-bag data sets. Secondly, the 1000 decision trees make predictions regarding the dependent variable for each participant in the out-of-bag data. The majority vote is then taken as the final prediction for that participant, and the out-of-bag error is derived as the proportion of incorrect votes across all participants in the out-of-bag data sets. An artificial neural network is a network of neurons (information units) based on a set of weights. Typically, it has one input layer, one or more intermediate layers, and one output layer [9]. A deep neural network is an artificial neural network with a large number of intermediate layers, often in the range of 5 to 1000 [9].
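As a minimal sketch of the bootstrap-aggregation and out-of-bag procedure described above, the following Python example fits a random forest with scikit-learn and reports the out-of-bag error; the synthetic data and parameter choices (1000 trees, 10,000 samples) are illustrative assumptions rather than settings taken from any of the reviewed studies.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a neuroimaging data set: 10,000 "participants", 20 predictors.
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)

# 1000 bootstrap samples -> 1000 decision trees; oob_score=True evaluates each tree
# on the participants left out of its bootstrap sample (the "out-of-bag" data).
forest = RandomForestClassifier(n_estimators=1000, oob_score=True, random_state=0)
forest.fit(X, y)

print(f"Out-of-bag accuracy: {forest.oob_score_:.3f}")
print(f"Out-of-bag error:    {1 - forest.oob_score_:.3f}")
```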
The current research paradigm has a limited scope in terms of the predictors considered for the early diagnosis of disease. This is due to the use of logistic regression, which assumes the rather unrealistic condition of ceteris paribus, i.e., "all other variables remaining constant". In light of these limitations, the literature on the early diagnosis of disease is increasingly turning to artificial intelligence, which is not constrained by this assumption. Examples include studies on arrhythmia [10], birth outcome [11], cancer [12,13], comorbidity [14], depression [15], liver transplantation [16], menopause [17,18], and temporomandibular disease [19]. Furthermore, the concept of explainable artificial intelligence is currently experiencing a surge in popularity. The term "explainable artificial intelligence" is defined here as "artificial intelligence to identify major predictors of the dependent variable". At this point in time, three popular approaches to explainable artificial intelligence have been identified: random forest impurity importance, random forest permutation importance, and machine learning permutation importance [20]. The random forest impurity importance metric quantifies the reduction in node impurity resulting from the creation of a branch on a specific predictor. The random forest permutation importance metric quantifies the overall reduction in accuracy resulting from the random permutation of data on a given predictor. As an extension of random forest permutation importance, machine learning permutation importance calculates the decrease in accuracy resulting from the permutation of data on a given predictor for any machine learning model [20]. However, few reviews have examined artificial intelligence for neuroimaging applications. This study reviews the recent progress of predictive and explainable artificial intelligence for neuroimaging applications.
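For concreteness, the short sketch below contrasts the impurity-based and permutation-based importance measures named above using scikit-learn; the data set and predictor indices are purely illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=10, n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)

# Impurity importance: mean decrease in node impurity attributable to each predictor.
impurity_importance = forest.feature_importances_

# Permutation importance: mean decrease in test accuracy when a predictor is shuffled.
perm = permutation_importance(forest, X_test, y_test, n_repeats=20, random_state=0)

for i, (imp, per) in enumerate(zip(impurity_importance, perm.importances_mean)):
    print(f"predictor {i}: impurity={imp:.3f}  permutation={per:.3f}")
```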

2. Methods

Figure 1 shows the flow diagram of this study as a modified version of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses. The source of data was 30 original studies in PubMed. The search terms were "neuroimaging" (title) together with "machine learning" (title) or "deep learning" (title). The eligibility criteria were participants with the dependent variable of brain image or associated disease, interventions/comparisons of artificial intelligence, outcomes of accuracy, the AUC, and/or variable importance, a publication year of 2019 or later, and publication in English. Opinions, reports, and reviews were excluded. The following summary measures were adopted: (1) sample size (participants), baseline vs. innovation artificial intelligence methods (comparisons vs. interventions), dependent variable (participants), task type; (2) baseline vs. innovation performance outcomes; (3) major demographic, health-related, and neuroimaging predictors. Here, accuracy denotes the proportion of correct predictions over all observations. The area under the curve (AUC) represents the area under the plot of the true positive rate (sensitivity) against the false positive rate (1 − specificity) at various threshold settings. The AUC is a major performance criterion in this study, given that it accounts for both sensitivity and specificity.
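To make the performance measures used throughout the tables explicit, the sketch below computes accuracy, sensitivity, specificity, and the AUC with scikit-learn; the labels and scores are invented for illustration only.

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score

# Hypothetical ground truth, predicted labels, and predicted probabilities.
y_true = np.array([0, 0, 0, 1, 1, 1, 1, 0, 1, 0])
y_pred = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 0])
y_score = np.array([0.1, 0.2, 0.6, 0.8, 0.7, 0.4, 0.9, 0.3, 0.85, 0.2])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = accuracy_score(y_true, y_pred)    # correct predictions / all observations
sensitivity = tp / (tp + fn)                 # true positive rate
specificity = tn / (tn + fp)                 # true negative rate
auc = roc_auc_score(y_true, y_score)         # area under the ROC curve

print(f"Acc={accuracy:.2f}  Sen={sensitivity:.2f}  Spe={specificity:.2f}  AUC={auc:.2f}")
```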

3. Results

3.1. Summary

The summary of the review for the 30 original studies [21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50] is presented in Table 1, Table 2, Table 3 and Table 4. The “Study” column in the tables denotes the reference numbers of the 30 original studies. Also, abbreviations are listed in Table 5. The tables include (1) sample size, baseline vs. innovation artificial intelligence methods, dependent variable and task type (Table 1); (2) baseline vs. innovation performance outcomes (Table 2); (3) major demographic, health-related, and neuroimaging predictors (Table 3); (4) cross validation and major control variable (Table 4). The ranges of performance measures were reported to be 58–96 for accuracy (%), 66–97 for sensitivity (%), 76–98 for specificity (%), and 70–98 for the AUC (%). The support vector machine and the convolutional neural network registered the best performance (AUC 98%) for the classifications of low- vs. high-grade glioma [28] and brain image properties [44], respectively. Similarly, the random forest delivered the best performance (root mean square error 1) for the regression of brain image properties [43]. The following factors were discovered to be major predictors of brain image or associated disease: (demographic) age, education, sex; (health-related) alpha desynchronization, Alzheimer’s disease stage, CD4, depression, distress, mild behavioral impairment, RNA sequencing; (neuroimaging) abnormal amyloid-β, amplitude of low-frequency fluctuation, cortical thickness, functional connectivity, fractal dimension measure, gray matter volume, left amygdala activity, left hippocampal volume, plasma neuro-filament light, right cerebellum, regional homogeneity, right middle occipital gyrus, surface area, sub-cortical volume. Finally, 22 original studies included cross validation, and 14 studies matched control and experimental groups in age, sex, and/or education (defined as “major control variables” in Table 4). The differences between the control and experimental groups in terms of the major control variables were statistically insignificant in the 14 studies. Predictive and explainable artificial intelligence provide an effective, non-invasive decision support system for neuroimaging applications. However, artificial intelligence is a data-driven approach, and more research is needed for more general conclusions given that the findings of this study above were based on the 30 original studies published in 2019 or later.

3.2. Predictive Artificial Intelligence

This section summarizes the original studies that highlight the strengths of predictive artificial intelligence with the best performance metrics for neuroimaging applications [28,43,44]. As addressed above, the support vector machine registered the best performance (AUC 98%) for the classification of low- vs. high-grade glioma in one study [28]. MRI data on texture and fractal dimension measures came from 28 glioma patients enrolled in a national medical institute. The dependent variable was the grade of glioma, coded 0 (low) vs. 1 (high). The independent variables were 25 texture and 15 fractal dimension indicators. The accuracy, sensitivity, specificity, and AUC of the support vector machine were 93%, 97%, 98%, and 98%, respectively, for the general structure of the enhanced tumor. These best results were followed by those for the boundary of the whole tumor, i.e., an accuracy, sensitivity, specificity, and area under the curve of 83%, 100%, 60%, and 80%. These findings of multivariable machine learning were consistent with their univariate counterparts. The fractal dimension measures of high-grade glioma were significantly greater than those of low-grade glioma (low vs. high): 1.221 vs. 1.626 for the general structure of the enhanced tumor (p < 0.0001); 0.923 vs. 0.940 for the boundary of the whole tumor (p = 0.0105). This study suggests that examining the whole tumor and its components separately can provide important insights regarding predictive artificial intelligence for neuroimaging applications.
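The following sketch shows, under simplified assumptions, how such a classifier might be evaluated: a support vector machine trained on tabular texture/fractal features with a cross-validated AUC. The synthetic features stand in for the study's real MRI-derived predictors and are not its actual pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for 40 texture/fractal predictors of low- vs. high-grade glioma.
X, y = make_classification(n_samples=200, n_features=40, n_informative=8, random_state=0)

# SVMs are scale-sensitive, so standardize features inside the pipeline.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True, random_state=0))

auc_scores = cross_val_score(svm, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {auc_scores.mean():.2f} (std {auc_scores.std():.2f})")
```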
Likewise, the convolutional neural network presented the best AUC of 98% for the classification of brain conditions [44]. The source of MRI data was 59 study participants. The outcome variables were somatic pain and social rejection. The input variables were brain networks such as the visual, somatomotor, dorsal attention, salience, limbic, frontoparietal, and default networks. The convolutional neural network was slightly better than the support vector machine as predictive artificial intelligence, i.e., 96%, 96%, 95%, and 98% vs. 92%, 94%, 91%, and 97% in terms of accuracy, sensitivity, specificity, and AUC. In a similar context, the random forest delivered the best performance (root mean square error less than 1) for the regression of brain conditions [43]. Data consisted of 400 study participants. The dependent variable was cognitive ability (measured by the Global Cognitive Assessment Task), and the independent variable was gray matter volume. The random forest outperformed the elastic net and ridge regression, with a root mean square error of less than 1. The findings above demonstrate that the best predictive artificial intelligence models for neuroimaging applications vary depending on the outcome and input variables. Little research has been done, and more analysis is needed, regarding which models serve as the best predictive artificial intelligence for varying brain conditions and analytic tasks.
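As a hedged sketch of this kind of regression comparison (not the reviewed study's actual pipeline), the code below compares a random forest, an elastic net, and ridge regression by cross-validated root mean square error on synthetic data.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import ElasticNet, Ridge
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for gray-matter-volume predictors of a cognitive score.
X, y = make_regression(n_samples=400, n_features=50, noise=10.0, random_state=0)

models = {
    "random forest": RandomForestRegressor(n_estimators=500, random_state=0),
    "elastic net": ElasticNet(alpha=1.0, random_state=0),
    "ridge": Ridge(alpha=1.0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="neg_root_mean_squared_error")
    print(f"{name:>13}: RMSE = {-scores.mean():.2f}")
```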

3.3. Explainable Artificial Intelligence

This section summarizes original studies that highlight the strengths of explainable artificial intelligence with multiple-domain data for brain disease applications [27,29,31,41]. The aim of a recent study was to develop explainable artificial intelligence for the classification of frailty in Human Immunodeficiency Virus (HIV) patients [27]. The source of MRI data was 105 study participants enrolled in a university medical center. The outcome variable was frailty in HIV patients. The input variables were demographic (sex), health-related (depression, CD4), and neuroimaging predictors. The sensitivity and F1 score of boosting were 66% and 71%, respectively. Based on boosting permutation variable importance, the top five predictors were reduced cerebral blood flow in the right pallidum region, reduced cerebral blood flow in the left occipital region, lower psychomotor performance, reduced volume of the right pericalcarine region, and lower resting-state functional connectivity between the frontoparietal and ventral attention networks. Likewise, another study attempted to highlight the strengths of boosting as explainable artificial intelligence for the regression of brain age [29]. Data consisted of 22,661 study participants enrolled in national projects. The dependent variable was brain age, and the independent variables were demographic (sex), health-related (Alzheimer's disease stage), and neuroimaging (abnormal amyloid-β, APOE-ε4, and plasma neurofilament light) predictors. The root mean square error of boosting was 4.
In a similar vein, the purpose of a recent study centered on developing explainable artificial intelligence for the classification of glioblastoma survival [31]. The source of MRI data was 133 study participants enrolled in a university medical center. The outcome variable was glioblastoma survival. The input variables were demographic (age, sex) and neuroimaging (cortical thickness, functional connectivity) predictors. The accuracy of the artificial neural network was 91%. According to artificial neural network permutation variable importance, the top five predictors were functional connectivity for distance correlation 10, the banks of the superior temporal sulcus (bankssts) cortex, age, sex, and functional connectivity for distance correlation 11. The success of these machine learning studies was extended to deep learning in a study that endeavored to demonstrate the strengths of the residual convolutional neural network as explainable artificial intelligence for the classification of motor performance in stroke [41]. Data consisted of 41 study participants enrolled in previous studies. The dependent variable was motor performance in stroke, and the independent variables were demographic (age, sex) and neuroimaging (axial diffusivity, fractional anisotropy, mean diffusivity, radial diffusivity, white matter, gray matter) predictors. The performance measures of the support vector machine and the residual convolutional neural network were similar, i.e., 91% and 91% vs. 92% and 92% in terms of accuracy and AUC.

4. Discussion

The existing literature on predictive and explainable artificial intelligence for neuroimaging applications has some limitations. Firstly, a majority of the studies reviewed here were characterized by single-center data with relatively small sample sizes. The utilization of multi-center data will facilitate further advancements in this field of research. Indeed, more analysis is needed regarding the effect of sample size on model performance. One study reviewed here [33] made a rare attempt in this direction. As the sample size increased from 100 to 10,000, the accuracy gap between machine learning (support vector machine) and deep learning (convolutional neural network-Alex) increased to 7% (51% vs. 58%) for the prediction of 10 brain age groups. However, more examination is needed on this topic, given that both machine learning and deep learning registered low performance and their performance difference was not very large in this study. Secondly, the accuracies of some studies reviewed here (58%) may not yet meet the standards required for use as diagnostic tests. In addition, only seven studies reviewed here used test sets, and these test sets came from internal sources. Despite these limitations, these studies were included in this review, given that further advances in predictive and explainable artificial intelligence for neuroimaging applications are not possible without trial and error.
Thirdly, the three common methods of explainable artificial intelligence (machine learning permutation importance, random forest permutation importance, and random forest impurity importance) may yield different outcomes on some occasions. Random forest impurity importance is more sensitive to how variables are categorized, tending to favor predictors with many categories. However, the random forest has a special quality of incorporating sequential information, and this quality is more apparent with impurity importance. In this context, an extensive comparison of the three methods of explainable artificial intelligence would be a major achievement for this line of research. Fourthly, other types of explainable artificial intelligence, such as local interpretable model-agnostic explanations (LIME) [51], and the trade-offs between predictive power and explanatory power were beyond the scope of this review. Fifthly, 22 studies reviewed here employed cross validation, but only eight studies reported performance measures over each subset [26], their standard deviations across all subsets [36], or their confidence intervals across all subsets [41,44,45,46,47,48]. In particular, this was a significant drawback for five of the nine studies that combined deep learning models with cross validation. In other words, certain risks of detection, attrition, and reporting bias can be identified. This issue requires much more attention in future studies on this topic.
Sixthly, hyper-parameter tuning was either absent or basic in the studies reviewed here. One possible explanation is that the neuroimaging investigation itself requires significant time and energy, leaving little room for hyper-parameter tuning. In spite of this reality, it is still a valid suggestion that advanced hyper-parameter tuning can be expected to bring significant improvements in the performance of predictive and explainable artificial intelligence for neuroimaging applications. One plausible approach to advanced hyper-parameter tuning is the policy gradient approach [52] (within reinforcement learning, addressed below). Here, the policy gradient can be defined as "the change of action to maximize the reward", e.g., the change of hyper-parameter selection to maximize the performance of predictive and explainable artificial intelligence for neuroimaging applications. In other words, the policy gradient approach can be denoted as "systematic hyper-parameter selection", i.e., finding the optimal values of hyper-parameters based on performance measures and major control variables [52]. These new approaches would expand the boundary of knowledge to a great extent. Seventhly, experts in the field of artificial intelligence focus on the performance of predictive and explainable artificial intelligence as the best indicator of study quality, and we followed this convention.
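As a point of reference for what even basic, systematic tuning looks like (a policy gradient tuner would replace the exhaustive grid below with a learned search policy), here is a minimal scikit-learn grid search over support vector machine hyper-parameters; the grid values are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=30, random_state=0)

# Candidate hyper-parameter values; a policy-gradient tuner would instead learn
# which regions of this space to explore based on observed performance.
param_grid = {"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.001]}

search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5, scoring="roc_auc")
search.fit(X, y)

print("Best hyper-parameters:", search.best_params_)
print(f"Best cross-validated AUC: {search.best_score_:.2f}")
```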
Indeed, some suggestions for this line of research are presented here. Firstly, synthesizing various forms of explainable artificial intelligence with various forms of data in the field of brain disease would represent a significant advancement. An increasing amount of artificial intelligence research is synthesizing genetic, image, and numeric methods for disease diagnosis, treatment, and management. This new approach is called "wide and deep learning", and it includes a great variety of multi-input multi-output combinations. A recent study [53] serves as a good example, given that it presents a glaucoma prediction system combining convolutional neural networks with their recurrent neural network counterparts. Here, the former network draws key image characteristics from multiple image inputs, and the latter predicts glaucoma outcomes from the course of these key image characteristics over time. In a convolutional neural network, filters move across the input data and detect certain characteristics based on their convolution operations, which are then used to predict the status of normal versus disease. In a recurrent neural network, output in the present is determined in a recurrent pattern by input in the present and memory of the past (called "the hidden state") [8,9]. There is a paucity of literature on this topic, and further investigation is required to gain insight into the integration of diverse forms of explainable artificial intelligence for diverse data types in the context of brain disease applications.
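To make the combined architecture concrete, here is a minimal PyTorch sketch (an assumption-laden illustration, not the cited glaucoma system): a small convolutional feature extractor is applied to each image in a sequence, and a recurrent layer integrates the per-frame features over time before classification.

```python
import torch
import torch.nn as nn

class CNNRNN(nn.Module):
    """Convolutional feature extractor per frame, followed by a GRU over time."""

    def __init__(self, n_classes: int = 2, hidden: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # one 32-dimensional vector per frame
        )
        self.rnn = nn.GRU(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels, height, width)
        b, t, c, h, w = x.shape
        feats = self.cnn(x.reshape(b * t, c, h, w)).reshape(b, t, 32)
        out, _ = self.rnn(feats)       # integrate frame features over time
        return self.head(out[:, -1])   # classify from the last hidden state

model = CNNRNN()
dummy = torch.randn(4, 5, 1, 64, 64)   # 4 image sequences, 5 frames each
print(model(dummy).shape)              # torch.Size([4, 2])
```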
Secondly, little examination has been done, and more investigation is needed, on reinforcement learning. Reinforcement learning has three key components: an environment providing a series of rewards, an agent taking a series of actions to maximize the cumulative reward, and the environment transitioning to the next period with given transition probabilities [54]. Here, artificial intelligence (e.g., AlphaGo) begins in a manner similar to that of a human player, taking a series of actions and maximizing the cumulative reward (chance of victory) from the limited information available over a limited number of periods. It is then capable of surpassing the performance of the best human player ever, owing to the immense power of big data covering all human players to date [54]. The popularity of reinforcement learning in finance and health can be attributed to its ability to achieve excellent results without unrealistic assumptions, while offering superior performance compared to conventional statistical models [55,56]. Nevertheless, there is a paucity of literature on the subject, and further investigation is required in order to gain a deeper understanding of explainable reinforcement learning. A recent review indicates that only a few studies have addressed this issue. These studies have employed simplified models with straightforward interpretations but have demonstrated inadequate performance and have given insufficient consideration to the psychological and social factors underlying optimization processes [57].
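The following toy sketch illustrates the components named above (an environment, actions, and rewards) with tabular Q-learning on a tiny hand-coded chain environment; the environment, rewards, and hyper-parameters are all invented for illustration and are unrelated to the reviewed studies.

```python
import numpy as np

# Toy chain environment: states 0..4, actions 0 (left) and 1 (right).
# The agent earns a reward of 1 only when it reaches the terminal state 4.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4

def step(state: int, action: int):
    next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

rng = np.random.default_rng(0)
q = np.zeros((N_STATES, N_ACTIONS))       # action-value table
alpha, gamma, epsilon = 0.1, 0.9, 0.1     # learning rate, discount, exploration

for episode in range(500):
    state, done = 0, False
    while not done:
        # epsilon-greedy action selection (exploration vs. exploitation)
        action = rng.integers(N_ACTIONS) if rng.random() < epsilon else int(q[state].argmax())
        next_state, reward, done = step(state, action)
        # Q-learning update toward reward plus discounted best future value.
        q[state, action] += alpha * (reward + gamma * q[next_state].max() - q[state, action])
        state = next_state

print(np.round(q, 2))   # the learned values favor moving right toward the goal
```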
Thirdly, rigorous qualitative evaluation approaches need to be developed for systematic reviews of predictive and explainable artificial intelligence for neuroimaging applications. The Enhancing the Quality and Transparency of Health Research Network recommends that neuroimaging meta-analyses include the following information: research question; eligibility and exclusion criteria; flow diagram; and experimental characteristics such as sample size (participants), baseline vs. innovation methods (comparisons vs. interventions), dependent variable (participants), task type, baseline vs. innovation performance outcomes, and participant characteristics [58,59]. This study followed this recommendation with the following summary measures: research question (p. 003); eligibility and exclusion criteria (pp. 003–004); flow diagram (Figure 1); and experimental characteristics such as (1) sample size, baseline vs. innovation artificial intelligence methods, dependent variable, and task type (Table 1), (2) baseline vs. innovation performance outcomes (Table 2), (3) major demographic, health-related, and neuroimaging predictors (Table 3), and (4) cross validation and major control variable (Table 4). However, more systematic qualitative evaluation methods can be designed, and such a guideline would be expected to further improve the reliability of reviews of predictive and explainable artificial intelligence for neuroimaging applications.

5. Conclusions

In summary, this study reviewed the recent progress of predictive and explainable artificial intelligence for neuroimaging applications. The ranges of performance measures were reported to be 58–96 for accuracy (%), 66–97 for sensitivity (%), 76–98 for specificity (%), and 70–98 for the AUC (%). The support vector machine and the convolutional neural network registered the best performance (AUC 98%) for the classifications of low- vs. high-grade glioma and brain conditions, respectively. Similarly, the random forest delivered the best performance (root mean square error 1) for the regression of brain conditions. The following factors were discovered to be major predictors of brain image or associated disease: (demographic) age, education, sex; (health-related) alpha desynchronization, Alzheimer's disease stage, CD4, depression, distress, mild behavioral impairment, RNA sequencing; (neuroimaging) abnormal amyloid-β, amplitude of low-frequency fluctuation, cortical thickness, functional connectivity, fractal dimension measure, gray matter volume, left amygdala activity, left hippocampal volume, plasma neurofilament light, right cerebellum, regional homogeneity, right middle occipital gyrus, surface area, sub-cortical volume. Combining various types of explainable artificial intelligence with various types of information in the field of brain disease would bring significant progress in the field. Little research has been done, and more study is needed on reinforcement learning. In spite of these limitations, predictive and explainable artificial intelligence provide an effective, non-invasive decision support system for neuroimaging applications.

Author Contributions

S.L. and K.-S.L. designed the study, collected, analyzed, and interpreted the data, as well as wrote and reviewed the manuscript. S.L. and K.-S.L. approved the final version of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Korea Health Industry Development Institute grant (No. HI22C1302 (Korea Health Technology R&D Project)), funded by the Ministry of Health and Welfare of South Korea. The funders had no role in the design of the study, in the collection, analysis, and interpretation of the data, or in the writing and review of the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. World Health Organization. Over 1 in 3 People Affected by Neurological Conditions, the Leading Cause of Illness and Disability Worldwide. 14 March 2024 News Release. Available online: https://www.who.int/news/item/14-03-2024-over-1-in-3-people-affected-by-neurological-conditions--the-leading-cause-of-illness-and-disability-worldwide (accessed on 19 August 2024).
  2. Huang, Y.; Li, Y.; Pan, H.; Han, L. Global, regional, and national burden of neurological disorders in 204 countries and territories worldwide. J. Glob. Health 2023, 13, 04160. [Google Scholar] [CrossRef] [PubMed]
  3. GBD 2021 Nervous System Disorders Collaborators. Global, regional, and national burden of disorders affecting the nervous system, 1990-2021: A systematic analysis for the Global Burden of Disease Study 2021. Lancet Neurol. 2024, 23, 344–381. [Google Scholar] [CrossRef]
  4. Johns Hopkins Medicine. Neurological Disorders. 2024. Available online: https://www.hopkinsmedicine.org/health/conditions-and-diseases/neurological-disorders (accessed on 19 August 2024).
  5. Mayo Clinic. Neurology. 2024. Available online: https://www.mayoclinic.org/departments-centers/neurology/sections/conditions-treated/orc-20117075 (accessed on 19 August 2024).
  6. Cleveland Clinic. Brain Diseases. 2024. Available online: https://my.clevelandclinic.org/health/diseases/22934-brain-diseases (accessed on 19 August 2024).
  7. Brammer, M. The role of neuroimaging in diagnosis and personalized medicine-current position and likely future directions. Dialogues Clin. Neurosci. 2009, 11, 389–396. [Google Scholar] [CrossRef]
  8. Han, J.; Micheline, K. Data Mining: Concepts and Techniques, 2nd ed.; Elsevier: San Francisco, CA, USA, 2006. [Google Scholar]
  9. Kufel, J.; Bargieł-Łączek, K.; Kocot, S.; Koźlik, M.; Bartnikowska, W.; Janik, M.; Czogalik, Ł.; Dudek, P.; Magiera, M.; Lis, A.; et al. What is machine learning, artificial neural networks and deep learning?-Examples of practical applications in medicine. Diagnostics 2023, 13, 2582. [Google Scholar] [CrossRef] [PubMed]
  10. Wang, S.; Li, J.; Sun, L.; Cai, J.; Wang, S.; Zeng, L.; Sun, S. Application of machine learning to predict the occurrence of arrhythmia after acute myocardial infarction. BMC Med. Inform. Decis. Mak. 2021, 21, 301. [Google Scholar] [CrossRef] [PubMed]
  11. Zhang, Y.; Du, S.; Hu, T.; Xu, S.; Lu, H.; Xu, C.; Li, J.; Zhu, X. Establishment of a model for predicting preterm birth based on the machine learning algorithm. BMC Pregnancy Childbirth 2023, 23, 779. [Google Scholar] [CrossRef]
  12. Guerra, A.; Orton, M.R.; Wang, H.; Konidari, M.; Maes, K.; Papanikolaou, N.K.; Koh, D.M. Clinical application of machine learning models in patients with prostate cancer before prostatectomy. Cancer Imaging 2024, 24, 24. [Google Scholar] [CrossRef]
  13. Lee, K.-S.; Jang, J.-Y.; Yu, Y.-D.; Heo, J.S.; Han, H.-S.; Yoon, Y.-S.; Kang, C.M.; Hwang, H.K.; Kang, S. Usefulness of artificial intelligence for predicting recurrence following surgery for pancreatic cancer: Retrospective cohort study. Int. J. Surg. 2021, 93, 106050. [Google Scholar] [CrossRef]
  14. Lee, K.S.; Park, K.W. Social determinants of association among cerebrovascular disease, hearing loss and cognitive impairment in a middle-aged or old population: Recurrent-neural-network analysis of the Korean Longitudinal Study of Aging (2014–2016). Geriatr. Gerontol. Int. 2019, 19, 711–716. [Google Scholar] [CrossRef]
  15. Shaha, T.R.; Begum, M.; Uddin, J.; Torres, V.Y.; Iturriaga, J.A.; Ashraf, I.; Samad, M.A. Feature group partitioning: An approach for depression severity prediction with class balancing using machine learning algorithms. BMC Med. Res. Methodol. 2024, 24, 123. [Google Scholar] [CrossRef]
  16. Yu, Y.D.; Lee, K.S.; Kim, J.; Ryu, J.H.; Lee, J.G.; Lee, K.W.; Kim, B.W.; Kim, D.S.; Korean Organ Transplantation Registry Study Group. Artificial intelligence for predicting survival following deceased donor liver transplantation: Retrospective multi-center study. Int. J. Surg. 2022, 105, 106838. [Google Scholar] [CrossRef]
  17. Ryu, K.-J.; Yi, K.W.; Kim, Y.J.; Shin, J.H.; Hur, J.Y.; Kim, T.; Seo, J.B.; Lee, K.-S.; Park, H. Machine learning approaches to identify factors associated with women’s vasomotor symptoms using general hospital data. J. Korean Med. Sci. 2021, 36, e122. [Google Scholar] [CrossRef] [PubMed]
  18. Ryu, K.-J.; Yi, K.W.; Kim, Y.J.; Shin, J.H.; Hur, J.Y.; Kim, T.; Seo, J.B.; Lee, K.S.; Park, H. Artificial intelligence approaches to the determinants of women’s vaginal dryness using general hospital data. J. Obstet. Gynaecol. 2022, 42, 1518–1523. [Google Scholar] [CrossRef]
  19. Lee, Y.H.; Jeon, S.; Won, J.H.; Auh, Q.S.; Noh, Y.K. Automatic detection and visualization of temporomandibular joint effusion with deep neural network. Sci. Rep. 2024, 14, 18865. [Google Scholar] [CrossRef] [PubMed]
  20. Python package sklearn.ensemble. Random Forest Classifier. Available online: https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html (accessed on 19 August 2024).
  21. Shi, D.; Li, Y.; Zhang, H.; Yao, X.; Wang, S.; Wang, G.; Ren, K. Machine learning of schizophrenia detection with structural and functional neuroimaging. Dis. Markers 2021, 2021, 9963824. [Google Scholar] [CrossRef] [PubMed]
  22. Mellema, C.J.; Nguyen, K.P.; Treacher, A.; Montillo, A. Reproducible neuroimaging features for diagnosis of autism spectrum disorder with machine learning. Sci. Rep. 2022, 12, 3057. [Google Scholar] [CrossRef]
  23. Chaudhary, S.; Moon, S.; Lu, H. Fast efficient and accurate neuro-imaging denoising via supervised deep learning. Nat. Commun. 2022, 13, 5165. [Google Scholar] [CrossRef]
  24. Oppenheimer, C.W.; Bertocci, M.; Greenberg, T.; Chase, H.W.; Stiffler, R.; Aslam, H.A.; Lockovich, J.; Graur, S.; Bebko, G.; Phillips, M.L. Informing the study of suicidal thoughts and behaviors in distressed young adults: The use of a machine learning approach to identify neuroimaging psychiatric behavioral and demographic correlates. Psychiatry Res. Neuroimaging 2021, 317, 111386. [Google Scholar] [CrossRef]
  25. Zhou, E.; Wang, W.; Ma, S.; Xie, X.; Kang, L.; Xu, S.; Deng, Z.; Gong, Q.; Nie, Z.; Yao, L.; et al. Prediction of anxious depression using multimodal neuroimaging and machine learning. Neuroimage 2024, 285, 120499. [Google Scholar] [CrossRef]
  26. Borkar, K.; Chaturvedi, A.; Vinod, P.K.; Bapi, R.S. Ayu-Characterization of healthy aging from neuroimaging data with deep learning and rsfMRI. Front. Comput. Neurosci. 2022, 16, 940922. [Google Scholar] [CrossRef]
  27. Paul, R.H.; Cho, K.S.; Luckett, P.; Strain, J.F.; Belden, A.C.; Bolzenius, J.D.; Navid, J.; Garcia-Egan, P.M.; Cooley, S.A.; Wisch, J.K.; et al. Machine learning analysis reveals novel neuroimaging and clinical signatures of frailty in HIV. J. Acquir. Immune Defic. Syndr. 2020, 84, 414–421. [Google Scholar] [CrossRef] [PubMed]
  28. Battalapalli, D.; Vidyadharan, S.; Prabhakar Rao, B.V.V.S.N.; Yogeeswari, P.; Kesavadas, C.; Rajagopalan, V. Fractal dimension: Analyzing its potential as a neuroimaging biomarker for brain tumor diagnosis using machine learning. Front. Physiol. 2023, 14, 1201617. [Google Scholar] [CrossRef] [PubMed]
  29. Cumplido-Mayoral, I.; García-Prat, M.; Operto, G.; Falcon, C.; Shekari, M.; Cacciaglia, R.; Milà-Alomà, M.; Lorenzini, L.; Ingala, S.; Meije Wink, A.; et al. Biological brain age prediction using machine learning on structural neuroimaging data: Multi-cohort validation against biomarkers of Alzheimer’s disease and neurodegeneration stratified by sex. eLife 2023, 12, e81067. [Google Scholar] [CrossRef] [PubMed]
  30. Wei, X.; Wang, L.; Yu, F.; Lee, C.; Liu, N.; Ren, M.; Tu, J.; Zhou, H.; Shi, G.; Wang, X.; et al. Identifying the neural marker of chronic sciatica using multimodal neuroimaging and machine learning analyses. Front. Neurosci. 2022, 16, 1036487. [Google Scholar] [CrossRef]
  31. Luckett, P.H.; Olufawo, M.; Lamichhane, B.; Park, K.Y.; Dierker, D.; Verastegui, G.T.; Yang, P.; Kim, A.H.; Chheda, M.G.; Snyder, A.Z.; et al. Predicting survival in glioblastoma with multimodal neuroimaging and machine learning. J. Neurooncol. 2023, 164, 309–320. [Google Scholar] [CrossRef]
  32. Ieong, H.F.; Gao, F.; Yuan, Z. Machine learning: Assessing neurovascular signals in the prefrontal cortex with non-invasive bimodal electro-optical neuroimaging in opiate addiction. Sci. Rep. 2019, 9, 18262. [Google Scholar] [CrossRef]
  33. Abrol, A.; Fu, Z.; Salman, M.; Silva, R.; Du, Y.; Plis, S.; Calhoun, V. Deep learning encodes robust discriminative neuroimaging representations to outperform standard machine learning. Nat. Commun. 2021, 12, 353. [Google Scholar] [CrossRef]
  34. Henschel, L.; Conjeti, S.; Estrada, S.; Diers, K.; Fischl, B.; Reuter, M. FastSurfer—A fast and accurate deep learning based neuroimaging pipeline. Neuroimage 2020, 219, 117012. [Google Scholar] [CrossRef]
  35. Wen, Z.Y.; Zhang, Y.; Feng, M.H.; Wu, Y.C.; Fu, C.W.; Deng, K.; Lin, Q.Z.; Liu, B. Identification of discriminative neuroimaging markers for patients on hemodialysis with insomnia: A fractional amplitude of low frequency fluctuation-based machine learning analysis. BMC Psychiatry 2023, 23, 9. [Google Scholar] [CrossRef]
  36. Moguilner, S.; Whelan, R.; Adams, H.; Valcour, V.; Tagliazucchi, E.; Ibanez, A. Visual deep learning of unprocessed neuroimaging characterises dementia subtypes and generalises across non-stereotypic samples. EBioMedicine 2023, 90, 104540. [Google Scholar] [CrossRef]
  37. Kim, E.; Cho, H.H.; Cho, S.H.; Park, B.; Hong, J.; Shin, K.M.; Hwang, M.J.; You, S.K.; Lee, S.M. Accelerated synthetic MRI with deep learning-based reconstruction for pediatric neuroimaging. AJNR Am. J. Neuroradiol. 2022, 43, 1653–1659. [Google Scholar] [CrossRef] [PubMed]
  38. Yassin, W.; Nakatani, H.; Zhu, Y.; Kojima, M.; Owada, K.; Kuwabara, H.; Gonoi, W.; Aoki, Y.; Takao, H.; Natsubori, T.; et al. Machine-learning classification using neuroimaging data in schizophrenia autism ultra-high risk and first-episode psychosis. Transl. Psychiatry 2020, 10, 278. [Google Scholar] [CrossRef] [PubMed]
  39. Guan, S.; Khan, A.A.; Sikdar, S.; Chitnis, P.V. Limited-view and sparse photoacoustic tomography for neuroimaging with deep learning. Sci. Rep. 2020, 10, 8510. [Google Scholar] [CrossRef] [PubMed]
  40. Wang, M.; Zhao, S.W.; Wu, D.; Zhang, Y.H.; Han, Y.K.; Zhao, K.; Qi, T.; Liu, Y.; Cui, L.B.; Wei, Y. Transcriptomic and neuroimaging data integration enhances machine learning classification of schizophrenia. Psychoradiology 2024, 4, kkae005. [Google Scholar] [CrossRef]
  41. Karakis, R.; Gurkahraman, K.; Mitsis, G.D.; Boudrias, M.H. Deep learning prediction of motor performance in stroke individuals using neuroimaging data. J. Biomed. Inform. 2023, 141, 104357. [Google Scholar] [CrossRef]
  42. Tian, D.; Zeng, Z.; Sun, X.; Tong, Q.; Li, H.; He, H.; Gao, J.H.; He, Y.; Xia, M. A deep learning-based multisite neuroimage harmonization framework established with a traveling-subject dataset. Neuroimage 2022, 257, 119297. [Google Scholar] [CrossRef]
  43. Jollans, L.; Boyle, R.; Artiges, E.; Banaschewski, T.; Desrivieres, S.; Grigis, A.; Martinot, J.L.; Paus, T.; Smolka, M.N.; Walter, H.; et al. Quantifying performance of machine learning methods for neuroimaging data. Neuroimage 2019, 199, 351–365. [Google Scholar] [CrossRef]
  44. Kohoutova, L.; Heo, J.; Cha, S.; Lee, S.; Moon, T.; Wager, T.D.; Woo, C.W. Toward a unified framework for interpreting machine-learning models in neuroimaging. Nat. Protoc. 2020, 15, 1399–1435. [Google Scholar] [CrossRef] [PubMed]
  45. Gill, S.; Mouches, P.; Hu, S.; Rajashekar, D.; MacMaster, F.P.; Smith, E.E.; Forkert, N.D.; Ismail, Z.; Alzheimer’s Disease Neuroimaging Initiative. Using machine learning to predict dementia from neuropsychiatric symptom and neuroimaging data. J. Alzheimers Dis. 2020, 75, 277–288. [Google Scholar] [CrossRef]
  46. Taipale, M.; Tiihonen, J.; Korhonen, J.; Popovic, D.; Vaurio, O.; Lähteenvuo, M.; Lieslehto, J. Effects of substance use and antisocial personality on neuroimaging-based machine learning prediction of schizophrenia. Schizophr. Bull. 2023, 49, 1568–1578. [Google Scholar] [CrossRef]
  47. Sunil, G.; Gowtham, S.; Bose, A.; Harish, S.; Srinivasa, G. Graph neural network and machine learning analysis of functional neuroimaging for understanding schizophrenia. BMC Neurosci. 2024, 25, 2. [Google Scholar] [CrossRef] [PubMed]
  48. Arabi, H.; Bortolin, K.; Ginovart, N.; Garibotto, V.; Zaidi, H. Deep learning-guided joint attenuation and scatter correction in multitracer neuroimaging studies. Hum. Brain Mapp. 2020, 41, 3667–3679. [Google Scholar] [CrossRef] [PubMed]
  49. Vieira, S.; Gong, Q.Y.; Pinaya, W.H.L.; Scarpazza, C.; Tognin, S.; Crespo-Facorro, B.; Tordesillas-Gutierrez, D.; Ortiz-Garcia, V.; Setien-Suero, E.; Scheepers, F.E.; et al. Using machine learning and structural neuroimaging to detect first episode psychosis: Reconsidering the evidence. Schizophr. Bull. 2020, 46, 17–26. [Google Scholar] [CrossRef]
  50. Erdaş, Ç.B.; Sümer, E. A fully automated approach involving neuroimaging and deep learning for Parkinson’s disease detection and severity prediction. PeerJ Comput. Sci. 2023, 9, e1485. [Google Scholar] [CrossRef]
  51. Ribeiro, M.T.; Singh, S.; Guestrin, C. Why should I trust you? Explaining the predictions of any classifier. arXiv 2016, arXiv:1602.04938. [Google Scholar]
  52. Wang, Y.; Zou, S. Policy gradient method for robust reinforcement learning. arXiv 2022, arXiv:2205.07344. [Google Scholar]
  53. Gheisari, S.; Shariflou, S.; Phu, J.; Kennedy, P.J.; Agar, A.; Kalloniatis, M.; Golzan, S.M. A combined convolutional and recurrent neural network for enhanced glaucoma detection. Sci. Rep. 2021, 11, 1945. [Google Scholar] [CrossRef]
  54. Silver, D.; Huang, A.; Maddison, C.J.; Guez, A.; Sifre, L.; van den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. Mastering the game of Go with deep neural networks and tree search. Nature 2016, 529, 484–489. [Google Scholar] [CrossRef] [PubMed]
  55. Hambly, B.; Xu, R.; Yang, H. Recent advances in reinforcement learning in finance. arXiv 2022, arXiv:2112.04553. [Google Scholar]
  56. Yu, C.; Liu, J.; Nemati, S. Reinforcement learning in healthcare: A survey. arXiv 2020, arXiv:1908.08796. [Google Scholar] [CrossRef]
  57. Puiutta, E.; Veith, E.M.S.P. Explainable reinforcement learning: A survey. arXiv 2020, arXiv:2005.06247. [Google Scholar]
  58. Enhancing the Quality and Transparency of Health Research Network. Reporting Guidelines. 2024. Available online: https://www.equator-network.org/reporting-guidelines/ten-simple-rules-for-neuroimaging-meta-analysis/ (accessed on 19 August 2024).
  59. Müller, V.I.; Cieslik, E.C.; Laird, A.R.; Fox, P.T.; Radua, J.; Mataix-Cols, D.; Tench, C.R.; Yarkoni, T.; Nichols, T.E.; Turkeltaub, P.E.; et al. Ten simple rules for neuroimaging meta-analysis. Neurosci. Biobehav. Rev. 2018, 84, 151–161. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Flow diagram.
Table 1. Summary—Sample Size, Method, and Dependent Variable.
Study | Sample Size | Method-Baseline | Method-Innovation | Dependent Variable | Type
21 | 109 | | Global Signal Regression | Schizophrenia | Classification
22 | 915 | CNN-Dense | CNN-Dense SDA | ASD | Classification
23 | 500 | | Unet | Brain Image | Generation
24 | 78 | | LASSO | Suicidal Thought | Regression
25 | 387 | | RF | Anxiety in MDD | Classification
26 | 638 | | CNN-VGG | Four Brain Age Groups | Classification
27 | 105 | | Boosting | Frailty in HIV | Classification
28 | 42 | | SVM | Glioma | Classification
29 | 22,661 | | Boosting | Brain Age | Regression
30 | 70 | | SVM | Chronic Sciatica | Classification
31 | 133 | | ANN | Glioblastoma Survival | Classification
32 | 19 | | LDA | Opiate Addiction | Correlation
33 | 10,000 | LDA LR SVM * | CNN-Alex | 10 Brain Age Groups | Classification
34 | 160 | | CNN-FastSurfer | Brain Condition | Classification
35 | 84 | | SVM | Insomnia in Hemodialysis | Classification
36 | 3000 | | CNN-Dense | Dementia | Classification
37 | 47 | | CNN | Pediatric Brain Tissues | Generation
38 | 206 | | DT LR RF * SVM | ASD and Schizophrenia | Classification
39 | 500 | | Unet | Brain Vascular | Generation
40 | 103 | ANN Uni-Modal | ANN Multi-Modal | Schizophrenia | Classification
41 | 154 | DT KN NB RF SVM * | CNN-Residual | Post-Stroke Motor | Classification
42 | 81 | | CNN-Residual | Brain Image | Generation
43 | 400 | | EN RF * RR | Brain Condition | Regression
44 | 59 | SVM | CNN | Brain Condition | Classification
45 | 341 | | DT | Dementia | Classification
46 | 688 | | SVM | Schizophrenia | Classification
47 | 172 | DT * KN LR SVM | Graph Neural Network | Schizophrenia | Classification
48 | 180 | | CNN | Brain Image | Generation
49 | 956 | | CNN | Psychosis | Classification
50 | 1130 | | CNN-Alex | Parkinson's Disease | Classification
Note: * Best Model.
Table 2. Summary—Model Performance.
StudyPerformance-Baseline Performance-Comparison
AccSenSpeAUC **AccSenSpeAUC **
21 83699485
22 86 93
23 70
24 NA
25 80
26 73
27 66 71
28 93979898
29 4
30 90
31 91
32 83
3351 58
34 9696
35 82 82
36 95969595
37 90
38 76 83
39 93
4055 6971 92
4191 9192 92
42 NA
43 1
449294919796969598
45 84 86
46 60 84
477883727980847680
48 97
49 70
50 96 95
Min 58667670
Max 96979898
Note: ** Correlation (correlation task), R-square (regression), or Structural Similarity Index Measure (generation). † F1 (classification), root mean square error (regression), or R-square (generation).
Table 3. Summary—Major Predictor.
Study | Predictor Demographic | Predictor Health | Predictor Neuroimaging
21 | | |
22 | | |
23 | | |
24 | Age Education | Depression Distress | LAA
25 | | | GMV ALFF RH FC
26 | | | FC
27 | Sex | Depression CD4 | Neuroimaging
28 | | | FDM
29 | Sex | Alzheimer's Disease Stage | AAB APOE-ε4 PNL
30 | | | FC ALFF SA Combination
31 | Age Sex | | CT FC
32 | | Alpha Desynchronization | FC
33 | | |
34 | | |
35 | | | ALFF RMCG RC
36 | | |
37 | | |
38 | | | CT SCV
39 | | |
40 | | RNA Sequencing | Neuroimaging
41 | Age Sex | | Neuroimaging
42 | | | GMV
43 | | | GMV
44 | | |
45 | | Mild Behavioral Impairment | LHV
46 | | |
47 | | |
48 | | |
49 | | | GMV CT
50 | | |
Table 4. Summary—Cross Validation and Major Control Variable.
StudySample SizeTrainingValidationTestN-Fold CV *Major Control Variable
21102979989141 Age Sex
229154882441833Sex
23500500500
2478708 10Emotion Physiology
2538734839 10
266384081021285Age Sex
271058421 5
2842236135
2924,97520,3952266231410Age
3016,10015,870230 70Age Sex Education Occupation
311331321 133
321919 Age Education IQ
3312,31410,00011571157 Age Gender
3416014020 Age Gender
3584831 84Age Sex Education
3630002400600 5
37474747
3820616541 5Age Sex
39500500500
401038320 5Age Sex
4115412430 5
42818181
4340036040 10
4459527 8
4534030634 10Age Education
4668861969 10Age Sex
4717215517 10
4818012832205
4995686096 10
5011301020110 10
Note: * N-fold cross-validation for training-validation sets. † Leave-one-out cross-validation (every single element serves as the validation set).
Table 5. Abbreviation.
Method
ANN: Artificial Neural Network
CNN: Convolutional Neural Network
DT: Decision Tree
EN: Elastic Net
KN: K-Nearest Neighbor
LASSO: Least Absolute Shrinkage and Selection Operator
LDA: Linear Discriminant Analysis
LR: Logistic Regression
NB: Naïve Bayes
RF: Random Forest
RR: Ridge Regression
SDA: Supervised Domain Adaptation
SVM: Support Vector Machine
VGG: Visual Geometry Group
Dependent Variable
ASC: Attenuation-Scatter Correction
ASD: Autism Spectrum Disorder
HIV: Human Immunodeficiency Virus
MDD: Major Depressive Disorder
Model Performance
Acc: Accuracy
Sen: Sensitivity
Spe: Specificity
AUC: Area Under the Curve
Predictor Neuroimaging
AAB: Abnormal Amyloid-β
ALFF: Amplitude of Low-Frequency Fluctuation
CT: Cortical Thickness
FC: Functional Connectivity
FDM: Fractal Dimension Measure
GMV: Gray Matter Volume
LAA: Left Amygdala Activity
LHV: Left Hippocampal Volume
PNL: Plasma Neurofilament Light
RC: Right Cerebellum
RH: Regional Homogeneity
RMCG: Right Middle Occipital Gyrus
SA: Surface Area
SCV: Sub-Cortical Volume
