Search Results (6,953)

Search Parameters:
Keywords = classification problem

17 pages, 3865 KiB  
Article
Diagnostic Model for Transformer Core Loosening Faults Based on the Gram Angle Field and Multi-Head Attention Mechanism
by Junyu Chen, Nana Duan, Xikun Zhou and Ziyu Wang
Appl. Sci. 2024, 14(23), 10906; https://doi.org/10.3390/app142310906 - 25 Nov 2024
Viewed by 136
Abstract
Aiming to address the difficulty of selecting characteristic quantities and the reliance on manual experience in diagnosing transformer core loosening faults, a diagnosis method based on the Gram angle field (GAF), a residual network (ResNet), and a multi-head attention mechanism (MA) is proposed. This method learns effective fault features directly from GAF images, without manual feature extraction. Firstly, the vibration signal is denoised using ensemble empirical mode decomposition (EEMD), and the one-dimensional temporal signal is converted into a two-dimensional image using the GAF to generate an image dataset. Subsequently, the image set is input into the ResNet to train the model, and the ResNet output is weighted and summed by a multi-head attention module to obtain a deep feature representation of the image signal. Finally, the classification probabilities of the different iron-core loosening states of the transformer are output through fully connected and Softmax layers. The experimental results show that the proposed diagnostic model identifies loose transformer iron cores with an accuracy of 99.52% and can effectively identify loosening at different positions, making it suitable for the identification and diagnosis of loose iron cores in transformers. Compared with traditional methods, it offers better fault classification performance and noise resistance.
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
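The GAF encoding step described in the abstract (converting a 1-D vibration signal into a 2-D image) can be sketched in a few lines. This is a minimal illustration of the standard Gramian angular summation/difference fields, not the authors' implementation:

```python
import numpy as np

def gramian_angular_fields(x):
    """Convert a 1-D signal into GASF/GADF images.

    The series is rescaled to [-1, 1], each value is encoded as an
    angle phi = arccos(x), and the fields are built from pairwise
    trigonometric sums and differences of those angles.
    """
    x = np.asarray(x, dtype=float)
    # Min-max rescale to [-1, 1] so arccos is defined everywhere.
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    # GASF[i, j] = cos(phi_i + phi_j); GADF[i, j] = sin(phi_i - phi_j)
    gasf = np.cos(phi[:, None] + phi[None, :])
    gadf = np.sin(phi[:, None] - phi[None, :])
    return gasf, gadf

gasf, gadf = gramian_angular_fields(np.sin(np.linspace(0, 4 * np.pi, 64)))
print(gasf.shape, gadf.shape)  # (64, 64) (64, 64)
```

Each pixel encodes a trigonometric interaction between two time points, so temporal correlations in the signal become spatial texture that a CNN such as ResNet can learn from.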
Figures: (1) GAF coding schematic diagram; (2) Residual module; (3) The GAF-ResNet-MA diagnostic model; (4) The GAF-ResNet-MA diagnostic process; (5) The transformer vibration test platform; (6) Time-domain waveforms and spectra before and after reconstruction; (7) GASF and GADF images of iron-core loosening at different points; (8) t-SNE visualization; (9) Confusion matrix; (10) Fault recognition accuracy of different models.
26 pages, 5179 KiB  
Article
A Study of Potential Applications of Student Emotion Recognition in Primary and Secondary Classrooms
by Yimei Huang, Wei Deng and Taojie Xu
Appl. Sci. 2024, 14(23), 10875; https://doi.org/10.3390/app142310875 - 24 Nov 2024
Viewed by 239
Abstract
Emotion recognition is critical to understanding students’ emotional states. However, problems such as crowded classroom environments, changing light, and occlusion often reduce recognition accuracy. This study proposes an emotion recognition algorithm designed specifically for classroom environments. Firstly, the study adds a self-designed MCC module and the Wise-IoU loss function to make object detection in the YOLOv8 model more accurate and efficient; compared with the native YOLOv8x, this reduces the parameters by 16% and accelerates inference by 20%. Secondly, to address the intricacies of the classroom setting and the specific requirements of the emotion recognition task, a multi-channel emotion recognition network (MultiEmoNet) was developed. This network fuses skeletal, environmental, and facial information, and introduces a center loss function and an attention module (AAM) to enhance the feature extraction capability. The experimental results show that MultiEmoNet achieves a classification accuracy of 91.4% on a self-built classroom student emotion dataset, a 10% improvement over the single-channel classification algorithm. In addition, this study demonstrates the dynamic changes in students’ emotions in the classroom through visual analysis, which helps teachers grasp students’ emotional states in real time. This paper validates the potential of multi-channel information-fusion deep learning techniques for classroom teaching analysis and provides new ideas and tools for future improvements to emotion recognition techniques.
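The Wise-IoU loss mentioned above reweights the plain intersection-over-union between predicted and ground-truth boxes; its exact focusing weight is not given here, but the underlying IoU quantity can be sketched as follows (illustrative, not the paper's code):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two 2x2 boxes overlapping in a 1x1 square: intersection 1, union 7.
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 0.142857...
```

An IoU-based loss is typically 1 − IoU multiplied by some focusing term; Wise-IoU's contribution is in how that multiplier is chosen, which is beyond this sketch.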
25 pages, 7898 KiB  
Article
Rolling Bearing Fault Diagnosis Based on Optimized VMD Combining Signal Features and Improved CNN
by Yingyong Zou, Xingkui Zhang, Wenzhuo Zhao and Tao Liu
World Electr. Veh. J. 2024, 15(12), 544; https://doi.org/10.3390/wevj15120544 - 22 Nov 2024
Viewed by 295
Abstract
The vibration signals of rolling bearings in high-speed rail traction motors are often contaminated by noise when a fault is present, which makes it very difficult to extract fault features during diagnosis and hinders fault classification. To address this, the article proposes a rolling bearing fault diagnosis method based on optimized variational mode decomposition (VMD) combined with signal features and an improved convolutional neural network (CNN). The golden jackal optimization (GJO) algorithm is employed to optimize the key parameters of the VMD, enabling effective signal decomposition. The decomposed signals are then filtered and reconstructed using criteria based on kurtosis and their interrelationship with the original signal. The time-domain features of the reconstructed signals are computed and assembled into feature vectors, which serve as inputs to the deep learning network; a CNN combined with a support vector machine (SVM) is used for feature extraction and fault classification. The experimental results show that the method can effectively extract fault features from noise-covered signals, and its accuracy is significantly improved compared with traditional methods.
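The kurtosis-and-interrelationship screening of decomposed modes described above can be sketched as follows; the thresholds and the choice of Pearson correlation as the interrelationship measure are assumptions for illustration, not values taken from the paper:

```python
import numpy as np
from scipy.stats import kurtosis, pearsonr

def reconstruct_from_imfs(signal, imfs, kurt_thresh=3.0, corr_thresh=0.3):
    """Keep decomposed modes that are both impulsive (high kurtosis)
    and correlated with the raw signal, then sum them back together.

    Thresholds here are illustrative, not taken from the paper.
    """
    keep = []
    for imf in imfs:
        k = kurtosis(imf, fisher=False)    # Pearson kurtosis (normal = 3)
        r = abs(pearsonr(signal, imf)[0])  # interrelationship criterion
        if k >= kurt_thresh and r >= corr_thresh:
            keep.append(imf)
    return np.sum(keep, axis=0) if keep else np.zeros_like(signal)
```

Bearing faults produce periodic impulses, so modes carrying fault energy tend to be leptokurtic; the correlation test discards modes that are impulsive but unrelated to the measured signal.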
Figures: (1) Flowchart of GJO-optimized VMD; (2) CNN-SVM structure diagram; (3) Diagnostic flowchart; (4) Comparison of algorithms; (5) Time-domain and frequency-domain plots; (6) Confusion matrix; (7) Sample classification results; (8) Comparison of CNN, LSTM, BILSTM, CNN-SVM, VMD-CNN, VMD-CNN-SVM, and GJO-VMD-CNN-SVM; (9) Diagram of experimental equipment; (10) Visualization of training results; (11) Confusion matrix; (12) Sample classification results; (13) Comparison of CNN, LSTM, BILSTM, CNN-SVM, VMD-CNN, VMD-CNN-SVM, and GJO-VMD-CNN-SVM; (14) Time-domain waveforms of the bearing under normal, inner race fault, outer race fault, and rolling element fault conditions; (15) Confusion matrices for SNR = −4, −2, 2, and 4.
11 pages, 724 KiB  
Article
Dermatology-Related Emergency Department Visits in Tertiary Care Center in Riyadh, Saudi Arabia: A Descriptive Study
by Abdullah Alshibani, Saif Osama Alagha, Abdulmohsen Jameel Alshammari, Khaled Jameel Alshammari, Abdulelah Saeed Alghamdi and Khalid Nabil Nagshabandi
Healthcare 2024, 12(23), 2332; https://doi.org/10.3390/healthcare12232332 - 22 Nov 2024
Viewed by 297
Abstract
Background/Objectives: Dermatological complaints are commonly seen in the emergency department (ED) setting and may be attributed to infectious, inflammatory, allergic, hypersensitivity, or traumatic processes, yet few studies in Saudi Arabia have addressed this topic. This study therefore aimed to investigate the most common dermatology-related ED encounters in a large tertiary care center in Riyadh, Saudi Arabia, and to estimate their incidence. Methods: This was a retrospective cohort study conducted in the ED of King Abdulaziz Medical City, a tertiary care center in Riyadh, Saudi Arabia. Data included all patients with dermatology-related ED visits during 2022–2023. Demographic information such as age and sex was collected. The International Classification of Diseases, 10th Revision (ICD-10) was used for the classification of diagnoses. Results: A total of 11,443 patients were included in the study, with male patients making up the majority (54.9%). The mean age upon diagnosis was 22.4 ± 23.2 years. More than half of the patients (55.3%) were diagnosed during childhood (<18 years), while the proportions at older ages declined gradually. Average monthly presentations ranged from 400 to 560. Rash and non-specific skin eruptions (16%), cellulitis (13.6%), and urticaria (12.2%) were the most frequent dermatological emergencies. Conclusions: This study examined the dermatological conditions commonly seen in the emergency department. The findings highlight the range of dermatological diseases typically seen in the ED. Addressing these prevalent disorders will enhance ER physicians’ understanding and management of common dermatological problems.
Figures: (1) The distribution of patients’ age upon the diagnosis of dermatological emergencies (years); (2) The number of dermatology-related emergency department visits by month.
17 pages, 3618 KiB  
Article
Umbilical Cord Mesenchymal Stem Cell Secretome: A Potential Regulator of B Cells in Systemic Lupus Erythematosus
by Adelina Yordanova, Mariana Ivanova, Kalina Tumangelova-Yuzeir, Alexander Angelov, Stanimir Kyurkchiev, Kalina Belemezova, Ekaterina Kurteva, Dobroslav Kyurkchiev and Ekaterina Ivanova-Todorova
Int. J. Mol. Sci. 2024, 25(23), 12515; https://doi.org/10.3390/ijms252312515 - 21 Nov 2024
Viewed by 252
Abstract
Autoimmune diseases represent a severe personal and healthcare problem that calls for novel therapeutic solutions. Mesenchymal stem cells (MSCs) are multipotent cells with interesting cell biology and promising therapeutic potential. The immunoregulatory effects of secretory factors produced by umbilical cord mesenchymal stem cells (UC-MSCs) were assessed on B lymphocytes from 17 patients with systemic lupus erythematosus (SLE), as defined by the 2019 European Alliance of Associations for Rheumatology (EULAR)/American College of Rheumatology (ACR) classification criteria for SLE, and 10 healthy volunteers (HVs). Peripheral blood mononuclear cells (PBMCs) from patients and HVs were cultured in a UC-MSC-conditioned medium (UC-MSCcm) and a control medium. Flow cytometry was used to detect the surface expression of CD80, CD86, BR3, CD40, PD-1, and HLA-DR on CD19+ B cells and assess the percentage of B cells in early and late apoptosis. An enzyme-linked immunosorbent assay (ELISA) quantified the production of BAFF, IDO, and PGE2 in PBMCs and UC-MSCs. Under UC-MSCcm influence, the percentage and mean fluorescence intensity (MFI) of CD19+BR3+ cells were reduced in both SLE patients and HVs. Regarding the effects of the MSC secretome on B cells in lupus patients, we observed a decrease in CD40 MFI and a reduced percentage of CD19+PD-1+ and CD19+HLA-DR+ cells. In contrast, in the B cells of healthy participants, we found an increased percentage of CD19+CD80+ cells and decreased CD80 MFI, along with a decrease in CD40 MFI and the percentage of CD19+PD-1+ cells. The UC-MSCcm had a minimal effect on B-cell apoptosis. The incubation of patients’ PBMCs with the UC-MSCcm increased PGE2 levels compared to the control medium. This study provides new insights into the impact of the MSC secretome on the key molecules involved in B-cell activation, antigen presentation, and survival, potentially guiding the development of future SLE treatments.
Figures: (1) Characterization of the isolated UC-MSCs: fibroblast-like morphology, flow cytometric expression of CD90, CD73, and CD105 with absence of hematopoietic markers, and adipogenic and osteogenic differentiation (Von Kossa and Alizarin red S staining); (2) Phase contrast images of SLE patients’ PBMCs after 72 h of culture in control medium versus UC-MSCcm, the latter with a significantly higher degree of cell cluster formation; (3) Changes in percentage and MFI of CD19+CD80+, CD19+CD86+, and CD19+CD268+ (BR3) B cells in SLE patients (n = 17) and HVs (n = 10); (4) Flow cytometric dot plots of CD19+ B lymphocytes expressing the BR3 receptor in control medium versus UC-MSCcm, showing a homogeneous B-cell population with reduced BR3 expression under the MSC secretome; (5) Changes in percentage and MFI of CD19+CD40+, CD19+PD-1+, and CD19+HLA-DR+ B cells in SLE patients and HVs; (6) Percentage changes of CD19+Annexin V+ and CD19+PI+ cells; (7) Changes in PGE2 levels (pg/mL) after culturing PBMCs in UC-MSCcm versus control medium.
15 pages, 772 KiB  
Article
MFAN: Multi-Feature Attention Network for Breast Cancer Classification
by Inzamam Mashood Nasir, Masad A. Alrasheedi and Nasser Aedh Alreshidi
Mathematics 2024, 12(23), 3639; https://doi.org/10.3390/math12233639 - 21 Nov 2024
Viewed by 298
Abstract
Cancer-related diseases are among the major health hazards affecting individuals globally, especially breast cancer. Cases of breast cancer among women persist, and the early indicators of the disease often go unnoticed. Breast cancer can be treated effectively if it is detected correctly and classified at a preliminary stage. Yet direct diagnosis from mammogram and ultrasound images is an intricate, time-consuming process that is best accomplished with the input of a trained professional. Despite various AI-based strategies in the literature, similarity between cancerous and non-cancerous regions, irrelevant feature extraction, and poorly trained models remain persistent problems. This paper presents a new Multi-Feature Attention Network (MFAN) for breast cancer classification that works well for small lesions and similar contexts. MFAN has two important modules: the McSCAM and the GLAM for feature fusion. During channel fusion, McSCAM preserves the spatial characteristics and extracts high-order statistical information, while the GLAM reduces the scale differences among the fused features. The global and local attention branches also help the network effectively identify small lesion regions by obtaining global and local information. Based on the experimental results on two public datasets, the proposed MFAN is a powerful classification model that can classify breast cancer subtypes while providing a solution to the current problems in breast cancer diagnosis.
(This article belongs to the Special Issue Application of Artificial Intelligence in Decision Making)
Figures: (1) Simplified architecture of the proposed model for breast cancer classification; (2) Architecture of the proposed MFAN model for multi-feature and multi-scale classification; (3) Architecture of the proposed McSCAM module; (4) Architecture of the GLAM with global and local attention branches; (5) Comparative analysis of evaluation metrics for selected pretrained models and the proposed model on D1; (6) Comparative analysis of evaluation metrics for selected pretrained models and the proposed model on D2.
13 pages, 275 KiB  
Article
Text-Mining-Based Non-Face-to-Face Counseling Data Classification and Management System
by Woncheol Park, Seungmin Oh and Seonghyun Park
Appl. Sci. 2024, 14(22), 10747; https://doi.org/10.3390/app142210747 - 20 Nov 2024
Viewed by 356
Abstract
This study proposes a system for analyzing non-face-to-face counseling data using text-mining techniques to assess psychological states and automatically classify them into predefined categories. The system addresses the challenge of understanding internal issues that may be difficult to express in traditional face-to-face counseling. To solve this problem, a counseling management system based on text mining was developed. In the experiment, we combined TF-IDF and Word Embedding techniques to process and classify client counseling data into five major categories: school, friends, personality, appearance, and family. The classification performance achieved high accuracy and F1-Score, demonstrating the system’s effectiveness in understanding and categorizing clients’ emotions and psychological states. This system offers a structured approach to analyzing counseling data, providing counselors with a foundation for recommending personalized counseling treatments. The findings of this study suggest that in-depth analysis and classification of counseling data can enhance the quality of counseling, even in non-face-to-face environments, offering more efficient and tailored solutions.
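The TF-IDF half of such a pipeline can be sketched with scikit-learn. The five category names follow the abstract, but the toy corpus, the logistic regression classifier, and the omission of the word-embedding branch are all assumptions made for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus; the real system uses counseling transcripts across five
# categories (school, friends, personality, appearance, family).
texts = [
    "my grades at school keep slipping",
    "homework and exams stress me out at school",
    "I argued with my best friend again",
    "my friends ignore me at lunch",
    "I worry about how my family treats me",
    "fights at home with my parents every night",
]
labels = ["school", "school", "friends", "friends", "family", "family"]

# TF-IDF features feeding a linear classifier; the paper additionally
# fuses word-embedding features, which this sketch leaves out.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
print(clf.predict(["my friend will not talk to me"]))
```

TF-IDF captures which terms are distinctive for a category; combining it with embeddings, as the study does, adds semantic similarity between words the classifier has never seen together.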
Figures: (1) System configuration diagram; (2) Process of the proposed system.
15 pages, 2189 KiB  
Article
Entropy-Based Ensemble of Convolutional Neural Networks for Clothes Texture Pattern Recognition
by Reham Al-Majed and Muhammad Hussain
Appl. Sci. 2024, 14(22), 10730; https://doi.org/10.3390/app142210730 - 20 Nov 2024
Viewed by 338
Abstract
Automatic clothes pattern recognition is important for assisting visually impaired people and for real-world applications such as e-commerce and personal fashion recommendation systems, and it has attracted increasing interest from researchers. It is a challenging texture classification problem in that even images of the same texture class exhibit a high degree of intraclass variation. Moreover, images of clothes patterns may be taken in an unconstrained illumination environment. Machine learning methods proposed for this problem mostly rely on handcrafted features and traditional classification methods, while existing deep learning approaches have yielded poor recognition performance. We propose a deep learning method based on an ensemble of convolutional neural networks that requires no feature engineering while extracting robust local and global features of clothes patterns. The ensemble classifier employs a pre-trained ResNet50 with a non-local (NL) block, a squeeze-and-excitation (SE) block, and a coordinate attention (CA) block as base learners. To fuse the individual decisions of the base learners, we introduce a simple and effective fusion technique based on entropy voting, which incorporates the uncertainty in each base learner's decision. We validate the proposed method on benchmark datasets of clothes patterns with six categories: solid, striped, checkered, dotted, zigzag, and floral. The proposed method achieves promising results with limited computational and data resources: in terms of accuracy, 98.18% on the GoogleClothingDataset and 96.03% on the CCYN dataset.
(This article belongs to the Special Issue Application of Artificial Intelligence in Image Processing)
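The entropy-voting idea (down-weighting base learners whose softmax output is uncertain) can be sketched as follows; the precise weighting rule used here is an illustrative assumption, not necessarily the paper's exact fusion formula:

```python
import numpy as np

def entropy_vote(prob_vectors, eps=1e-12):
    """Fuse softmax outputs of base learners by entropy-weighted voting.

    A learner with a peaked (low-entropy) prediction gets more weight
    than one whose prediction is close to uniform. The weighting scheme
    is illustrative; the paper's exact rule may differ.
    """
    probs = np.asarray(prob_vectors, dtype=float)       # (n_learners, n_classes)
    ent = -np.sum(probs * np.log(probs + eps), axis=1)  # Shannon entropy per learner
    max_ent = np.log(probs.shape[1])                    # entropy of the uniform distribution
    weights = 1.0 - ent / max_ent                       # confident learner -> weight near 1
    fused = np.sum(weights[:, None] * probs, axis=0)
    return int(np.argmax(fused)), fused / fused.sum()

# Learner 1 is confident about class 2; learner 2 is nearly uniform.
label, fused = entropy_vote([[0.05, 0.05, 0.90], [0.40, 0.35, 0.25]])
print(label)  # 2
```

Unlike plain averaging, this fusion lets one confident base learner outvote several uncertain ones, which is the behavior the abstract attributes to entropy voting.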
Figures: (1) Examples of six classes of clothes patterns: checkered, floral, dotted, solid, striped, and zigzag; (2) High-level depiction of the architecture of the proposed ensemble classifier; (3) Detail of ResNet50 architecture; (4) Architecture of the bottleneck residual block; (5) ResNet50 with SE blocks; (6) ResNet50 with CA block; (7) ResNet50 with NL block; (8) Accuracy of the two ensemble classifiers and their base learners; (9) Venn diagrams of base learners’ errors for the two ensemble classifiers; (10) Confusion matrix showing the decision making of the ensemble classifier; (11) Performance of the base learners for each class.
13 pages, 1736 KiB  
Article
Reexamination of the Sida Micrantha Mosaic Virus and Sida Mottle Virus Complexes: Classification Status, Diversity, Cognate DNA–B Components, and Host Spectrum
by Marcos Silva de Queiroz-Ferreira, Luciane de Nazaré Almeida dos Reis, Maria Esther de Noronha Fonseca, Felipe Fochat Silva Melo, Ailton Reis, Leonardo Silva Boiteux and Rita de Cássia Pereira-Carvalho
Viruses 2024, 16(11), 1796; https://doi.org/10.3390/v16111796 - 19 Nov 2024
Viewed by 389
Abstract
Sida mottle virus (SiMoV) and Sida micrantha mosaic virus (SiMMV) are major Brazilian begomoviruses (Geminiviridae). However, the range of DNA–A identity of isolates of these viruses (81–100%) is not in agreement with the current criteria for Begomovirus species demarcation (<91%). To [...] Read more.
Sida mottle virus (SiMoV) and Sida micrantha mosaic virus (SiMMV) are major Brazilian begomoviruses (Geminiviridae). However, the range of DNA–A identity of isolates of these viruses (81–100%) is not in agreement with the current criteria for Begomovirus species demarcation (<91%). To clarify this putative classification problem, we performed a comprehensive set of molecular analyses with all 53 publicly available isolates (with complete DNA–A genomes) designated as either SiMoV or SiMMV (including novel isolates obtained herein from nationwide metagenomics-based studies). Two well-defined phylogenetic clusters were identified. The SiMMV complex (n = 47) comprises a wide range of strains (with a continuum variation of 88.8–100% identity) infecting members of five botanical families (Malvaceae, Solanaceae, Fabaceae, Oxalidaceae, and Passifloraceae). The SiMoV group now comprises eight isolates (90–100% identity) restricted to Malvaceae hosts, including one former reference SiMMV isolate (gb|NC_077711) and SP77 (gb|FN557522; erroneously named as “true SiMMV”). Iteron analyses of metagenomics-derived information allowed for the discovery of the missing DNA–B cognate of SiMoV (93.5% intergenic region identity), confirming its bipartite nature. Henceforth, the correct identification of SiMoV and SiMMV isolates will be a crucial element for effective classical and biotech resistance breeding of the viral host species. Full article
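The <91% DNA–A identity criterion for Begomovirus species demarcation cited above can be illustrated with a simple pairwise identity on pre-aligned sequences. This is a toy sketch with made-up fragments; the actual analyses use Sequence Demarcation Tool comparisons of complete DNA–A genomes:

```python
def pairwise_identity(seq_a, seq_b):
    """Percent identity of two aligned, equal-length sequences
    (gap characters simply count as mismatches here)."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a.upper(), seq_b.upper()))
    return 100.0 * matches / len(seq_a)

# Toy aligned fragments (not real begomovirus sequences):
ident = pairwise_identity("ATGGCCTTAGC", "ATGGACTTAGC")
print(round(ident, 1))   # 90.9
print(ident < 91.0)      # below the species demarcation threshold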
(This article belongs to the Section Viruses of Plants, Fungi and Protozoa)
Figure 1. Phylogenetic tree and Sequence Demarcation Tool (SDT) analysis of DNA–A components showing the phylogenetic distances among Sida mottle virus (SiMoV = Begomovirus sidavariati) and Sida micrantha mosaic virus (SiMMV = Begomovirus sidamicranthae) isolates. These isolates are identified by their accession numbers in GenBank. The accession numbers of isolates classified/named as SiMMV in GenBank are the following: JX415195, JX415194, JX41518, MT733814, HM357459, MT733803, KU852503, KC706537, KC706535, KX691401, PQ240611, KY650722, EU908733, FN557522 (=SP77 isolate), AJ557450 (NC_077711 = A1B3 isolate), FJ686693, MT214092, KC706536, AJ557451 (=A2B2 isolate), MF957204, KX691410, MT103982, MT103980, MT103983, KY650717, MT103979, MT103981, MT103986, MT103974, MT103985, MT103984, KX348162, KX348161, KX348163, KX348157, KX348160, KX348164, KX348159, KX348158, KX348155, KX348156, FN436005, FN436003, HM585433, HM585431, HM585439, and HM585437. Two isolates: FN557522 (=SP77 isolate) and AJ557450 (NC_077711 = A1B3 isolate) were reclassified as SiMoV in the present work. The isolates classified/named as SiMoV are the following: PQ240619, PQ240618, PQ240616, AY090555 (=NC_004637), JX871378, and JX871377. Tomato leaf curl virus (FM210277) was used as the outgroup. GenBank accessions PQ240619, PQ240618, and PQ240616 were characterized in the present work.
15 pages, 4684 KiB  
Article
A Convolutional Neural Network-Based Method for Distinguishing the Flow Patterns of Gas-Liquid Two-Phase Flow in the Annulus
by Chen Cheng, Weixia Yang, Xiaoya Feng, Yarui Zhao and Yubin Su
Processes 2024, 12(11), 2596; https://doi.org/10.3390/pr12112596 - 19 Nov 2024
Viewed by 294
Abstract
In order to improve the accuracy and efficiency of flow pattern recognition and to solve the problem of the real-time monitoring of flow patterns, which is difficult to achieve with traditional visual recognition methods, this study introduced a flow pattern recognition method based [...] Read more.
In order to improve the accuracy and efficiency of flow pattern recognition and to solve the problem of the real-time monitoring of flow patterns, which is difficult to achieve with traditional visual recognition methods, this study introduced a flow pattern recognition method based on a convolutional neural network (CNN), which can recognize the flow pattern under different pressure and flow conditions. Firstly, the complex gas–liquid distribution and its velocity field in the annulus were investigated using a computational fluid dynamics (CFD) simulation, and the gas–liquid distribution and velocity vectors in the annulus were obtained to clarify the complexity of the flow patterns in the annulus. Subsequently, a sequence model containing three convolutional layers and two fully connected layers was developed, which employed a CNN architecture, and the model was compiled using the Adam optimizer and sparse categorical cross-entropy as the loss function. A total of 450 images of different flow patterns were utilized for training, and the trained model recognized slug and annular flows with probabilities of 0.93 and 0.99, respectively, confirming the high accuracy of the model in recognizing annulus flow patterns, and providing an effective method for flow pattern recognition. Full article
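The loss named in the abstract, sparse categorical cross-entropy, takes integer class labels (e.g. slug vs. annular flow) together with softmax probabilities. A minimal stand-alone sketch of that computation, not the authors' implementation:

```python
import math

def sparse_categorical_crossentropy(y_true, y_prob):
    """Mean negative log-likelihood of the true class, given integer labels.

    y_true: list of integer class indices (e.g. 0 = slug flow, 1 = annular flow).
    y_prob: list of softmax probability vectors, one per sample.
    """
    eps = 1e-12  # guard against log(0)
    losses = [-math.log(max(p[t], eps)) for t, p in zip(y_true, y_prob)]
    return sum(losses) / len(losses)

# Model assigns 0.93 to slug and 0.99 to annular, as reported in the abstract:
loss = sparse_categorical_crossentropy([0, 1], [[0.93, 0.07], [0.01, 0.99]])
print(round(loss, 4))  # ≈ 0.0413
```

Because labels enter as integers rather than one-hot vectors, only the probability of the true class appears in each per-sample term.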
(This article belongs to the Special Issue Recent Advances in Hydrocarbon Production Processes from Geoenergy)
Figure 1. Typical two-phase flow of annular gas–liquid under operating conditions. (a) Wellbore gas–liquid two-phase flow for dual-gradient drilling [11]; (b) drainage of liquid–gas wells for gas recovery.
Figure 2. Schematic diagram of flow pattern changes in a vertical annulus pipe.
Figure 3. Flowchart of model operation.
Figure 4. Gas–liquid phase distribution in 45° inclined pipe with different cross sections.
Figure 5. Gas–liquid distribution pattern of the slug unit in the annulus.
Figure 6. Streamlines and velocity vectors at 45° inclination angle.
Figure 7. Uncertainty analysis process.
Figure 8. Photograph of typical flow pattern in the annulus (the red line is the gas–liquid interface).
21 pages, 2496 KiB  
Review
Transportation Mode Detection Using Learning Methods and Self-Contained Sensors: Review
by Ilhem Gharbi, Fadoua Taia-Alaoui, Hassen Fourati, Nicolas Vuillerme and Zebo Zhou
Sensors 2024, 24(22), 7369; https://doi.org/10.3390/s24227369 - 19 Nov 2024
Viewed by 363
Abstract
Due to increasing traffic congestion, travel modeling has gained importance in the development of transportation mode detection (TMD) strategies over the past decade. Nowadays, recent smartphones, equipped with integrated inertial measurement units (IMUs) and embedded algorithms, can play a crucial role in such [...] Read more.
Due to increasing traffic congestion, travel modeling has gained importance in the development of transportation mode detection (TMD) strategies over the past decade. Nowadays, recent smartphones, equipped with integrated inertial measurement units (IMUs) and embedded algorithms, can play a crucial role in such development. In particular, obtaining much more information on the transportation modes used by users through smartphones is very challenging due to the variety of the data (accelerometers, magnetometers, gyroscopes, proximity sensors, etc.), the standardization issue of datasets and the pertinence of learning methods for that purpose. Reviewing the latest progress on TMD systems is important to inform readers about recent datasets used in detection, best practices for classification issues and the remaining challenges that still impact the detection performances. Existing TMD review papers until now offer overviews of applications and algorithms without tackling the specific issues faced with real-world data collection and classification. Compared to these works, the proposed review provides some novelties such as an in-depth analysis of the current state-of-the-art techniques in TMD systems, relying on recent references and focusing particularly on the major existing problems, and an evaluation of existing methodologies for detecting travel modes using smartphone IMUs (including dataset structures, sensor data types, feature extraction, etc.). This review paper can help researchers to focus their efforts on the main problems and challenges identified. Full article
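The segmentation and feature-extraction step common to the TMD pipelines surveyed in this review can be sketched as follows. The window length, step, and the mean/standard-deviation features are illustrative assumptions, not taken from any specific surveyed paper:

```python
import math

def window_features(signal, win, step):
    """Segment a 1-D accelerometer signal into overlapping windows and
    extract simple time-domain features (mean, population std) per window."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        seg = signal[start:start + win]
        mean = sum(seg) / win
        std = math.sqrt(sum((x - mean) ** 2 for x in seg) / win)
        feats.append((mean, std))
    return feats

# Resultant acceleration samples (toy values), 4-sample windows, 50% overlap:
sig = [9.8, 9.9, 10.1, 9.7, 9.8, 12.0, 12.2, 11.9]
f = window_features(sig, win=4, step=2)
print(len(f))  # 3 windows
```

Each window's feature tuple then becomes one row of the feature space that the classifier in the pipeline consumes.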
Figure 1. Processing pipeline for predicting the transportation modes.
Figure 2. Transforming time series (raw sensor data) into feature space through the segmentation (window partitioning in red) and computation of features (feature extraction (FE)) [35].
Figure 3. Resultant acceleration in Tram [31].
Figure 4. Resultant acceleration in Walk [31].
Figure 5. Resultant acceleration in Car [31].
Figure 6. Resultant acceleration in Motorcycle [31].
Figure 7. Sensor placement for the perscido dataset [23].
Figure 8. Sensor placement for the SHL dataset [27].
Figure 9. Android applications: (a) Phyphox, (b) Physics toolbox suite and (c) Sensorlogger.
20 pages, 3221 KiB  
Article
A VIKOR-Based Sequential Three-Way Classification Ranking Method
by Wentao Xu, Jin Qian, Yueyang Wu, Shaowei Yan, Yongting Ni and Guangjin Yang
Algorithms 2024, 17(11), 530; https://doi.org/10.3390/a17110530 - 19 Nov 2024
Viewed by 291
Abstract
VIKOR uses the idea of overall utility maximization and individual regret minimization to afford a compromise result for multi-attribute decision-making problems with conflicting attributes. Many researchers have proposed corresponding improvements and expansions to make it more suitable for sorting optimization in their respective [...] Read more.
VIKOR uses the idea of overall utility maximization and individual regret minimization to afford a compromise result for multi-attribute decision-making problems with conflicting attributes. Many researchers have proposed corresponding improvements and expansions to make it more suitable for sorting optimization in their respective research fields. However, these improvements and extensions only rank the alternatives without classifying them. For this purpose, this text introduces the three-way sequential decisions method and combines it with the VIKOR method to design a three-way VIKOR method that can deal with both ranking and classification. By using the final negative ideal solution (NIS) and the final positive ideal solution (PIS) for all alternatives, the individual regret value and group utility value of each alternative were calculated. Different three-way VIKOR models were obtained by four different combinations of individual regret value and group utility value. In the ranking process, the characteristics of VIKOR method are introduced, and the subjective preference of decision makers is considered by using individual regret, group utility, and decision index values. In the classification process, the corresponding alternatives are divided into the corresponding decision domains by sequential three-way decisions, and the risk of direct acceptance or rejection is avoided by putting the uncertain alternatives into the boundary region to delay the decision. The alternative is divided into decision domains through sequential three-way decisions, sorted according to the collation rules in the same decision domain, and the final sorting results are obtained according to the collation rules in different decision domains. Finally, the effectiveness and correctness of the proposed method are verified by a project investment example, and the results are compared and evaluated. 
The experimental results show that the proposed method has a significant correlation with the results of other methods, and is effective and feasible, and is simpler and more effective in dealing with some problems. Errors caused by misclassification are reduced by sequential three-way decisions. Full article
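The classical VIKOR quantities underlying this method, group utility S, individual regret R, and decision index Q with mechanism coefficient v, can be computed as below. This is a sketch of standard VIKOR restricted to benefit-type criteria; the paper's sequential three-way classification extension is not reproduced here:

```python
def vikor(matrix, weights, v=0.5):
    """Classical VIKOR: returns (S, R, Q) per alternative.

    matrix[i][j]: score of alternative i on benefit criterion j.
    S = group utility, R = individual regret,
    Q = v*(S-S*)/(S--S*) + (1-v)*(R-R*)/(R--R*); lower Q ranks better.
    """
    n_crit = len(weights)
    best = [max(row[j] for row in matrix) for j in range(n_crit)]   # PIS f*
    worst = [min(row[j] for row in matrix) for j in range(n_crit)]  # NIS f-
    S, R = [], []
    for row in matrix:
        d = [weights[j] * (best[j] - row[j]) / (best[j] - worst[j])
             for j in range(n_crit)]
        S.append(sum(d))   # group utility
        R.append(max(d))   # individual regret
    s_star, s_minus = min(S), max(S)
    r_star, r_minus = min(R), max(R)
    Q = [v * (S[i] - s_star) / (s_minus - s_star)
         + (1 - v) * (R[i] - r_star) / (r_minus - r_star)
         for i in range(len(matrix))]
    return S, R, Q

S, R, Q = vikor([[7, 8, 9], [9, 6, 8], [8, 9, 6]], [0.4, 0.3, 0.3])
print(min(range(3), key=Q.__getitem__))  # index of the best-ranked alternative
```

Varying v between 0 (regret-dominated) and 1 (utility-dominated) reproduces the trade-off the paper analyzes when studying the influence of the decision mechanism coefficient on Q.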
Figure 1. Connections between decision regions; → denotes ≻.
Figure 2. Ranking rules for different decision regions.
Figure 3. Visualization of Example 5.
Figure 4. Visualization of comparison results of Example 5 with those of other methods.
Figure 5. Visualization of Example 6.
Figure 6. Visualization of comparison results with other methods of Example 6.
Figure 7. Influence of decision mechanism coefficient v on decision index value Q.
Figure 8. Heatmap of Spearman correlation coefficient.
28 pages, 7695 KiB  
Article
MAPM:PolSAR Image Classification with Masked Autoencoder Based on Position Prediction and Memory Tokens
by Jianlong Wang, Yingying Li, Dou Quan, Beibei Hou, Zhensong Wang, Haifeng Sima and Junding Sun
Remote Sens. 2024, 16(22), 4280; https://doi.org/10.3390/rs16224280 - 17 Nov 2024
Viewed by 523
Abstract
Deep learning methods have shown significant advantages in polarimetric synthetic aperture radar (PolSAR) image classification. However, their performances rely on a large number of labeled data. To alleviate this problem, this paper proposes a PolSAR image classification method with a Masked Autoencoder based [...] Read more.
Deep learning methods have shown significant advantages in polarimetric synthetic aperture radar (PolSAR) image classification. However, their performances rely on a large number of labeled data. To alleviate this problem, this paper proposes a PolSAR image classification method with a Masked Autoencoder based on Position prediction and Memory tokens (MAPM). First, MAPM designs a Masked Autoencoder (MAE) based on the transformer for pre-training, which can boost feature learning and improve classification results based on the number of labeled samples. Secondly, since the transformer is relatively insensitive to the order of the input tokens, a position prediction strategy is introduced in the encoder part of the MAE. It can effectively capture subtle differences and discriminate complex, blurry boundaries in PolSAR images. In the fine-tuning stage, the addition of learnable memory tokens can improve classification performance. In addition, L1 loss is used for MAE optimization to enhance the robustness of the model to outliers in PolSAR data. Experimental results show the effectiveness and advantages of the proposed MAPM in PolSAR image classification. Specifically, MAPM achieves performance gains of about 1% in classification accuracy compared with existing methods. Full article
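Two ingredients named in the abstract, random patch masking for the MAE and the L1 reconstruction loss used for robustness to outliers, can be sketched in isolation. The patch count and masking ratio below are illustrative, not the paper's settings:

```python
import random

def random_masking(tokens, mask_ratio, seed=0):
    """Split patch tokens into visible and masked subsets, as in an MAE.
    Returns (visible_idx, masked_idx); only visible tokens reach the encoder,
    and the decoder reconstructs the masked ones."""
    rng = random.Random(seed)
    idx = list(range(len(tokens)))
    rng.shuffle(idx)
    n_keep = int(len(tokens) * (1 - mask_ratio))
    return sorted(idx[:n_keep]), sorted(idx[n_keep:])

def l1_loss(pred, target):
    """Mean absolute error, used here instead of MSE for robustness
    to outliers in PolSAR data."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

tokens = list(range(16))                 # 16 patch tokens (hypothetical)
visible, masked = random_masking(tokens, mask_ratio=0.75)
print(len(visible), len(masked))         # 4 12
print(l1_loss([0.5, -0.5], [0.0, 0.0]))  # 0.5
```

The position-prediction pretext task and the learnable memory tokens of MAPM sit on top of this basic mask-and-reconstruct loop.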
Figure 1. The scheme of the proposed PolSAR image classification method.
Figure 2. Structural diagram of the MAE.
Figure 3. Structural diagram of MP3.
Figure 4. Structural diagram of MAPP.
Figure 5. The transformer model with memory.
Figure 6. AIRSAR Flevoland dataset and color code. (a) Pauli-RGB map. (b) Ground-truth map. (c) Legend.
Figure 7. RADARSAT-2 San Francisco Bay dataset and color code. (a) Pauli-RGB map. (b) Ground-truth map. (c) Legend.
Figure 8. ESAR Oberpfaffenhofen dataset and color code. (a) Pauli-RGB map. (b) Ground-truth map. (c) Legend.
Figure 9. RADARSAT-2 Netherlands dataset and color code. (a) Pauli-RGB map. (b) Ground-truth map. (c) Legend.
Figure 10. Predicted images for the ground truth of the AIRSAR Flevoland dataset. (a) Result of AE; (b) result of CAE; (c) result of VAE; (d) result of CCT; (e) result of MCPT; (f) result of MAPM.
Figure 11. Predicted images of the AIRSAR Flevoland dataset. (a) Result of AE; (b) result of CAE; (c) result of VAE; (d) result of CCT; (e) result of MCPT; (f) result of MAPM.
Figure 12. Predicted images of the RADARSAT-2 San Francisco Bay dataset. (a) Result of AE; (b) result of CAE; (c) result of VAE; (d) result of CCT; (e) result of MCPT; (f) result of MAPM.
Figure 13. Predicted images of the ESAR Oberpfaffenhofen dataset. (a) Result of AE; (b) result of CAE; (c) result of VAE; (d) result of CCT; (e) result of MCPT; (f) result of MAPM.
Figure 14. Predicted images of the RADARSAT-2 Netherlands dataset. (a) Result of AE; (b) result of CAE; (c) result of VAE; (d) result of CCT; (e) result of MCPT; (f) result of MAPM.
Figure 15. Impact of the amount of training data on OA during the fine-tuning phase.
Figure 16. Impact of masking ratio on OA during the fine-tuning phase.
Figure 17. Comparison results of L1 loss and MSE loss.
Figure 18. Training time with memory tokens and baseline model.
Figure 19. Result of model generalization performance study. (a) Pauli-RGB image of the AIRSAR San Francisco dataset. (b) Predicted image of the AIRSAR San Francisco dataset using the model trained on the RADARSAT-2 San Francisco dataset. (c) Legend of the RADARSAT-2 San Francisco dataset.
16 pages, 4235 KiB  
Article
Mobile Accelerometer Applications in Core Muscle Rehabilitation and Pre-Operative Assessment
by Aleš Procházka, Daniel Martynek, Marie Vitujová, Daniela Janáková, Hana Charvátová and Oldřich Vyšata
Sensors 2024, 24(22), 7330; https://doi.org/10.3390/s24227330 - 16 Nov 2024
Viewed by 546
Abstract
Individual physiotherapy is crucial in treating patients with various pain and health issues, and significantly impacts abdominal surgical outcomes and further medical problems. Recent technological and artificial intelligence advancements have equipped healthcare professionals with innovative tools, such as sensor systems and telemedicine equipment, [...] Read more.
Individual physiotherapy is crucial in treating patients with various pain and health issues, and significantly impacts abdominal surgical outcomes and further medical problems. Recent technological and artificial intelligence advancements have equipped healthcare professionals with innovative tools, such as sensor systems and telemedicine equipment, offering groundbreaking opportunities to monitor and analyze patients’ physical activity. This paper investigates the potential applications of mobile accelerometers in evaluating the symmetry of specific rehabilitation exercises using a dataset of 1280 tests on 16 individuals in the age range between 8 and 75 years. A comprehensive computational methodology is introduced, incorporating traditional digital signal processing, feature extraction in both time and transform domains, and advanced classification techniques. The study employs a range of machine learning methods, including support vector machines, Bayesian analysis, and neural networks, to evaluate the balance of various physical activities. The proposed approach achieved a high classification accuracy of 90.6% in distinguishing between left- and right-side motion patterns by employing features from both the time and frequency domains using a two-layer neural network. These findings demonstrate promising applications of precise monitoring of rehabilitation exercises to increase the probability of successful surgical recovery, highlighting the potential to significantly enhance patient care and treatment outcomes. Full article
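The listing does not define the coefficient of symmetry used above, so the following left/right comparison of accelerometric features is a hypothetical sketch, included only to make the idea concrete:

```python
def symmetry_coefficient(left_feats, right_feats):
    """Hypothetical left/right symmetry score in [0, 1]:
    1 = perfectly symmetric exercise, lower values = more asymmetric.
    Compares per-feature magnitudes from the two body-side accelerometers."""
    ratios = [1.0 - abs(l - r) / max(abs(l), abs(r), 1e-12)
              for l, r in zip(left_feats, right_feats)]
    return sum(ratios) / len(ratios)

# Feature vectors (e.g. per-axis signal energy) from left/right sensors:
print(symmetry_coefficient([2.0, 3.0], [2.0, 3.0]))  # 1.0 (symmetric)
print(symmetry_coefficient([2.0, 3.0], [1.0, 3.0]))  # 0.75 (asymmetric)
```

In the paper's setting, such a score per exercise would feed the SVM, Bayesian, or neural-network classifiers that separate left- and right-side motion patterns.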
(This article belongs to the Special Issue Robust Motion Recognition Based on Sensor Technology)
Graphical abstract
Figure 1. Principle of data processing during rehabilitation exercises presenting (a) mobile Matlab initialization, (b) data acquisition using accelerometric sensors inside the smartphone, (c) export of recorded signals to the remote drive, and (d) processing of data on the remote drive in time and frequency domains to extract motion features and evaluate the coefficient of symmetry.
Figure 2. Selected rehabilitation exercises used for accelerometric data acquisition recorded by wearable sensors (red squares) located on the left and right sides of the body used for data acquisition and processing in the computational and visualization environment of the mobile Matlab system.
Figure 3. Principle of data processing during rehabilitation exercises presenting (a) animation of motion exercises to train individuals and data acquisition using a smartphone, (b) data import into the proposed web-page, (c) frequency domain remote signal processing including symmetry coefficient estimation, and (d) extraction and analysis of motion features.
Figure 4. Symmetry criteria for 8 rehabilitation exercises evaluated by (a) time domain and (b) mixed-domain features presenting mean values by 16 tests of different individuals with 10 repetitions of each rehabilitation exercise.
Figure 5. Comparison of symmetry criteria for 16 tests involving different individuals and eight rehabilitation exercises, evaluated using time domain and spectral domain features.
Figure 6. Comparison of distribution of the time and spectral domain features for selected exercises of (a) prevailing asymmetric motion (individual 6, exercise 6) and (b) prevailing symmetric motion (individual 10, exercise 5) with centers of the right and left side positions and c multiples of standard deviations for c = 0.2, 0.5, 1.
Figure 7. Classification of symmetry features of the body cross-motion by mixed features using (a) support vector machine, (b) the Bayes method, and (c) the two-layer neural network for a selected individual 6-DH.
23 pages, 109644 KiB  
Article
Discrete Cosine Transform-Based Joint Spectral–Spatial Information Compression and Band-Correlation Calculation for Hyperspectral Feature Extraction
by Ziqi Zhao, Changbao Yang, Zhongjun Qiu and Qiong Wu
Remote Sens. 2024, 16(22), 4270; https://doi.org/10.3390/rs16224270 - 16 Nov 2024
Viewed by 313
Abstract
Prediction tasks over pixels in hyperspectral images (HSI) require careful effort to engineer the features used for learning a classifier. However, the generated classification map may suffer from an over-smoothing problem, which is manifested in significant differences from the original image in terms [...] Read more.
Prediction tasks over pixels in hyperspectral images (HSI) require careful effort to engineer the features used for learning a classifier. However, the generated classification map may suffer from an over-smoothing problem, which is manifested in significant differences from the original image in terms of object boundaries and details. To address this over-smoothing problem, we designed a method for extracting spectral–spatial-band-correlation (SSBC) features. In SSBC features, joint spectral–spatial feature extraction is considered a discrete cosine transform-based information compression, where a flattening operation is used to avoid the high computational cost induced by the requirement of distillation from 3D images for joint spectral–spatial information. However, this process can yield extracted features with lost spectral information. We argue that increasing the spectral information in the extracted features is the key to addressing the over-smoothing problem in the classification map. Consequently, the normalized difference vegetation index and iron oxide are improved for HSI data in extracting band-correlation features as added spectral information because their calculations, involving two spectral bands, are not appropriate for the abundant spectral bands of HSI. Experimental results on four real HSI datasets show that the proposed features can significantly mitigate the over-smoothing problem, and the classification performance is comparable to that of state-of-the-art deep features. Full article
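The two building blocks named in the abstract, discrete cosine transform-based information compression of a flattened spectral-spatial vector and an NDVI-style two-band normalized difference, can be sketched as follows. The number of retained coefficients r and the choice of band pair are assumptions for illustration:

```python
import math

def dct2(x):
    """Orthonormal 1-D DCT-II of a flattened spectral-spatial vector."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

def compress(x, r):
    """Keep only the first r low-frequency DCT coefficients as features,
    i.e. lossy information compression of the flattened vector."""
    return dct2(x)[:r]

def band_correlation(band_a, band_b):
    """NDVI-style normalized difference of two band (or band-group) responses."""
    return (band_a - band_b) / (band_a + band_b)

x = [1.0, 2.0, 3.0, 4.0]                      # toy flattened spectral-spatial vector
print(len(compress(x, 2)))                    # 2 features retained
print(round(band_correlation(0.6, 0.3), 3))   # 0.333
```

Concatenating the truncated DCT features with such band-correlation terms is the kind of spectral-information supplement the paper argues against over-smoothing; the exact band grouping used in SSBC is not reproduced here.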
(This article belongs to the Section Remote Sensing Image Processing)
Figure 1. Flowchart of the proposed method.
Figure 2. Parameter selection for the local window w and feature selection r.
Figure 3. Parameter selection for the type of the input matrix and the number of groups G.
Figure 4. Indian Pines dataset. (a) False-color image. (b) Ground truth. (c) Joint spectral–spatial (JSS) features. (d) Spectral–spatial-band-correlation (SSBC) features. (e) Legend.
Figure 5. KSC dataset. (a) False-color image. (b) Ground truth. (c) Joint spectral–spatial (JSS) features. (d) Spectral–spatial-band-correlation (SSBC) features. (e) Legend.
Figure 6. Houston dataset. (a) False-color image. (b) Ground truth. (c) Joint spectral–spatial (JSS) features. (d) Spectral–spatial-band-correlation (SSBC) features. (e) Legend.
Figure 7. Loukia dataset. (a) False-color image. (b) Ground truth. (c) Joint spectral–spatial (JSS) features. (d) Spectral–spatial-band-correlation (SSBC) features. (e) Legend.
Figure 8. Indian Pines dataset. (a) SSAN. (b) SSAtt. (c) RSSAN. (d) A²S²K-ResNet. (e) SSSAN. (f) SSTN. (g) CVSSN. (h) SSBC. (i) Legend.
Figure 9. KSC dataset. (a) SSAN. (b) SSAtt. (c) RSSAN. (d) A²S²K-ResNet. (e) SSSAN. (f) SSTN. (g) CVSSN. (h) SSBC. (i) Legend.
Figure 10. Houston dataset. (a) SSAN. (b) SSAtt. (c) RSSAN. (d) A²S²K-ResNet. (e) SSSAN. (f) SSTN. (g) CVSSN. (h) SSBC. (i) Legend.
Figure 11. Loukia dataset. (a) SSAN. (b) SSAtt. (c) RSSAN. (d) A²S²K-ResNet. (e) SSSAN. (f) SSTN. (g) CVSSN. (h) SSBC. (i) Legend.
Figure 12. Summary plot of the Indian Pines test set.
Figure 13. Visualization of some proposed features on the Indian Pines dataset. (a) JSS-13. (b) JSS-84. (c) JSS-164. (d) JSS-127. (e) BC-644. (f) BC-279. (g) BC-650. (h) BC-632.