Search Results (128)

Search Parameters:
Keywords = inter-subject training

17 pages, 1473 KiB  
Case Report
Tact Training with Augmentative Gestural Support for Language Disorder and Challenging Behaviors: A Case Study in an Italian Community-Based Setting
by Laura Turriziani, Rosa Vartellini, Maria Grazia Barcello, Marcella Di Cara and Francesca Cucinotta
J. Clin. Med. 2024, 13(22), 6790; https://doi.org/10.3390/jcm13226790 - 11 Nov 2024
Viewed by 481
Abstract
Background: Gestures or manual signing are valid options for augmentative and alternative communication. However, the data in the literature are limited to a few neurodevelopmental disorders, and less is known about its application in the community setting. Objectives: This case report explores the feasibility and preliminary efficacy of tact training with augmentative gestural support intervention for a child affected by a language disorder with challenging behaviors in a community setting. Methods: Baseline assessments were conducted using the Verbal Behavior Milestone Assessment and Placement Program (VB-MAPP) and Griffiths Mental Developmental Scale-III (GMDS-III). The patient received six months of standard treatment, consisting of neuropsychomotor and speech therapy each twice a week, with improved cooperation in proposed activities but no improvement in language. Afterward, a total of 24 sessions of tact training with augmentative gestural support interventions were performed. Data were collected by two independent observers and analyzed to measure language and behavioral outcomes. Results: VB-MAPP scores increased from minimal communication and social interaction at T0 (baseline) to improved compliance but unchanged language skills at T1 (after standard therapy). After tact training with augmentative gestural support (T2), VB-MAPP scores showed significant improvements, with notable increases in verbal operants, independence in communication, and intersubjectivity skills. GMDS-III scores at T2 also demonstrated growth in social, communicative, and cognitive skills. Additionally, challenging behaviors were reduced by more than 70% and nearly resolved by the end of the intervention. Conclusions: Personalized approaches appear to be essential for interventions tailored to developmental age.
Further research is needed to determine the effectiveness of these approaches for other neurodevelopmental disorders, identify patient characteristics that may be predictors of outcomes to tailor the intervention, and explore the generalization of the results obtained with these strategies. Full article
(This article belongs to the Special Issue Diagnosis, Treatment, and Prognosis of Neuropsychiatric Disorders)
Show Figures
Figure 1: Barriers assessment. VB-MAPP at baseline (T0), after six months (T1), and after twelve months (T2). The barriers assessment determines the presence of 24 learning and language acquisition barriers frequently faced by children with autism or developmental delays.
Figure 2: VB-MAPP at T0, T1, and T2. This figure illustrates the progression of the child's performance based on the VB-MAPP assessment over the three key time points.
Figure 3: Griffiths Mental Developmental Scale-III at T1 and T2. In the GMDS-III assessment, the child obtains an age equivalent (AE) in months for each skill. In the bar graph, the AE values for each subscale obtained at six months and after twelve months are placed side by side.
Figure 4: Challenging behaviors measured at baseline (T0), after six months (T1), and after twelve months (T2). Duration was measured because the behaviors were persistent and their frequency could not be measured.
18 pages, 1469 KiB  
Article
A Multi-Scale CNN for Transfer Learning in sEMG-Based Hand Gesture Recognition for Prosthetic Devices
by Riccardo Fratti, Niccolò Marini, Manfredo Atzori, Henning Müller, Cesare Tiengo and Franco Bassetto
Sensors 2024, 24(22), 7147; https://doi.org/10.3390/s24227147 - 7 Nov 2024
Viewed by 628
Abstract
Advancements in neural network approaches have enhanced the effectiveness of surface Electromyography (sEMG)-based hand gesture recognition when measuring muscle activity. However, current deep learning architectures struggle to achieve good generalization and robustness, often demanding significant computational resources. The goal of this paper was to develop a robust model that can quickly adapt to new users using Transfer Learning. We propose a Multi-Scale Convolutional Neural Network (MSCNN), pre-trained with various strategies to improve inter-subject generalization. These strategies include domain adaptation with a gradient-reversal layer and self-supervision using triplet margin loss. We evaluated these approaches on several benchmark datasets, specifically the NinaPro databases. This study also compared two different Transfer Learning frameworks designed for user-dependent fine-tuning. The second Transfer Learning framework achieved a 97% F1 Score across 14 classes with an average of 1.40 epochs, suggesting potential for on-site model retraining in cases of performance degradation over time. The findings highlight the effectiveness of Transfer Learning in creating adaptive, user-specific models for sEMG-based prosthetic hands. Moreover, the study examined the impacts of rectification and window length, with a focus on real-time accessible normalizing techniques, suggesting significant improvements in usability and performance. Full article
(This article belongs to the Special Issue Wearable Sensors for Human Health Monitoring and Analysis)
Show Figures
Figure 1: Selected hand gestures from Activities of Daily Living (ADL).
Figure 2: Architecture of the Multi-Scale CNN model.
Figure 3: Diagram illustrating the architecture used during pre-training with triplet loss and gradient reversal. The two black dotted lines indicate the portions of the model retrained under two different configurations: last and middle.
Figure 4: Performance averaged across 12 healthy patients and three amputees for the Transfer Learning last with varying numbers of repetitions. Where X = 0 (no retraining), this reflects the average results of testing over new, unseen subjects.
Figure 5: Comparison of the TL last for the three amputees. Model backbone pre-trained with gradient reversal and the inter-subject splitting technique.
Figure 6: Performance averaged across 12 healthy patients and three amputees for the Transfer Learning middle with varying numbers of repetitions.
Figure 7: Comparison of the TL middle for the three amputees. The model backbone was pre-trained with gradient reversal and inter-subject splitting.
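The core multi-scale idea described above, running the same sEMG window through parallel convolution branches with different receptive fields and concatenating the pooled branch outputs, can be sketched in NumPy. The kernel sizes, the moving-average filters, and the max-pooling are illustrative assumptions, not the authors' trained architecture:

```python
import numpy as np

def multi_scale_features(window, kernel_sizes=(3, 7, 15)):
    """Convolve one sEMG channel with filters of several widths and
    pool each branch, mimicking the parallel branches of a multi-scale
    CNN. Kernel sizes and filters are illustrative, not the paper's."""
    feats = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k               # placeholder for a learned filter
        response = np.convolve(window, kernel, mode="valid")
        feats.append(np.abs(response).max())  # max-pool the branch output
    return np.array(feats)

# A toy 200-sample sEMG-like window.
rng = np.random.default_rng(0)
emg = np.sin(np.linspace(0, 20, 200)) + 0.1 * rng.normal(size=200)
print(multi_scale_features(emg).shape)  # one feature per scale: (3,)
```

In a real network each branch would hold many learned filters and the concatenated features would feed the classification head that the transfer-learning stage fine-tunes.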
14 pages, 787 KiB  
Article
TMG Symmetry and Kinematic Analysis of the Impact of Different Plyometric Programs on Female Athletes’ Lower-Body Muscles
by Nikola Prvulović, Milena Žuža Praštalo, Ana Lilić, Saša Pantelić, Borko Katanić, Milan Čoh and Vesna Vučić
Symmetry 2024, 16(10), 1393; https://doi.org/10.3390/sym16101393 - 19 Oct 2024
Viewed by 702
Abstract
Asymmetries in sports are common and can lead to various issues; however, different training programs can facilitate change. This study aimed to assess the effects of opposing plyometric programs on tensiomyography lateral symmetry (TMG LS)/inter-limb asymmetry in female athletes’ lower-body muscles, alongside kinematic and body composition parameters. Twenty female subjects from basketball, volleyball, and track and field (sprinting disciplines) were divided into two experimental groups (n = 10 each). Two six-week plyometric programs (two sessions/week) were implemented: the first (E1) focused on eccentric exercises (depth landings), while the second (E2) emphasized concentric exercises (squat jumps). TMG assessed LS in six muscles: vastus lateralis, vastus medialis, biceps femoris, semitendinosus, gastrocnemius lateralis, and gastrocnemius medialis. Kinematic analysis of the countermovement jump (CMJ) and body composition assessment were conducted using Kinovea software (version 0.9.4) and InBody 770, respectively. The results showed significant increases in LS percentages (E1: VL 9.9%, BF 18.0%, GM 10.6%; E2: BF 22.5%; p < 0.05), with a significant large effect in E1 for VL and in E2 for BF (p < 0.01). E1 also led to increased lean muscle mass in both legs (left: 1.88%, right: 2.74%) and a decreased BMI (−0.4, p < 0.05). Both programs improved LS, with E1 additionally enhancing muscle mass and lower-body positioning in the CMJ. We recommend that future studies use varied jump tests, incorporate 3D kinematic analysis, include male subjects, and examine more muscles to enhance TMG LS analysis. Full article
Show Figures
Figure 1: Effects between plyometric programs on TMG lateral symmetry of six lower-body muscles. * statistically significant result with p < 0.05; ** statistically significant result with p < 0.01. Note: Ef Diff was calculated by subtracting the mean value for E1 from E2.
Figure 2: Effects between plyometric programs on CMJ kinematic parameters from sagittal and frontal views. * statistically significant result with p < 0.05. Note: Ef Diff was calculated by subtracting the mean value for E1 from E2.
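A lateral-symmetry percentage between limbs, as reported above, can be expressed with a generic inter-limb index: the weaker side's parameter value as a percentage of the stronger side's, so 100% means perfect symmetry. This is a common convention, not necessarily the exact formula implemented by the TMG software used in the study:

```python
def lateral_symmetry(left, right):
    """Percent symmetry between limbs for one TMG parameter:
    100.0 means identical left/right values. A generic index,
    not necessarily the study's exact TMG LS formula."""
    if max(left, right) == 0:
        return 100.0  # both sides zero: trivially symmetric
    return 100.0 * min(left, right) / max(left, right)

# Toy example: left contraction amplitude 24 mm vs right 30 mm.
print(lateral_symmetry(24.0, 30.0))  # 80.0
```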
13 pages, 2391 KiB  
Article
A Machine Learning Approach for Predicting Pedaling Force Profile in Cycling
by Reza Ahmadi, Shahram Rasoulian, Samira Fazeli Veisari, Atousa Parsaei, Hamidreza Heidary, Walter Herzog and Amin Komeili
Sensors 2024, 24(19), 6440; https://doi.org/10.3390/s24196440 - 4 Oct 2024
Viewed by 1050
Abstract
Accurate measurement of pedaling kinetics and kinematics is vital for optimizing rehabilitation, exercise training, and understanding musculoskeletal biomechanics. Pedal reaction force, the main external force in cycling, is essential for musculoskeletal modeling and closely correlates with lower-limb muscle activity and joint reaction forces. However, sensor instrumentation like 3-axis pedal force sensors is costly and requires extensive postprocessing. Recent advancements in machine learning (ML), particularly neural network (NN) models, provide promising solutions for kinetic analyses. In this study, an NN model was developed to predict radial and mediolateral forces, providing a low-cost solution to study pedaling biomechanics with stationary cycling ergometers. Fifteen healthy individuals performed a 2 min pedaling task at two different self-selected (58 ± 5 RPM) and higher (72 ± 7 RPM) cadences. Pedal forces were recorded using a 3-axis force system. The dataset included pedal force, crank angle, cadence, power, and participants’ weight and height. The NN model achieved an inter-subject normalized root mean square error (nRMSE) of 0.15 ± 0.02 and 0.26 ± 0.05 for radial and mediolateral forces at high cadence, respectively, and 0.20 ± 0.04 and 0.22 ± 0.04 at self-selected cadence. The NN model’s low computational time suits real-time pedal force predictions, matching the accuracy of previous ML algorithms for estimating ground reaction forces in gait. Full article
(This article belongs to the Special Issue Signal Processing and Machine Learning for Sensor Systems)
Show Figures
Figure 1: Flowchart for developing the ML model in the present study. After sensor calibration, data collection was conducted, and the recorded data were preprocessed for feature extraction. An NN was trained to predict radial and mediolateral forces.
Figure 2: The crank angle was measured from the horizontal position. The radial and tangential forces were measured along and perpendicular to the crank axis, respectively. The mediolateral force was perpendicular to the radial and tangential forces.
Figure 3: The NN structure consisted of five inputs: cycling power, cadence, crank angle, and subject weight and height. The outputs were the predicted radial and mediolateral forces.
Figure 4: Radial force prediction using the developed ML methods: intra-subject (a,b) and inter-subject (c,d) analyses. Radial forces were predicted at self-selected (a,c) and high (b,d) cadences from the cross-validation set. A schematic representation of the pedal position is shown next to the x-axis in subfigure (c). The lines represent the mean values, while the shaded areas indicate the standard deviations.
Figure 5: Mediolateral force prediction using the developed ML methods: intra-subject (a,b) and inter-subject (c,d) analyses. Mediolateral forces were predicted at self-selected (a,c) and high (b,d) cadences from the cross-validation set. A schematic representation of the pedal position is shown next to the x-axis in subfigure (c). The lines represent the mean values, while the shaded areas indicate the standard deviations.
Figure 6: The mean ± SD values for all force components (radial, mediolateral, and tangential) measured by the 3-axis pedal force sensors, highlighting the contribution of each component to the resultant pedal reaction force. The lines represent the mean values, while the shaded areas indicate the standard deviations.
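The normalized root mean square error (nRMSE) used above to score the force predictions can be sketched as RMSE divided by the range of the measured signal; this is one common definition, and the paper may normalize differently (e.g., by the mean or peak force):

```python
import numpy as np

def nrmse(measured, predicted):
    """RMSE normalized by the measured signal's range. One common
    definition of nRMSE; the paper's normalization may differ."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((measured - predicted) ** 2))
    return rmse / (measured.max() - measured.min())

# Constant offset of 1 N over a 2 N range gives nRMSE = 0.5.
print(nrmse([0.0, 1.0, 2.0], [1.0, 2.0, 3.0]))  # 0.5
```

A dimensionless error like this lets radial and mediolateral components, which have very different magnitudes, be compared on the same scale.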
16 pages, 3196 KiB  
Article
Transferable Deep Learning Models for Accurate Ankle Joint Moment Estimation during Gait Using Electromyography
by Amged Elsheikh Abdelgadir Ali, Dai Owaki and Mitsuhiro Hayashibe
Appl. Sci. 2024, 14(19), 8795; https://doi.org/10.3390/app14198795 - 30 Sep 2024
Viewed by 847
Abstract
The joint moment is a key measurement in locomotion analysis. Transferable prediction across different subjects is advantageous for calibration-free, practical clinical applications. However, even for similar gait motions, intersubject variance presents a significant challenge in maintaining reliable prediction performance. The optimal deep learning models for ankle moment prediction during dynamic gait motions remain underexplored for both intrasubject and intersubject usage. This study evaluates the feasibility of different deep learning models for estimating ankle moments using sEMG data to find an optimal intrasubject model against the inverse dynamics approach. We verified and compared the performance of 1302 intrasubject models per subject on 597 steps from seven subjects using various architectures and feature sets. The best-performing intrasubject models were recurrent convolutional neural networks trained using signal energy features. They were then transferred to realize intersubject ankle moment estimation. Full article
(This article belongs to the Special Issue Advances in Foot Biomechanics and Gait Analysis)
Show Figures
Figure 1: Anatomical (red) and tracking (white) markers.
Figure 2: Time-series dataset inputs/outputs illustration.
Figure 3: sEMG intrasubject model results. (a) Boxplots of the distribution of R² values for models trained on various combinations of autoregressive model (AR) coefficients and zero-crossing count (ZC) features. (b) Distribution of R² values for models trained on signal power features: root mean square (RMS), mean absolute value (MAV), and waveform length (WL). (c) Distribution of R² values for the tibialis anterior (TA), soleus (SOL), gastrocnemius medialis (GM), and peroneus brevis (PB) muscle groups when using signal power features. The outliers in (b) are primarily attributed to the TA+PB muscle group, which exhibited lower estimation performance than the other muscle groups.
Figure 4: Comparison of the estimated S1 (first volunteer) ankle moments from the inverse dynamics (ID) Equation (7) against intrasubject models (MLP, RCNN, LSTM) using the mean absolute value (MAV) and waveform length (WL) features extracted from tibialis anterior (TA), soleus (SOL), gastrocnemius medialis (GM), and peroneus brevis (PB) sEMG signals. The x-axis represents the normalized gait cycle, starting from the swing phase and ending with the subsequent swing phase. The y-axis shows the normalized ankle moment. Positive values indicate plantar flexion; negative values indicate dorsiflexion. The shaded areas represent the standard deviation of the estimations across subjects.
Figure 5: DEMG intrasubject model results. (a) Boxplots of the distribution of R² values for models trained on various combinations of AR coefficients and ZC features. (b) Distribution of R² values for models trained on signal power features (RMS, MAV, WL). (c) Distribution of R² values for the TA, SOL, GM, and PB muscle groups when using signal power features.
Figure 6: Comparison of the three intersubject models against their corresponding ankle moments from the inverse dynamics (ID) Equation (7). The x-axis represents the normalized gait cycle, starting from the swing phase and ending with the subsequent swing phase. The y-axis shows the normalized ankle moment. Positive values indicate plantar flexion; negative values indicate dorsiflexion. The shaded areas represent the standard deviation of the estimations across subjects. S6 and S7 refer to volunteers 6 and 7; TA, SOL, GM, and PB are as defined above.
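The signal power features named in the figures above (RMS, MAV, WL) are classic sEMG amplitude descriptors computed per analysis window; a minimal sketch:

```python
import numpy as np

def signal_power_features(window):
    """Classic sEMG amplitude features over one analysis window:
    root mean square (RMS), mean absolute value (MAV), and
    waveform length (WL, the summed absolute sample-to-sample change)."""
    x = np.asarray(window, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    mav = np.mean(np.abs(x))
    wl = np.sum(np.abs(np.diff(x)))
    return rms, mav, wl

print(signal_power_features([0.0, 1.0, -1.0, 0.0]))
```

In a moment-estimation pipeline these features would be computed per muscle channel over sliding windows and stacked into the model's input vector; window length and overlap are design choices the paper explores.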
18 pages, 763 KiB  
Article
Learning to Score: A Coding System for Constructed Response Items via Interactive Clustering
by Lingjing Luo, Hang Yang, Zhiwu Li and Witold Pedrycz
Systems 2024, 12(9), 380; https://doi.org/10.3390/systems12090380 - 21 Sep 2024
Viewed by 704
Abstract
Constructed response items, which require students to give more detailed and elaborate responses, are widely applied in large-scale assessments. However, hand scoring massive numbers of responses with a rubric is labor-intensive and impractical due to rater subjectivity and answer variability. Automatic response coding, such as the automatic scoring of short answers, has become a critical component of learning and assessment systems. In this paper, we propose an interactive coding system called ASSIST to efficiently score student responses with expert knowledge and then generate an automatic score classifier. First, the ungraded responses are clustered to generate specific codes, representative responses, and indicator words. A constraint set based on expert feedback is used as training data in metric learning to compensate for machine bias. Meanwhile, a classifier from responses to codes is trained according to the clustering results. Second, the experts review each coded cluster, with its representative responses and indicator words, to assign a score. The coded cluster and score pairs are validated to ensure inter-rater reliability. Finally, the classifier is available for scoring a new response with out-of-distribution detection, which is based on the similarity between the response representation and the class proxy, i.e., the class's weight vector in the last linear layer of the classifier. The originality of the system stems from the interactive response clustering procedure, which involves expert feedback, and from an adaptive automatic classifier that can identify new response classes. The proposed system is evaluated on our real-world assessment dataset. The results of the experiments demonstrate the effectiveness of the proposed system in saving human effort and improving scoring performance. The average improvements in clustering quality and scoring accuracy are 14.48% and 18.94%, respectively. Additionally, we report the inter-rater reliability, out-of-distribution rate, and cluster statistics before and after interaction. Full article
Show Figures
Figure 1: An example of coding for constructed response items. The example question and responses are translated from Mandarin.
Figure 2: Architecture of the ASSIST system.
Figure 3: Score distribution of items. Orange refers to score 0, blue to score 1, purple to score 2, grey to score 3, yellow to score 4, and green to score 5.
Figure 4: User interface of the clustering results preview.
Figure 5: User interface of automatic scoring.
Figure 6: Comparative results of four response clustering methods.
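The out-of-distribution step described above, comparing a response embedding against each class proxy (a row of the classifier's last linear layer) and flagging a new class when no proxy is similar enough, can be sketched with cosine similarity. The threshold value and the use of cosine similarity are illustrative assumptions:

```python
import numpy as np

def is_out_of_distribution(embedding, class_weights, threshold=0.5):
    """Flag a response as belonging to a new (out-of-distribution)
    class when its cosine similarity to every class proxy, taken here
    as the rows of the classifier's last linear layer, falls below a
    threshold. Threshold and similarity measure are assumptions."""
    e = embedding / np.linalg.norm(embedding)
    W = class_weights / np.linalg.norm(class_weights, axis=1, keepdims=True)
    return float(np.max(W @ e)) < threshold

# Two toy class proxies in a 2-D embedding space.
proxies = np.array([[1.0, 0.0], [0.0, 1.0]])
print(is_out_of_distribution(np.array([0.9, 0.1]), proxies))    # False: near class 0
print(is_out_of_distribution(np.array([-1.0, -1.0]), proxies))  # True: far from both
```

Responses flagged this way would be routed back into the interactive clustering loop so experts can code the new class.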
16 pages, 1511 KiB  
Article
“Sharing Worldviews: Learning in Encounter for Common Values in Diversity” in School and Teacher Education—Contexts in Germany and Europe
by Katja Boehme
Religions 2024, 15(9), 1077; https://doi.org/10.3390/rel15091077 - 5 Sep 2024
Viewed by 550
Abstract
Challenges and tensions that arise in a pluralistic society with differing worldviews among its citizens must be addressed from the outset in school education. To enable social cohesion within a heterogeneous society, students must learn to harmonize their own worldviews with other interpretations of the world in a spirit of “reciprocal inclusivity” (Reinhold Bernhardt). This article argues that this task particularly falls within the responsibility of subjects in schools that address the existential “problems of constitutive rationality” (Jürgen Baumert), specifically religious education, ethics, and philosophy. In Germany and Austria, multiple subjects within denominational religious education, as well as ethics and philosophy, are offered in schools. When these subjects collaborate on projects, students learn to engage in dialogue with the various religious and secular, individual, and collective interpretations, perspectives, and worldviews they encounter. Since 2002/03, and in teacher training since 2011, such a didactically guided Sharing Worldviews approach has been implemented in school projects in Southern Germany through a four-phase concept. This concept can be flexibly applied to the local conditions of the school, contributes to internationalisation and digitalisation, and does not require additional teaching hours. By incorporating secular worldviews, Sharing Worldviews goes beyond interreligious learning and has also been realised digitally in other European countries. The following article begins by considering the educational requirements in a heterogeneous society (1), describes the prerequisites needed to positively influence students’ attitudes (2), outlines common foundational concepts for interreligious and inter-worldview dialogue (3), and recommends “Mutual Hospitality” as the basis for such dialogue in schools (4). 
The article then explains how “Mutual Hospitality” can be practically implemented in a four-phase concept of Sharing Worldviews both in schools and in teacher training (5 and 6) by tracing the origins of this concept (7). The Sharing Worldviews concept has been both internationalised and digitalised in schools and teacher education (8), aligns with the educational principles of the OECD (9), and demonstrates significant benefits in empirical studies (10). Full article
(This article belongs to the Special Issue Shared Religious Education)
Show Figures
Figure 1: Sharing Worldviews in four phases, with the presentation phase and discussion phase as station work in mixed groups of students. Each subject prepares a station from its own worldview on the topic (cf. Boehme 2023, p. 381; www.sharing-worldviews.com/en/node/2, accessed on 29 August 2024).
Figure 2: Sharing Worldviews in four phases, with the presentation phase and discussion phase as station work in mixed groups of students. Each subject contributes its own view of a subtopic to each station (cf. Boehme 2023, p. 382; www.sharing-worldviews.com/en/node/2, accessed on 1 September 2024).
13 pages, 1563 KiB  
Article
How to Optimize the Experimental Protocol for Surface EMG Signal Measurements Using the InterCriteria Decision-Making Approach
by Maria Angelova, Silvija Angelova and Rositsa Raikova
Appl. Sci. 2024, 14(13), 5436; https://doi.org/10.3390/app14135436 - 22 Jun 2024
Cited by 1 | Viewed by 1050
Abstract
The InterCriteria decision-making approach, known as InterCriteria analysis (ICrA), was applied here to optimize the experimental protocol when the surface electromyography (EMG) signals of upper arm human muscles are recorded. Ten healthy subjects performed cycling movements in the sagittal plane with and without added weight for ten, six, two, and one second, respectively, for each active phase. The EMG signals from six muscles or parts of muscles, namely m. deltoideus pars clavicularis and pars spinata, m. brachialis, m. anconeus, m. biceps brachii, and m. triceps brachii caput longum, were recorded. ICrA was used on the obtained data to find correlations between the sixteen different phases, eight for elbow flexion and eight for elbow extension. Based on the obtained results, we proposed an optimized experimental protocol (OEP) that omits slower and more difficult tasks while preserving crucial data. The optimized protocol consists of seven tasks instead of ten and takes three minutes less than the full experimental protocol (FEP). The lower number of movements in the OEP could prevent physical and psychological fatigue, discomfort, or even pain in the investigated subjects. In addition, the time to train subjects, as well as the time to process the surface EMG data, can be significantly reduced. Full article
Show Figures
Figure 1: (a) Starting position; (b) position of maximal upper flexion with a 0.5 kg wristband.
Figure 2: A subject and raw EMG signals during TASK 5 performance of the FEP.
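The pairwise correlation at the heart of ICrA can be sketched as follows: for two criteria evaluated over the same objects (here, movement phases), count the object pairs both criteria rank the same way (degree of agreement, mu) and the pairs they rank in opposite ways (degree of disagreement, nu), with ties contributing to neither. This is a generic sketch of the counting scheme, not the authors' implementation:

```python
from itertools import combinations

def intercriteria_pair(c1, c2):
    """Degrees of agreement (mu) and disagreement (nu) between two
    criteria in InterCriteria Analysis. c1 and c2 are the two criteria's
    values over the same objects; ties count toward neither degree."""
    n_pairs = agree = disagree = 0
    for i, j in combinations(range(len(c1)), 2):
        n_pairs += 1
        d1 = c1[i] - c1[j]
        d2 = c2[i] - c2[j]
        if d1 * d2 > 0:      # both criteria order the pair the same way
            agree += 1
        elif d1 * d2 < 0:    # the criteria order the pair oppositely
            disagree += 1
    return agree / n_pairs, disagree / n_pairs

# Four objects; the second criterion swaps the last two rankings.
mu, nu = intercriteria_pair([1, 2, 3, 4], [1, 2, 4, 3])
print(mu, nu)  # 5/6 of pairs agree, 1/6 disagree
```

Criteria pairs with high mu carry largely redundant information, which is how ICrA identifies protocol tasks that can be dropped without losing crucial data.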
22 pages, 3394 KiB  
Article
Multi-View and Multimodal Graph Convolutional Neural Network for Autism Spectrum Disorder Diagnosis
by Tianming Song, Zhe Ren, Jian Zhang and Mingzhi Wang
Mathematics 2024, 12(11), 1648; https://doi.org/10.3390/math12111648 - 24 May 2024
Viewed by 1037
Abstract
Autism Spectrum Disorder (ASD) presents significant diagnostic challenges due to its complex, heterogeneous nature. This study explores a novel approach to enhance the accuracy and reliability of ASD diagnosis by integrating resting-state functional magnetic resonance imaging with demographic data (age, gender, and IQ). The approach builds on an improved spectral graph convolutional neural network (GCN) and introduces a multi-view attention fusion module to extract useful information from different views. The graph’s edges are informed by demographic data: an edge-building network computes edge weights from the demographic information, thereby strengthening inter-subject correlation. To tackle the oversmoothing and neighborhood-explosion problems inherent in deep GCNs, the study introduces DropEdge regularization and residual connections, augmenting feature diversity and model generalization. The proposed method is trained and evaluated on the ABIDE-I and ABIDE-II datasets. The experimental results underscore the potential of integrating multi-view and multimodal data to advance the diagnostic capabilities of GCNs for ASD. Full article
(This article belongs to the Special Issue Network Biology and Machine Learning in Bioinformatics)
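Of the regularizers mentioned in the abstract, DropEdge is the simplest to illustrate: at every training epoch a random fraction of graph edges is removed before message passing, which counteracts oversmoothing in deep GCNs. A minimal stdlib-only sketch (the edge list and drop rate are illustrative, not the paper's configuration):

```python
import random

def drop_edge(edges, drop_rate, seed=None):
    """DropEdge regularization: keep each (undirected) edge independently
    with probability 1 - drop_rate, yielding a sparser graph for the
    current training epoch."""
    rng = random.Random(seed)
    return [e for e in edges if rng.random() >= drop_rate]

# Toy population graph: nodes are subjects; in the paper, edge weights
# would come from an edge-building network fed with demographic data.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 0)]
sparse = drop_edge(edges, drop_rate=0.5, seed=42)
print(sparse)
```

Resampling a fresh edge subset each epoch is what gives the regularizing effect; the model never trains on the same neighborhood structure twice.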
Figure 1: The overall methodological framework.
Figure 2: Multi-view data fusion framework.
Figure 3: Edge-building network framework.
Figure 4: The improved spectral graph convolutional neural network framework.
Figure 5: Results of multi-view experiments.
Figure 6: ROC curves for different methods.
Figure 7: Results of comparative experiments on ABIDE-II.
Figure 8: Results of 2D feature visualization. (a) Original feature distribution; (b) post-classification feature distribution.
20 pages, 7925 KiB  
Article
Motion Correction for Brain MRI Using Deep Learning and a Novel Hybrid Loss Function
by Lei Zhang, Xiaoke Wang, Michael Rawson, Radu Balan, Edward H. Herskovits, Elias R. Melhem, Linda Chang, Ze Wang and Thomas Ernst
Algorithms 2024, 17(5), 215; https://doi.org/10.3390/a17050215 - 15 May 2024
Cited by 3 | Viewed by 1518
Abstract
Purpose: Motion-induced magnetic resonance imaging (MRI) artifacts can deteriorate image quality and reduce diagnostic accuracy, but motion by human subjects is inevitable and can even be caused by involuntary physiological movements. Deep-learning-based motion correction methods might provide a solution. However, most studies have been based on directly applying existing models, and the trained models are rarely accessible. We therefore aimed to develop and evaluate a deep-learning-based method (Motion Correction-Net, or MC-Net) for suppressing motion artifacts in brain MRI scans. Methods: A total of 57 subjects, providing 20,889 slices in four datasets, were used. Both 3T 3D sagittal magnetization-prepared rapid gradient-echo (MP-RAGE) and 2D axial fluid-attenuated inversion-recovery (FLAIR) sequences were acquired. The MC-Net was derived from a UNet combined with a two-stage multi-loss function. T1-weighted axial brain images contaminated with synthetic motion were used to train the network to remove motion artifacts. Evaluation used simulated T1- and T2-weighted axial, coronal, and sagittal images unseen during training, as well as T1-weighted images with motion artifacts from real scans. The performance indices included the peak signal-to-noise ratio (PSNR), the structural similarity index measure (SSIM), and visual reading scores from three blinded clinical readers. A one-sided Wilcoxon signed-rank test was used to compare reader scores, with p < 0.05 considered significant. Intraclass correlation coefficients (ICCs) were calculated for inter-rater evaluations. Results: The MC-Net outperformed other methods in terms of PSNR and SSIM on the T1 axial test set. The MC-Net significantly improved the quality of all T1-weighted images in all orientations (the mean SSIM of axial, sagittal, and coronal slices improved from 0.77, 0.64, and 0.71 to 0.92, 0.75, and 0.84; the mean PSNR improved from 26.35, 24.03, and 24.55 to 29.72, 24.40, and 25.37, respectively), for simulated as well as real motion artifacts, by both quantitative measures and visual scores. However, MC-Net performed poorly on T2-weighted images, whose contrast was unseen during training and differs from T1 contrast. Conclusion: The proposed two-stage multi-loss MC-Net can effectively suppress motion artifacts in brain MRI without compromising image quality. Given its efficiency (single-image processing time of ~40 ms), MC-Net can potentially be used in clinical settings. Full article
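PSNR, one of the two quantitative indices used above, has a simple closed form; a stdlib-only sketch on toy single-channel data (the pixel values are illustrative):

```python
import math

def psnr(reference, test, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between two equal-sized images,
    given as flat lists of intensities in [0, max_val]."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

clean = [0.0, 0.5, 1.0, 0.25]      # "motion-free" reference
corrected = [0.1, 0.4, 0.9, 0.35]  # network output
print(round(psnr(clean, corrected), 2))  # → 20.0
```

A higher PSNR after correction (as in the reported 26.35 → 29.72 dB for axial slices) means the corrected image is closer to the motion-free reference.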
Figure 1: Architecture of MC-Net, which was derived from UNet. The filter number in each convolutional layer of the customized UNet is half that of the original UNet [22]. An optional concatenation on top of the UNet structure is indicated by a dashed line.
Figure 2: The pipeline for generating motion-corrupted k-space data. Step 1 describes the synthesis of motion trajectories. Step 2 shows how motion-corrupted images are generated using motion trajectories and high-resolution images as input.
Figure 3: SSIM values of images corrected with MC-Net (blue dots) relative to those of uncorrupted images (red dots), plotted against the magnitude of simulated motion (standard deviation of motion across 256 time points, in mm/°). The red line (Y = 0.99 − 0.028X) and blue line (Y = 0.98 − 0.014X) show linear regressions of images without and with motion correction against the motion magnitude.
Figure 4: Examples of motion artifact removal from images in the test set of Dataset 1 using various algorithms. In (A) (1.90 mm/°), the first row contains the clean reference image, the corrupted image, and the motion correction results using the L1, L1 + TV, and MC-Net algorithms. The second row shows an enlarged view of the red rectangle. The SSIM and PSNR values for each corrected image (relative to the “clean” image) are also shown (bottom row). The third row shows the error maps between the reference (clean) image and the corrupted image and motion correction results; each pixel difference in the error maps was multiplied by a factor of five. (B) The motion trajectory for (A), where the horizontal axis labels refer to the y-position in k-space.
Figure 5: Average motion artifact scores for the test set of Dataset 1 from three blinded clinical readers (top to bottom: reader 1 (A), reader 2 (B), and reader 3 (C)) as a function of motion magnitude (x-axis). The left column shows visual scores for “clean” reference images (red lines), motion-corrupted images (blue lines), and the MC-Net predictions (green lines). The right column shows visual scores for the L1-only network (black lines) and the MC-Net (green lines; second reading). The x-axis represents the standard deviation of motion (in mm/°), and the y-axis shows average reading scores. Error bars represent standard errors of the means.
Figure 6: Examples of motion artifact removal from images in the test set of Dataset 1 using various algorithms. In (A) (6.04 mm/°) and (B) (4.67 mm/°), the first row of each subfigure contains the clean reference image, the corrupted image, and the motion correction results obtained using the L1, L1 + TV, and MC-Net algorithms. The second row of each subfigure zooms in on the red rectangle. The SSIM and PSNR values for each corrected image (relative to the “clean” image) are also shown (bottom row). The third row shows the error maps between the reference (clean) image and the corrupted image and motion correction results; each pixel difference in the error maps was multiplied by a factor of five. (C,D) show the motion trajectories for (A,B), where the horizontal axis labels refer to y-positions in k-space.
Figure 7: Results of cross-dataset generalization with motion-corrupted MP-RAGE images of sagittal (A,C) and coronal (B,D) orientations from Dataset 2. In each subfigure, the first row shows the motion-free image, the motion-corrupted image, and the image corrected by MC-Net. The second row shows a magnification of the region of interest (ROI) within the red rectangle. Yellow and white numbers represent the SSIM and PSNR relative to the motion-free image. The third row shows the error maps between the reference (clean) image and the corrupted and corrected images; each pixel difference in the error maps was multiplied by a factor of five.
Figure 8: Results for two T2-weighted (FLAIR) images from Dataset 3, obtained using simulated motion. The left set (A) was corrupted with relatively minor motion, and the right set (B) with more severe motion. Note the appearance of false anatomical “features” (yellow arrows). Within each set, the columns show the original image, the corrupted image, and the output from MC-Net (left to right). The second row shows a magnification of the region of interest (ROI) within the red rectangle. The third row shows the error maps between the reference (clean) image and the corrupted and corrected images; each pixel difference was multiplied by a factor of five.
Figure 9: Examples of images with real (non-simulated) motion artifacts from Dataset 4. From left to right (A–C), the severity of motion artifacts increases. Each set shows the original motion-corrupted image (left) and the output from MC-Net (right). The second row shows a magnification of the region of interest (ROI) within the red rectangle. The third row shows the error maps between the network input (corrupted image) and the MC-Net prediction; each pixel difference was multiplied by a factor of five.
25 pages, 20853 KiB  
Article
Optimising Plate Thickness in Interlocking Inter-Module Connections for Modular Steel Buildings: A Finite Element and Random Forest Approach
by Khaled Elsayed, Azrul A. Mutalib, Mohamed Elsayed and Mohd Reza Azmi
Buildings 2024, 14(5), 1254; https://doi.org/10.3390/buildings14051254 - 29 Apr 2024
Viewed by 1080
Abstract
Interlocking Inter-Module Connections (IMCs) in Modular Steel Buildings (MSBs) have garnered significant interest from researchers. Despite this, the optimisation of plate thicknesses in such structures has yet to be extensively explored in the existing literature. This paper therefore focuses on optimising the thickness of interlocking IMCs in MSBs by leveraging established experimental and numerical simulation methodologies. The study developed numerical models for IMCs with plate thicknesses of 4 mm, 6 mm, 10 mm, and 12 mm, all subjected to compression loading conditions. The novelty of this study lies in its comprehensive parametric analysis, which evaluates the slip prediction model. A random forest regression model, trained using the ‘TreeBagger’ function, was also implemented to predict slip values from the applied force. Sensitivity analysis and comparisons with alternative methods underscored the reliability and applicability of the findings. The results indicate that a plate thickness of 11.03 mm is optimal for interlocking IMCs in MSBs, achieving material cost reductions of up to 8.08% while increasing deformation resistance by up to 50.75%. The ‘TreeBagger’ random forest regression improved slip prediction accuracy by up to 7% at higher force levels. Full article
(This article belongs to the Section Building Structures)
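MATLAB's 'TreeBagger' implements bootstrap aggregation (bagging): many learners are fitted on bootstrap resamples of the force–slip data and their predictions are averaged. The stdlib-only sketch below illustrates the principle with 1-nearest-neighbour regressors standing in for decision trees; the force–slip pairs are invented for illustration, not the paper's data.

```python
import random
import statistics

def bagged_predict(x_train, y_train, x_query, n_estimators=50, seed=0):
    """Bootstrap aggregation ('bagging'): fit many weak learners on
    bootstrap resamples and average their predictions. Each learner here
    is a 1-nearest-neighbour regressor standing in for a decision tree."""
    rng = random.Random(seed)
    n = len(x_train)
    preds = []
    for _ in range(n_estimators):
        idx = [rng.randrange(n) for _ in range(n)]  # bootstrap resample
        # the resample's 1-NN prediction at the query force
        nearest = min(idx, key=lambda i: abs(x_train[i] - x_query))
        preds.append(y_train[nearest])
    return statistics.fmean(preds)

# Hypothetical force (kN) -> slip (mm) pairs, loosely monotone
force = [0, 50, 100, 150, 200, 250]
slip = [0.0, 0.2, 0.5, 1.1, 2.0, 3.4]
print(bagged_predict(force, slip, x_query=120))
```

Averaging over resamples smooths the prediction between training points, which is why the bagged model tracks the FE slip curve better than a single learner at higher force levels.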
Figure 1: Classification of inter-module connections (IMCs) [46].
Figure 2: Lacey et al. [20] inter-module connection (CTC).
Figure 3: Illustration of the Lacey et al. [20] specimen (A). Reprinted/adapted with permission from Ref. [20]. Copyright 2019, Elsevier.
Figure 4: Comparative thickness of plates SP14, SP15, and SP16.
Figure 5: Conceptual view of the Lacey et al. [20] experimental setup. Reprinted/adapted with permission from Ref. [20]. Copyright 2019, Elsevier.
Figure 6: Comparative analysis between the force–slip results of the Lacey et al. [20] experimental approach and the FE model mesh size results.
Figure 7: Loading and boundary conditions in ANSYS 2023 R2.
Figure 8: FE model (1.0, 2.0, and 4.0 mm) mesh sizes.
Figure 9: Comparison of slip behaviour between connection plate thickness cases, including the Lacey et al. [20] case (A0).
Figure 10: Deformation and buckling behaviour for models (A0) to (A4).
Figure 11: Comparison of (SP14) plate and bolt deformation across models (A0) to (A4).
Figure 12: Impact of plate thickness on interlocking IMC stiffness behaviour.
Figure 13: R² coefficients of polynomial fits for case (A0) against the Lacey et al. [20] empirical data and models (A1–A4).
Figure 14: Comparison of the predicted slip (∆) and the Lacey et al. [20] experimental results.
Figure 15: Comparison between the FE models’ (A1, A2, A3, and A4) slip and the polynomial-regression-predicted slips.
Figure 16: Comparison between the FE models’ (A1, A2, A3, and A4) slip and the random forest prediction.
Figure 17: Sensitivity analysis of stiffness attributes in interlocking IMCs across applied forces.
Figure 18: Comparative analysis of ‘Original FE Stiffness’ and filtered data across the force range.
Figure 19: Linear regression analysis of stiffness vs. applied force for attributes (A0 to A4).
Figure 20: Residuals and anomalies in stiffness attributes under varying forces.
Figure 21: Comparative analysis of Yield Strength, Tensile Strength, and Efficiency across plate thicknesses for IMCs in MSBs.
11 pages, 513 KiB  
Article
Interprofessional Faculty Development on Health Disparities: Engineering a Crossover “Jigsaw” Journal Club
by Jessica T. Servey and Gayle Haischer-Rollo
Educ. Sci. 2024, 14(5), 468; https://doi.org/10.3390/educsci14050468 - 28 Apr 2024
Viewed by 975
Abstract
Medical education acknowledges the need to teach physicians about “social determinants of health” and “health care disparities”. However, educators often lack actionable training to address this need. We describe a faculty development activity, a health disparities journal club, using the jigsaw strategy with the intent of increasing awareness, encouraging self-directed learning, and inspiring future teaching of the subject to health professional learners. We completed six workshops at six individual hospitals, with 95 total attendees from medicine and numerous other health professions. Our evaluation asked trainees to report the number of journal articles about health disparities they had read in the past 12 months (excluding the assigned journal club articles) and to predict their future plans for reading about health disparities. In total, 28.9% responded that they had “never read” a prior article on health or healthcare disparities, while 54.2% responded “1–5 articles”. Many (60%) reported they would continue to investigate this topic. Our experience has demonstrated the utility and positive impact of a “flipped classroom” jigsaw method, showing it can be used successfully in Inter-Professional (IPE) Faculty Development to increase active exposure to and discussion of the content. Additionally, this method promotes individual reflection and enhances continued collective engagement. Full article
Figure 1: (E) Original grouping of “Expert” groups: 1 article per group. (H) Grouping of “Home” groups (jigsaw groups), where someone from each article teaches the other members, so all members debrief all three articles.
16 pages, 2509 KiB  
Article
Effect of the Exterior Traffic Noises on the Sound Environment Evaluation in Office Spaces with Different Interior Noise Conditions
by Boya Yu, Yuying Chai and Chao Wang
Appl. Sci. 2024, 14(7), 3017; https://doi.org/10.3390/app14073017 - 3 Apr 2024
Cited by 1 | Viewed by 1152
Abstract
The present study focuses on the impact of exterior traffic noises on sound environment evaluation in office spaces, considering their interaction with interior noises. There were three interior noise conditions: silence, air-conditioner noise, and irrelevant speech noise. Six exterior traffic noises (road, maglev, tram, metro, conventional inter-city train, and high-speed train) were merged with interior noise clips to create the combined noise stimuli. Forty subjects participated in the experiment to assess the acoustic environment in office spaces exposed to multiple noises. The results showed that both interior and exterior noise significantly affected acoustic comfort and noise disturbance. As for the exterior traffic noise, both the traffic noise source and the noise level were found to be influential on both attributes. More temporally fluctuating traffic noises, such as high-speed train noise, were found to have a greater negative effect on subjective evaluations. Meanwhile, the interior noise source was also found to influence evaluations of the sound environment. Compared to the single traffic noise condition, irrelevant speech noise significantly increased the negative impact of traffic noises, while the air-conditioner noise had a neutral effect. In addition, participants in offices with speech noise were less sensitive to the traffic noise level. Full article
(This article belongs to the Special Issue Recent Advances in Soundscape and Environmental Noise)
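When interior and exterior clips are merged into a combined stimulus, the resulting level follows the usual energetic (logarithmic) addition of sound pressure levels; a small stdlib sketch with illustrative levels:

```python
import math

def combine_levels(levels_db):
    """Energetic (incoherent) sum of sound pressure levels in dB:
    L = 10 * log10( sum_i 10^(L_i / 10) )."""
    return 10.0 * math.log10(sum(10 ** (level / 10.0) for level in levels_db))

# e.g. 40 dB exterior traffic noise combined with 40 dB interior speech
print(round(combine_levels([40.0, 40.0]), 1))  # → 43.0
```

Two equal sources raise the level by about 3 dB, which is why combined-stimulus experiments calibrate each component level separately before mixing.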
Figure 1: Experimental environment setup.
Figure 2: The layout of the field recording site for speech noise and air-conditioner noise (S indicates the position of the recorder).
Figure 3: Spectrogram of noise clips extracted from the field recordings.
Figure 4: Sound reduction index of the filter applied to exterior traffic noises, simulating the sound insulation afforded by the building facade (data from [37]).
Figure 5: Spectrogram of exterior traffic noise clips after filtering and sound level calibration (LAeq,1 min = 40 dB).
Figure 6: Setup of the combined noise stimuli.
Figure 7: Experimental procedure.
Figure 8: Effect of traffic noise source and traffic noise level on sound environment evaluations. The symbol ‘*’ represents a significant difference at the 0.05 significance level in the pairwise comparison analysis (LSD method).
Figure 9: Effect of traffic noise level on sound environment evaluations in different traffic noise groups.
Figure 10: Effect of traffic noise level change on the change in sound environment evaluations in various traffic noise groups. Comfort40 − Comfort50 is the difference in acoustic comfort between the 40 dB group and the 50 dB group. The symbol ‘*’ represents a significant difference at the 0.05 significance level in the pairwise comparison analysis (LSD method).
Figure 11: Effect of traffic noise level on sound environment evaluations. The symbol ‘*’ represents a significant difference at the 0.05 significance level in the pairwise comparison analysis (LSD method).
Figure 12: Effect of interior noise condition on sound environment evaluations in different traffic noise source groups. The symbol ‘*’ represents a significant difference at the 0.05 significance level in the pairwise comparison analysis (LSD method).
Figure 13: Effect of the interior noise condition on sound environment evaluations in different traffic noise level groups. The symbol ‘*’ represents a significant difference at the 0.05 significance level in the pairwise comparison analysis (LSD method). Δ represents the difference between the SL groups and the IS groups.
18 pages, 3164 KiB  
Article
PixRevive: Latent Feature Diffusion Model for Compressed Video Quality Enhancement
by Weiran Wang, Minge Jing, Yibo Fan and Wei Weng
Sensors 2024, 24(6), 1907; https://doi.org/10.3390/s24061907 - 16 Mar 2024
Viewed by 1338
Abstract
In recent years, the rapid prevalence of high-definition video in Internet of Things (IoT) systems has been directly facilitated by advances in imaging sensor technology. To adapt to limited uplink bandwidth, most media platforms opt to compress videos to bitrate streams for transmission. However, this compression often leads to significant texture loss and artifacts, which severely degrade the Quality of Experience (QoE). We propose a latent feature diffusion model (LFDM) for compressed video quality enhancement, which comprises a compact edge latent feature prior network (ELPN) and a conditional noise prediction network (CNPN). Specifically, we first pre-train the ELPN to construct a latent feature space that captures rich detail information for representing sharpness latent variables. Second, we incorporate these latent variables into the prediction network to iteratively guide the generation direction; this resolves the problem that directly applying diffusion models to temporal prediction disrupts inter-frame dependencies, and thereby completes the modeling of temporal correlations. Lastly, we develop a Grouped Domain Fusion module that effectively addresses the diffusion distortion caused by naive cross-domain information fusion. Comparative experiments on the MFQEv2 benchmark validate our algorithm’s superior performance in terms of both objective and subjective metrics. By integrating with codecs and image sensors, our method can provide higher video quality. Full article
(This article belongs to the Section Sensing and Imaging)
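The "diffusion process" half of such a model has a standard closed form: x_t = √ᾱ_t · x_0 + √(1 − ᾱ_t) · ε, where ᾱ_t is the cumulative product of (1 − β_s) and ε is Gaussian noise; the conditional network then learns the reverse, denoising direction. A stdlib-only sketch under a hypothetical linear β schedule (not the paper's exact configuration):

```python
import math
import random

def forward_diffusion(x0, t, betas, seed=None):
    """Sample x_t from q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps,
    where alpha_bar_t is the cumulative product of (1 - beta_s)."""
    rng = random.Random(seed)
    alpha_bar = 1.0
    for beta in betas[: t + 1]:
        alpha_bar *= 1.0 - beta
    return [
        math.sqrt(alpha_bar) * v + math.sqrt(1.0 - alpha_bar) * rng.gauss(0.0, 1.0)
        for v in x0
    ]

# Hypothetical linear schedule over 1000 steps and a toy 1-D "frame"
betas = [0.0001 + i * (0.02 - 0.0001) / 999 for i in range(1000)]
x0 = [0.2, -0.5, 0.9]
x_early = forward_diffusion(x0, t=0, betas=betas, seed=0)    # barely noised
x_late = forward_diffusion(x0, t=999, betas=betas, seed=0)   # almost pure noise
```

At small t the sample stays close to the clean frame; by the final step almost all signal has been replaced by Gaussian noise, which is the regime the noise prediction network learns to invert step by step.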
Figure 1: The diffusion process and inverse diffusion process of diffusion models for compressed video frame restoration.
Figure 2: The overall architecture of the proposed LFDM. First, the current frame and neighboring frames are fed into the ELPN for pre-training. Second, the ELPN extracts the prior latent features and feeds them to the CNPN to direct its generation process. The details of the CNPN are illustrated in the figure. Finally, feature information from different domains is consolidated via “fusion”, comprehensively elaborated on in Section 4.3. Here, t ∼ Uniform{1, …, T} and is transformed into t_e through an MLP; FFB represents the frequency-domain filling block; (w/o c-a) denotes without cross-attention.
Figure 3: The overall structure of our proposed ELPNet. DWT refers to the Discrete Wavelet Transform and CA denotes the Channel Attention mechanism.
Figure 4: The structure of the fusion module. The left half is the image-domain features obtained from the neural network, the right half is the probability distribution features obtained from the diffusion model, and the center represents the fusion of the heterogeneous information.
Figure 5: Subjective comparison results between state-of-the-art methods and our proposed method on five video sequences at QP = 37. Test video names (from top to bottom): BasketballPass, Johanny, BQMall, Kimono, and Racehorses. A zoom-in of the red box area is shown.
Figure 6: Rate–distortion curves of four test sequences.
Figure 7: PSNR curves of HEVC, RFDA, BasicVSR++, and ours on the test sequence Cactus at QP = 37.
Figure 8: Subjective comparison images depicting the restoration with and without ELPNet intervention. Zoom-ins of the red box and yellow box areas are shown.
19 pages, 2849 KiB  
Article
A Lightweight Image Super-Resolution Reconstruction Algorithm Based on the Residual Feature Distillation Mechanism
by Zihan Yu, Kai Xie, Chang Wen, Jianbiao He and Wei Zhang
Sensors 2024, 24(4), 1049; https://doi.org/10.3390/s24041049 - 6 Feb 2024
Cited by 3 | Viewed by 1721
Abstract
In recent years, the development of image super-resolution (SR) has explored the capabilities of convolutional neural networks (CNNs). The current research tends to use deeper CNNs to improve performance. However, blindly increasing the depth of the network does not effectively enhance its performance. Moreover, as the network depth increases, more issues arise during the training process, requiring additional training techniques. In this paper, we propose a lightweight image super-resolution reconstruction algorithm (SISR-RFDM) based on the residual feature distillation mechanism (RFDM). Building upon residual blocks, we introduce spatial attention (SA) modules to provide more informative cues for recovering high-frequency details such as image edges and textures. Additionally, the output of each residual block is utilized as hierarchical features for global feature fusion (GFF), enhancing inter-layer information flow and feature reuse. Finally, all these features are fed into the reconstruction module to restore high-quality images. Experimental results demonstrate that our proposed algorithm outperforms other comparative algorithms in terms of both subjective visual effects and objective evaluation quality. The peak signal-to-noise ratio (PSNR) is improved by 0.23 dB, and the structural similarity index (SSIM) reaches 0.9607. Full article
(This article belongs to the Special Issue Deep Learning-Based Image and Signal Sensing and Processing)
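SSIM, the second quality index reported above, compares luminance, contrast, and structure between two images. A stdlib-only, single-window sketch (real implementations slide a local Gaussian window over the image and average; the pixel data here are illustrative):

```python
import statistics

def ssim_global(x, y, max_val=1.0):
    """Single-window structural similarity index between two equal-sized
    images given as flat lists of intensities in [0, max_val]."""
    c1 = (0.01 * max_val) ** 2  # stabilizing constants from the SSIM formula
    c2 = (0.03 * max_val) ** 2
    mx, my = statistics.fmean(x), statistics.fmean(y)
    vx, vy = statistics.pvariance(x), statistics.pvariance(y)
    cov = statistics.fmean((a - mx) * (b - my) for a, b in zip(x, y))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx * mx + my * my + c1) * (vx + vy + c2)
    )

reference = [0.1, 0.4, 0.8, 0.9, 0.3, 0.6]
degraded = [0.2, 0.3, 0.7, 1.0, 0.4, 0.5]
print(ssim_global(reference, reference))  # identical images score 1 by definition
print(ssim_global(reference, degraded))   # structural mismatch pulls it below 1
```

Scores near 1 (such as the 0.9607 reported above) indicate that the reconstructed image preserves the reference's local structure almost perfectly.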
Figure 1: The architecture of a single-image super-resolution network based on the residual feature distillation mechanism.
Figure 2: Residual feature distillation block.
Figure 3: Spatial attention module.
Figure 4: Effect of RFDB module structure on the model.
Figure 5: Results of ablation experiments on GFF and SA (validation set).
Figure 6: Image visual effects of different algorithms with scale factor ×2.
Figure 7: Image visual effects of different algorithms with scale factor ×3.
Figure 8: Image visual effects of different algorithms with scale factor ×4.
Figure 9: Comparison of network parameters and the PSNR correspondence for different algorithms.