Search Results (836)

Search Parameters:
Keywords = human movement analysis

25 pages, 5727 KiB  
Article
An Identification Method for Road Hypnosis Based on the Fusion of Human Life Parameters
by Bin Wang, Jingheng Wang, Xiaoyuan Wang, Longfei Chen, Chenyang Jiao, Han Zhang and Yi Liu
Sensors 2024, 24(23), 7529; https://doi.org/10.3390/s24237529 (registering DOI) - 25 Nov 2024
Abstract
A driver in road hypnosis exhibits two types of characteristics. The external characteristics are distinct and can be directly observed; the internal characteristics are indistinct and cannot. The eye movement characteristic, a distinct external characteristic, is one of the typical markers for identifying road hypnosis. The electroencephalogram (EEG) characteristic, an internal feature, is a gold-standard parameter among drivers' life parameters. This paper proposes an identification method for road hypnosis based on the fusion of human life parameters. Eye movement data and EEG data are collected through vehicle driving experiments and virtual driving experiments. The collected data are preprocessed with principal component analysis (PCA) and independent component analysis (ICA), respectively. A self-attention model (SAM) is trained on the eye movement data, and a deep belief network (DBN) on the EEG data. The road hypnosis identification model is constructed by combining the two trained models with the stacking method. Repeated Random Subsampling Cross-Validation (RRSCV) is used to validate the models. The results show that road hypnosis can be effectively recognized using the constructed model. This study is of great significance for revealing the essential characteristics and mechanisms of road hypnosis, and it improves the effectiveness and accuracy of road hypnosis identification.
(This article belongs to the Section Vehicular Sensing)
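The fusion described in the abstract (unimodal models combined by stacking) can be sketched as follows. This is a hedged illustration, not the authors' code: the features are synthetic, and small MLP classifiers stand in for the paper's self-attention model and deep belief network.

```python
# Hedged sketch of a stacked fusion pipeline (not the authors' code).
# Synthetic features; MLPClassifiers stand in for the paper's SAM and DBN.
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)                      # 1 = road hypnosis (synthetic)
eye = rng.normal(size=(200, 12)) + y[:, None]    # fake eye-movement features
eeg = rng.normal(size=(200, 32)) + y[:, None]    # fake EEG features

# Preprocessing as described: PCA for one modality, ICA for the other.
eye_p = PCA(n_components=5, random_state=0).fit_transform(eye)
eeg_p = FastICA(n_components=8, random_state=0).fit_transform(eeg)

eye_clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
eeg_clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)

# Stacking: out-of-fold probabilities from each unimodal model become the
# input features of a logistic-regression meta-learner.
meta_X = np.column_stack([
    cross_val_predict(eye_clf, eye_p, y, cv=5, method="predict_proba")[:, 1],
    cross_val_predict(eeg_clf, eeg_p, y, cv=5, method="predict_proba")[:, 1],
])
meta = LogisticRegression().fit(meta_X, y)
print(meta.score(meta_X, y))
```

Using out-of-fold predictions for the meta-learner avoids leaking the base models' training fit into the fusion step, which is the standard rationale for stacking.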
15 pages, 1266 KiB  
Review
Technology Innovation for Discovering Renal Autoantibodies in Autoimmune Conditions
by Maurizio Bruschi, Giovanni Candiano, Andrea Petretto, Andrea Angeletti, Pier Luigi Meroni, Marco Prunotto and Gian Marco Ghiggeri
Int. J. Mol. Sci. 2024, 25(23), 12659; https://doi.org/10.3390/ijms252312659 - 25 Nov 2024
Abstract
Autoimmune glomerulonephritis is a homogeneous area of renal pathology with clinical relevance in terms of its numerical impact and difficulties in its treatment. Systemic lupus erythematosus/lupus nephritis and membranous nephropathy are the two most frequent autoimmune conditions with clinical relevance. They are characterized by glomerular deposition of circulating autoantibodies that recognize glomerular antigens. Technologies for studying renal tissue and circulating antibodies have evolved over the years and have culminated in the direct analysis of antigen–antibody complexes in renal bioptic fragments. Initial studies utilized renal microdissection to obtain glomerular tissue. Obtaining immunoprecipitates after partial proteolysis of renal tissue is a recent evolution that eliminates the need for tissue microdissection. New technologies based on ‘super-resolution microscopy’ have added the possibility of directly analyzing the interaction between circulating autoantibodies and their target antigens in glomeruli. Peptide and protein arrays represent the new frontier for identifying new autoantibodies in circulation. Peptide arrays consist of 7.5 million aligned peptides of 16 amino acids each, covering the whole human proteome; protein arrays utilize, instead, a chip containing 26,000 structured proteins. An example of the application of the peptide array is the discovery in membranous nephropathy of many new circulating autoantibodies, including formin-like-1, a podosome protein implicated in macrophage movements. Studies that utilize protein arrays are in progress and will soon be published. The contribution of new technologies is expected to be relevant for extending our knowledge of the mechanisms involved in the pathogenesis of several autoimmune conditions. They may also add significant tools in clinical settings and modify the therapeutic handling of conditions that are not currently considered autoimmune.
Figure 1. Workflow utilized for characterizing glomerular antibodies microeluted from the kidney. Glomerular microdissection is the first step: (a,b) show a renal bioptic sample before and after microdissection, and (c) shows the glomerulus derived from the procedure. Glomerular extracts are then incubated with podocyte proteins previously separated by 2D electrophoresis and transferred to nitrocellulose membranes. Those spots that are recognized by immunoglobulin glomerular extracts undergo characterization by mass spectrometry.

Figure 2. Validation of antibodies microeluted from glomeruli and characterized by immunoblot and mass spectrometry is carried out by immunofluorescence on kidney biopsies. The example presented in this figure is the validation of alpha-enolase as an antigen in patients with lupus nephritis. In this case, alpha-enolase, stained in red, and IgG2, stained in green, have an intense yellow merge, which indicates that the two proteins interact in the tissue. Magnification: ×400.

Figure 3. Whole proteome peptide arrays consist of 7,499,126 peptides with 16 amino acids each that together cover the amino acid sequence of all the proteins coded by the human genome. Sera are incubated with all 7,499,126 peptides of the customized array, and the intensity of the relative fluorescence deriving from their interaction is aligned in sequence by informatic technologies to obtain the identification of a unique linear epitope corresponding to a specific protein. This figure shows the application of the peptide arrays for discovering new circulating antibodies in patients with membranous nephropathy, which is a unique example of the application of the array in human pathology [17].
18 pages, 2814 KiB  
Article
Impact of Situation Awareness Variations on Multimodal Physiological Responses in High-Speed Train Driving
by Wenli Dong, Weining Fang, Hanzhao Qiu and Haifeng Bao
Brain Sci. 2024, 14(11), 1156; https://doi.org/10.3390/brainsci14111156 - 20 Nov 2024
Viewed by 354
Abstract
Background: In safety-critical environments, human error is a leading cause of accidents, with the loss of situation awareness (SA) being a key contributing factor. Accurate SA assessment is essential for minimizing such risks and ensuring operational safety. Traditional SA measurement methods have limitations in dynamic real-world settings, while physiological signals, particularly EEG, offer a non-invasive, real-time alternative for continuous SA monitoring. However, the reliability of physiological-signal-based SA measurement depends on the accuracy of SA labeling. Objective: This study aims to design an effective SA measurement paradigm specific to high-speed train driving, investigate more accurate physiological-signal-based SA labeling methods, and explore the relationships between SA levels and key physiological metrics within the developed framework. Methods: This study recruited 19 male high-speed train driver trainees and developed an SA measurement paradigm specific to high-speed train driving. A method combining subjective SA ratings and task performance was introduced to generate accurate SA labels. Results: Statistical analysis confirmed the effectiveness of this paradigm in inducing SA level changes, revealing significant relationships between SA levels and key physiological metrics, including eye movement patterns, ECG features (e.g., heart rate variability), and EEG power spectral density across the theta, alpha, and beta bands. Conclusions: This study supports the use of multimodal physiological signals for SA assessment and provides a theoretical foundation for future applications of SA monitoring in railway operations, contributing to enhanced operational safety.
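The labeling step described in the abstract (combining subjective ratings with task performance; the paper's Figure 4 fits a two-component GMM to reaction times) can be sketched roughly as below. The data are synthetic and the crossover-threshold rule is an assumption for illustration.

```python
# Hedged sketch: two-component Gaussian mixture on reaction times (RT),
# with the high/low-SA threshold at the point where the posterior flips.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
rt = np.concatenate([rng.normal(0.6, 0.08, 300),   # fast responses (high SA)
                     rng.normal(1.2, 0.15, 200)])  # slow responses (low SA)

gmm = GaussianMixture(n_components=2, random_state=1).fit(rt.reshape(-1, 1))
lo, hi = np.sort(gmm.means_.ravel())

# Scan between the two component means for the posterior crossover.
grid = np.linspace(lo, hi, 1000).reshape(-1, 1)
post = gmm.predict_proba(grid)
threshold = grid[np.argmin(np.abs(post[:, 0] - post[:, 1]))][0]
print(round(float(threshold), 2))    # RTs above this would be labeled low SA
```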
Figure 1. Experimental environment for signal acquisition under a simulated driving environment.

Figure 2. Multimodal data acquisition and synchronization.

Figure 3. Experimental procedure.

Figure 4. A two-component GMM is fitted on RT, and the median of the SA subjective score is used to identify the threshold between high and low SA.

Figure 5. Trends in subjective fatigue and stress scores over time for different experimental conditions.

Figure 6. AOIs.

Figure 7. ET heatmaps at high and low SA levels.

Figure 8. Effect of SA variations on ECG features. *** indicates p < 0.001.

Figure 9. Effect of SA variations on EEG features.
23 pages, 17187 KiB  
Article
Human Daily Breathing Monitoring via Analysis of CSI Ratio Trajectories for WiFi Link Pairs on the I/Q Plane
by Wei Zhuang, Yuhang Lu, Yixian Shen and Jian Su
Sensors 2024, 24(22), 7352; https://doi.org/10.3390/s24227352 - 18 Nov 2024
Viewed by 415
Abstract
The measurement of human breathing is crucial for assessing the condition of the body, and it opens up possibilities for various intelligent applications such as advanced medical monitoring and sleep analysis. Conventional approaches relying on wearable devices tend to be expensive and inconvenient for users. Recent research has shown that inexpensive, commercially available WiFi devices can be used effectively for non-contact breathing monitoring. WiFi-based breathing monitoring, however, is highly sensitive to body motion during the breathing process, because current methods primarily extract breathing signals from the amplitude and phase variations of WiFi Channel State Information (CSI); these variations can be masked by body movements, leading to inaccurate counting of breathing cycles. To address this issue, we propose a method for extracting breathing signals based on the trajectories of two-chain CSI ratios on the I/Q plane. The method accurately monitors breathing by tracking and identifying the inflection points of the CSI ratio samples' trajectories on the I/Q plane throughout the breathing cycle. We propose a dispersion model to label and filter out CSI ratio samples representing significant motion interference, thereby enhancing the robustness of the breathing monitoring system. Furthermore, to obtain accurate breathing waveforms, we propose a method for fitting the trajectory curve of the CSI ratio samples; based on the fitted curve, a breathing segment extraction algorithm enables precise breathing monitoring. Our experimental results demonstrate that this approach achieves minimal error and significantly enhances the accuracy of WiFi-based breathing monitoring.
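The core idea (the ratio of CSI from two receive chains cancels phase noise common to both, and chest displacement then traces a repeating arc on the I/Q plane) can be illustrated with a synthetic sketch. This is not the paper's algorithm: the channel model, the numbers, and the zero-crossing breath counter are all assumptions made for illustration.

```python
# Synthetic illustration (assumptions throughout): two receive chains share
# the same oscillator phase noise, so their CSI ratio cancels it; a breathing
# displacement then shows up as a periodic angle on the I/Q plane.
import numpy as np

fs, dur, f_breath = 50, 30, 0.25                  # 50 Hz CSI, 30 s, 15 bpm
t = np.arange(0, dur, 1 / fs)
disp = 0.005 * np.sin(2 * np.pi * f_breath * t)   # chest displacement (m)
wavelength = 0.06                                 # ~5 GHz carrier

# Random-walk phase noise common to both chains.
pn = np.exp(1j * np.cumsum(np.random.default_rng(2).normal(0, 0.05, t.size)))
h1 = pn * (1.0 + 0.3 * np.exp(-2j * np.pi * 2 * disp / wavelength))
h2 = pn * (0.8 + 0.2 * np.exp(-2j * np.pi * (2 * disp / wavelength + 0.1)))

ratio = h1 / h2                        # common phase noise cancels exactly
ang = np.unwrap(np.angle(ratio))
ang -= ang.mean()                      # detrend before counting crossings
breaths = np.sum(np.diff(np.sign(ang)) != 0) // 2
print(breaths)                         # close to f_breath * dur = 7.5 cycles
```

The paper tracks trajectory inflection points rather than zero-crossings; the sketch only shows why the ratio, not raw amplitude or phase, carries a clean breathing signal.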
Figure 1. Breathing monitoring model. The green arrows indicate the LOS path signal and static reflection signal, while the red arrows indicate the dynamic reflection signal.

Figure 2. Time series plot of breathing signal detected using only amplitude or phase information. Subgraphs (A,B) exhibit the raw phase information and the original amplitude information of the CSI data, while subgraphs (C,D) depict smoothed versions of the phase and amplitude information.

Figure 3. The flow diagram of our proposed method.

Figure 4. Vector representation diagram of the CSI ratio signal.

Figure 5. Breathing pattern on the I/Q complex plane: (A) represents the unsmoothed trajectory, while (B) represents the smoothed trajectory.

Figure 6. Dispersion of sample points of CSI ratio trajectories for breathing processes containing large action perturbations.

Figure 7. The trajectory of CSI ratio sample points containing breathing information. The red boxes mark areas where some of the CSI ratio sample points are clustered and where the breath state is altered.

Figure 8. Trajectory of the CSI ratio after curve fitting. The red boxes indicate part of the breath transition region after fitting the curve.

Figure 9. Schematic diagram of the scene.

Figure 10. Extracted breathing pattern and recovered breathing signal using the trajectory tracking method: (A) denotes the respiratory pattern on the I/Q complex plane and (B) represents the curves of the CSI ratio and phase changes during breathing.

Figure 11. Path length change of the extracted breathing wave and the recovered breathing signal: (A) denotes the total path length of the change at each point on the I/Q plane, and (B) indicates the normalized breathing waveform.

Figure 12. The human chest at an angle of 90 degrees, 60 degrees, and 30 degrees to the vertical plane.

Figure 13. Comparison of ACC for four breathing monitoring methods in three different orientations.

Figure 14. Comparison of MAE for four breathing monitoring methods in three different orientations.

Figure 15. Comparison of AER for four breathing monitoring methods in three different orientations.

Figure 16. Schematic illustration of a scenario in which the human chest is in position 1, position 2, and position 3.

Figure 17. Comparison of ACC for four breathing monitoring methods in three different positions.

Figure 18. Comparison of MAE for four breathing monitoring methods in three different positions.

Figure 19. Comparison of AER for four breathing monitoring methods in three different positions.
14 pages, 10763 KiB  
Article
Prenatal Exposure to a Human Relevant Mixture of Endocrine-Disrupting Chemicals Affects Mandibular Development in Mice
by Vagelis Rinotas, Antonios Stamatakis, Athanasios Stergiopoulos, Carl-Gustaf Bornehag, Joëlle Rüegg, Marietta Armaka and Efthymia Kitraki
Int. J. Mol. Sci. 2024, 25(22), 12312; https://doi.org/10.3390/ijms252212312 - 16 Nov 2024
Viewed by 505
Abstract
The mandible is a bony structure of neuroectodermal origin with unique characteristics that support dentition and jaw movements. In the present study, we investigated the effects of gestational exposure to a mixture of endocrine-disrupting chemicals (EDCs) on mandibular growth in mice. The mixture under study (Mixture N1) has been associated with neurodevelopmental effects in both a human cohort and animal studies. Pregnant mice were exposed throughout gestation to 0.5× (times pregnant women's exposure levels), 10×, 100× and 500× of Mixture N1, or the vehicle, and the mandibles of the male offspring were studied in adulthood. Micro-CT analysis showed non-monotonic effects of Mixture N1 on the distances between specific mandibular landmarks and on the crown width of the M1 molar, as well as changes in the mandibular bone characteristics. The alveolar bone volume was reduced, and the trabecular separation was increased, in the 500× exposed mice. Bone volume in the condyle head was increased in all treated groups. The Safranin-O-stained area of mature hypertrophic chondrocytes and the width of their zones were reduced in the 0.5×, 10× and 100× exposed groups. This is the first indication that prenatal exposure to an epidemiologically defined EDC mixture associated with neurodevelopmental impacts can also affect mandibular growth in mammals.
(This article belongs to the Special Issue Progress in Research on Endocrine-Disrupting Chemicals)
Figure 1. Morphometric analysis of distance measurements in mandibles from adult mice prenatally exposed to 0.5×, 10×, 100× and 500× of Mixture N1 or the vehicle (DMSO). Quantification of the measurements. Data represent estimated marginal means ± SEM. Generalized linear models (GLMs) were performed for statistical analysis, followed by Bonferroni post hoc tests when appropriate. (a) Landmarks used in micro-CT analysis. (b) Distance from Go to Id was affected by the treatment (W4 = 26.548; p < 0.001) and was decreased in the 100× group vs. DMSO. (c) Distance from Me to MAP was affected by the treatment (W4 = 27.826; p < 0.001) and was increased in the groups 0.5× and 100×, compared to DMSO. (d) The distance from M3 molar to MF was affected by the treatment (W4 = 20.497; p < 0.001) and was increased in the 0.5× group. (e) The distance from M3 to Id was not significantly changed. (f) The M1 crown width was affected by the treatment (W4 = 66.279; p < 0.001) and was increased in the 0.5×, 100× and 500× groups. No significant effects were observed in the distances from Go to Pg (g), Go to Co (h), Go to Cd (i), Go to Cp (j), Me to Cp (k), MF to Cd (l), and in the crown width of molars M2 (m) and M3 (n). Go: gonion; Id: infradental; Me: menton; MAP: mandibular alveolar point; MF: mandibular foramen; Pg: pogonion; Co: dorsal condylar process; Cd: ventral condylar process; Cp: coronoid process; M: molar. # Statistically significant vs. DMSO group.

Figure 2. Representative 2D (a) and 3D (b) micro-CT images of trabecular alveolar bone from adult mice prenatally exposed to DMSO, 0.5×, 10×, 100× and 500× of Mixture N1. (c–h) Quantification of the measurements. Data represent estimated marginal means ± SEM. Generalized linear models (GLMs) were performed for statistical analysis, followed by Bonferroni post hoc tests when appropriate. (c) BV was affected by the treatment (W4 = 27.966, p < 0.001) and was decreased in 500× vs. DMSO. (d) BV/TV was affected by the treatment (W4 = 29.316, p < 0.001) and was decreased in 500× vs. DMSO. (f) Tb.S was affected by the treatment (W4 = 18.371, p = 0.001) and was increased in 500× vs. DMSO. Trabecular number (Tb.N) (e), thickness (Tb.Th) (g) and bone mineral density (BMD) (h) did not differ significantly from the DMSO group. # Statistically significant vs. DMSO group. Scale bars: 2a = 1 mm; 2b = 100 μm.

Figure 3. Representative 2D (a) and 3D (b) micro-CT images of cancellous bone in the condylar head from adult mice prenatally exposed to DMSO, 0.5×, 10×, 100× and 500× of Mixture N1. Quantification of the measurements (c–h). Data represent estimated marginal means ± SEM. Generalized linear models (GLMs), followed by Bonferroni post hoc tests when appropriate, were performed for statistical analysis. (c) BV was affected by the treatment (W4 = 29.733, p < 0.001) and increased in all treated groups as compared to DMSO. # denotes statistical significance vs. DMSO treated group. Scale bars: 3a = 400 μm; 3b = 100 μm.

Figure 4. (a) Representative micro-CT images of cortical bone of the mandible from adult mice prenatally exposed to DMSO, 0.5×, 10×, 100× and 500× of Mixture N1. Quantification of the measurements for Ct.BV (b), Ct.Th (c) and TMD (d). Data represent estimated marginal means ± SEM. Generalized linear models (GLMs) were performed for statistical analysis. Scale bar: 1 mm.

Figure 5. (a–c) Representative photomicrographs of the condylar head from adult mice prenatally exposed to DMSO, 0.5×, 10×, 100× and 500× of Mixture N1, stained with H&E (a) and Safranin-O-red/Methylene green (b,c). Higher magnification images in (c) correspond to the inset area of respective figures in (b). (d) Quantification of % condyle head area stained with Safranin-O. The Safranin-O-stained area was modified by the Mixture exposure (W4 = 39.939; p < 0.001). The % of Safranin-stained area was significantly reduced vs. DMSO in the groups of 0.5× (p = 0.002), 10× (p < 0.001) and 100× (p = 0.002), but not in the 500× group (p = 1.000). (e) The width of the hypertrophic chondrocytes' zone was modified by the Mixture exposure (W4 = 62.003; p < 0.001). It was significantly reduced vs. DMSO in the groups of 0.5×, 10× and 100× (p < 0.001 for all). Data represent estimated marginal means ± SEM. Generalized linear model (GLM) and Bonferroni post hoc test were performed for statistical analysis. # denotes statistical significance vs. DMSO treated group. Scale bars: 200 μm in (a,b); 50 μm in (c).
18 pages, 2224 KiB  
Article
Guided Decision Tree: A Tool to Interactively Create Decision Trees Through Visualization of Subsequent LDA Diagrams
by Miguel A. Mohedano-Munoz, Laura Raya and Alberto Sanchez
Appl. Sci. 2024, 14(22), 10497; https://doi.org/10.3390/app142210497 - 14 Nov 2024
Viewed by 468
Abstract
Decision trees are a widely used machine learning technique due to their ease of interpretation and construction. The method allows domain experts to learn from raw data, but its automatic nature, which implies minimal human intervention in its computation, prevents them from including their prior knowledge in the analysis. Conversely, interactive visualization methods have proven effective for gaining insights from data, as they incorporate the researcher's criteria into the analysis process. In an effort to combine both methodologies, we have developed a tool to manually build decision trees guided by successive visualizations of the data, mapped with linear discriminant analysis in combination with Star Coordinates so that the importance of each feature in the separation can be analyzed. Each node contains information about the features that can be used for a split and their cut-off values, so that they can be selected in a guided manner. In this way, it is possible to produce simpler and more expertly driven decision trees than those obtained by automatic methods. The resulting decision trees are smaller than those generated by automatic machine learning algorithms while obtaining similar accuracy, which improves their understandability. The tool developed and presented here facilitates the use of this method in real-world applications. Its usefulness is demonstrated through a case study with a complex dataset used for motion recognition, in which domain experts built their own decision trees by applying their prior knowledge and the visualizations provided by the tool during node construction. The resulting trees are more comprehensible and explainable, offering valuable insights into the data and confirming the relevance of upper body features and hand movements for motion recognition.
(This article belongs to the Special Issue AI Applied to Data Visualization)
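The LDA-plus-Star-Coordinates mapping the abstract describes can be sketched as below. This is a hypothetical reconstruction, not the tool's code: it assumes the LDA scaling matrix supplies one 2-D axis vector per feature, which scikit-learn exposes as `scalings_`.

```python
# Hypothetical reconstruction of the LDA + Star Coordinates mapping: each
# feature gets a 2-D axis vector from the LDA scaling matrix, and a record
# is placed at the weighted sum of its feature values along those axes.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)
lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)

axes = lda.scalings_[:, :2]      # one axis vector per feature (4 x 2)
points = X @ axes                # Star Coordinates placement of the records

# A long axis vector marks a feature that contributes strongly to the
# separation, i.e., a candidate splitting feature for the current node.
importance = np.linalg.norm(axes, axis=1)
print(importance.argmax())
```

Re-fitting the LDA on each node's subset, as the paper's Figure 2 suggests, would update the axes at every split, so the suggested feature changes as the tree grows.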
Show Figures

Figure 1

Figure 1
<p>Use of Star Coordinates to represent records. The first case shows how to apply it when all the features have the same weight and arbitrary orientation. The second case shows the application after use of the transformation matrix obtained from the LDA algorithm to define the feature weight and orientation in the mapping.</p>
Full article ">Figure 2
<p>Schematic of user interaction for node building. The method proposes applying LDA in every target node subset, represent it on Star Coordinates, and decide the partitioning feature from that visualization.</p>
Full article ">Figure 3
<p>Software architecture diagram. GDT presents a client–server architecture, based on Python libraries. Plotly Dash manages the presentation and service layers. The logic layer includes the scikit-learn and pandas libraries and the modification of entropy algorithms.</p>
Full article ">Figure 4
<p>GUI. The work space is divided into three areas: (1) management of the user interaction in the data upload, feature selection, and the manual tree creation; (2) interactive tree and LDA mapping viewer; (3) automatic decision trees and comparing performance settings.</p>
Full article ">Figure 5
<p>Data interaction. From these menus, the user can (<b>a</b>) prepare the data for the analysis and build the train and test sets; and (<b>b</b>) create nodes and extract information from the partition in the node. In (<b>a</b>), analysts can define the separator used in the data file and the feature class for supervised learning and select features for the analysis and the percentage of the training–testing set. In (<b>b</b>), they can define the parameters for the node construction and deletion and define it as a leaf node.</p>
Full article ">Figure 6
<p>Users can extract knowledge about how the subset is split in every step of the classification. In this case, we show the steps to creating a tree in the Iris dataset [<a href="#B33-applsci-14-10497" class="html-bibr">33</a>]. Each node is represented as a pie chart with colors indicating the different categories, and the size representing their rate. If the node is defined as a leaf node, the outline stoke is colored in green. If the last node on a branch has not been defined as a leaf node, its outline stoke is colored in red.</p>
Full article ">Figure 7
<p>Support decision visualization based on LDA and Star Coordinates for the Iris dataset [<a href="#B33-applsci-14-10497" class="html-bibr">33</a>].</p>
Full article ">Figure 8
<p>GDT allows the users to choose the representation for decision support in two classes of nodes. (<b>a</b>) SC representation of LDA after applying a jittering based on PCA in the same node. (<b>b</b>) Coordinated view of 1D LDA projection, distribution plot of samples, and values of features in linear transformation matrix. In this instance, the focus is on how it distinguishes between the versicolor (2, green) and virginica (3, red) classes within the iris dataset.</p>
Full article ">Figure 9
<p>Performance obtained by the automatic and manually built trees in the Iris dataset [<a href="#B33-applsci-14-10497" class="html-bibr">33</a>]. Each class consists of 50 balanced records before training and testing sets with a 70:30 ratio.</p>
Full article ">Figure 10
<p>Example of exported trees both manually obtained and automatically generated by means of scikit-learn from the Iris dataset [<a href="#B33-applsci-14-10497" class="html-bibr">33</a>]. The similar format allows domain experts to make a comparison at first sight. (<b>a</b>) Guided interactively built tree and (<b>b</b>) scikit-learn tree.</p>
Full article ">Figure 11
<p>Representation of the variables in the original dataset (<b>a</b>) and in the two scenarios used for decision tree construction (<b>b</b>). The set of variables has been reduced based on feature importance metrics.</p>
Full article ">Figure 12
<p>Resulting decision trees. (<b>a</b>) Decision tree built by domain experts by using GDT. (<b>b</b>) Decision tree built by using scikit-learn. The manually built tree (<b>a</b>) is simpler and easier to follow than the tree created by the algorithm (<b>b</b>).</p>
Full article ">Figure 13
<p>Performance comparison between the tree built by scikit-learn and manually, with six classes in the dataset. Each class starts balanced, with 1024 samples, and training and testing sets are created with a 70:30 ratio. The results of the manual tree are similar to those obtained with the automatic algorithm, but with a simpler tree built by a process that allows domain experts to participate in the process.</p>
Full article ">Figure 14
<p>Detail from the decision tree built by the domain experts. Highlighted in a red box are the first two steps where leaf nodes (pink and green) are defined for more than 27% of the total samples.</p>
Full article ">Figure 15
<p>Resulting decision trees for the whole dataset with 14 classes. (<b>a</b>) Guided Decision tree. (<b>b</b>) Scikit-learn decision tree.</p>
Full article ">Figure 16
<p>Performance comparison between the tree built by scikit-learn and manually, with 14 classes in the dataset. Each class starts balanced, with 885 samples, and training and testing sets are created with a 70:30 ratio. The results of the manual tree are slightly worse in this case, but the result is a simpler decision tree wherein the domain experts could apply their knowledge.</p>
Full article ">Figure 17
<p>Zoomed-in view of <a href="#applsci-14-10497-f015" class="html-fig">Figure 15</a>a, focusing on the early stages, where it can be seen, for instance in the nodes highlighted with the red boxes, that the variable ‘R_Hip_1’ is a decisive node.</p>
Full article ">
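The captions above contrast a guided, expert-built decision tree with one grown automatically by scikit-learn. At the core of any automatic builder is an impurity-minimizing split search; the toy sketch below (my own illustration, not code from the article) shows how a single numeric feature would be split by Gini impurity, the default criterion in scikit-learn's `DecisionTreeClassifier`:

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(values, labels):
    """Return (threshold, weighted_impurity) of the best binary split
    on one numeric feature, as an automatic tree builder would pick it."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    xs = [values[i] for i in order]
    ys = [labels[i] for i in order]
    best = (None, float("inf"))
    for i in range(1, len(xs)):
        if xs[i] == xs[i - 1]:
            continue
        thr = (xs[i] + xs[i - 1]) / 2.0  # midpoint between adjacent values
        left, right = ys[:i], ys[i:]
        w = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if w < best[1]:
            best = (thr, w)
    return best

# tiny synthetic feature: two well-separated classes
thr, imp = best_split([1, 2, 8, 9], [0, 0, 1, 1])  # thr == 5.0, imp == 0.0
```

A guided tool instead lets the expert choose the threshold, trading a little impurity for interpretability.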
14 pages, 9346 KiB  
Article
Human Perception of the Emotional Expressions of Humanoid Robot Body Movements: Evidence from Survey and Eye-Tracking Measurements
by Wa Gao, Shiyi Shen, Yang Ji and Yuan Tian
Biomimetics 2024, 9(11), 684; https://doi.org/10.3390/biomimetics9110684 - 8 Nov 2024
Viewed by 596
Abstract
The emotional expression of body movement, which is an aspect of emotional communication between humans, has not been considered enough in the field of human–robot interactions (HRIs). This paper explores human perceptions of the emotional expressions of humanoid robot body movements to study [...] Read more.
The emotional expression of body movement, which is an aspect of emotional communication between humans, has not been considered enough in the field of human–robot interactions (HRIs). This paper explores human perceptions of the emotional expressions of humanoid robot body movements to study the emotional design of the bodily expressions of robots and the characteristics of the human perception of these emotional body movements. Six categories of emotional behaviors, including happiness, anger, sadness, surprise, fear, and disgust, were designed by imitating human emotional body movements, and they were implemented on a Yanshee robot. A total of 135 participants were recruited for questionnaires and eye-tracking measurements. Statistical methods, including K-means clustering, repeated-measures analysis of variance (ANOVA), Friedman’s ANOVA, and Spearman’s correlation test, were used to analyze the data. According to the statistical results of emotional categories, intensities, and arousals perceived by humans, a guide to grading the designed robot’s bodily expressions of emotion is created. By combining this guide with certain objective analyses, such as fixation and trajectory of eye movements, the characteristics of human perception, including the perceived differences between happiness and negative emotions and the trends of eye movements for different emotional categories, are described. This study not only illustrates subjective and objective evidence that humans can perceive robot bodily expressions of emotions through vision alone but also provides helpful guidance for designing appropriate emotional bodily expressions in HRIs. Full article
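Among the statistical methods the abstract lists is Spearman’s correlation test. As a minimal sketch (with made-up ratings, not the study’s data), the tie-free form of the coefficient can be computed directly from rank differences:

```python
def spearman_rho(x, y):
    """Spearman's rank correlation for tie-free data:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    assert len(x) == len(y) and len(x) > 1

    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    n = len(x)
    return 1.0 - 6.0 * d2 / (n * (n ** 2 - 1))

# hypothetical perceived-intensity vs. arousal ratings for five behaviors
rho = spearman_rho([1, 2, 3, 4, 5], [2, 1, 4, 3, 5])  # rho == 0.8
```

With ties, an implementation would instead use average ranks (as library routines such as SciPy's do).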
Show Figures
Figure 1
<p>Yanshee robot.</p>
Full article ">Figure 2
<p>Designed robot’s emotional body movements.</p>
Full article ">Figure 3
<p>(<b>a</b>) Five AOIs of Yanshee robot. (<b>b</b>) Experimental scenario.</p>
Full article ">Figure 4
<p>M-values of self-reports obtained by questionnaires.</p>
Full article ">Figure 5
<p>Estimated marginal means of data obtained by participants’ self-reports in different emotional categories. (<b>a</b>–<b>f</b>) represent the results obtained from the adjusted emotional categories. The error bars in the legend represent the 95% confidence interval.</p>
Full article ">Figure 6
<p>(<b>a</b>–<b>d</b>) Estimated marginal means of average pupil diameters in the categories of happiness, anger, surprise, and fear, respectively. (<b>e</b>) Estimated marginal means of maximum pupil diameters for surprise. The error bars in the legend represent the 95% confidence interval.</p>
Full article ">Figure 7
<p>(<b>a</b>) Estimated marginal means of saccade counts. The error bars in the legend represent the 95% confidence interval. (<b>b</b>) AOIs with the highest first fixation durations.</p>
Full article ">Figure 8
<p>Heat maps for designed behaviors shown in <a href="#biomimetics-09-00684-t002" class="html-table">Table 2</a>.</p>
Full article ">Figure 9
<p>Trajectories of participants’ eye movement.</p>
Full article ">
21 pages, 3490 KiB  
Review
Mapping the Landscape of Biomechanics Research in Stroke Neurorehabilitation: A Bibliometric Perspective
by Anna Tsiakiri, Spyridon Plakias, Georgia Karakitsiou, Alexandrina Nikova, Foteini Christidi, Christos Kokkotis, Georgios Giarmatzis, Georgia Tsakni, Ioanna-Giannoula Katsouri, Sarris Dimitrios, Konstantinos Vadikolias, Nikolaos Aggelousis and Pinelopi Vlotinou
Biomechanics 2024, 4(4), 664-684; https://doi.org/10.3390/biomechanics4040048 - 8 Nov 2024
Viewed by 921
Abstract
Background/Objectives: The incorporation of biomechanics into stroke neurorehabilitation may serve to strengthen the effectiveness of rehabilitation strategies by increasing our understanding of human movement and recovery processes. The present bibliometric analysis of biomechanics research in stroke neurorehabilitation is conducted with the objectives of [...] Read more.
Background/Objectives: The incorporation of biomechanics into stroke neurorehabilitation may serve to strengthen the effectiveness of rehabilitation strategies by increasing our understanding of human movement and recovery processes. The present bibliometric analysis of biomechanics research in stroke neurorehabilitation is conducted with the objectives of identifying influential studies, key trends, and emerging research areas that would inform future research and clinical practice. Methods: A comprehensive bibliometric analysis was performed using documents retrieved from the Scopus database on 6 August 2024. The analysis included performance metrics such as publication counts and citation analysis, as well as science mapping techniques, including co-authorship, bibliographic coupling, co-citation, and keyword co-occurrence analyses. Data visualization tools such as VOSviewer and Power BI were utilized to map the bibliometric networks and trends. Results: A wealth of recent work has yielded substantial advancements in the application of brain–computer interfaces to electroencephalography and functional neuroimaging during stroke neurorehabilitation; these interfaces translate neural activity into control signals for external devices and provide critical insights into the biomechanics of motor recovery by enabling precise tracking and feedback of movement during rehabilitation. A sampling of the most impactful contributors and influential publications identified two leading countries of contribution: the United States and China. Three prominent research topic clusters were also noted: biomechanical evaluation and movement analysis, neurorehabilitation and robotics, and motor recovery and functional rehabilitation. Conclusions: The findings underscore the growing integration of advanced technologies such as robotics, neuroimaging, and virtual reality into neurorehabilitation practices. 
These innovations are poised to enhance the precision and effectiveness of therapeutic interventions. Future research should focus on the long-term impacts of these technologies and the development of accessible, cost-effective tools for clinical use. The integration of multidisciplinary approaches will be crucial in optimizing patient outcomes and improving the quality of life for stroke survivors. Full article
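The keyword co-occurrence analysis underlying Figures 6 and 7 boils down to counting how often pairs of author keywords appear in the same publication. A minimal sketch (with hypothetical keyword sets, not the study’s Scopus records):

```python
from collections import Counter
from itertools import combinations

def cooccurrence(keyword_lists):
    """Count how often each unordered pair of author keywords
    appears together in the same publication."""
    pairs = Counter()
    for kws in keyword_lists:
        # sort so each pair has one canonical orientation
        for a, b in combinations(sorted(set(kws)), 2):
            pairs[(a, b)] += 1
    return pairs

papers = [  # hypothetical keyword sets
    ["stroke", "rehabilitation", "robotics"],
    ["stroke", "rehabilitation", "eeg"],
    ["stroke", "robotics"],
]
net = cooccurrence(papers)
```

Tools like VOSviewer build their network maps from exactly such pair counts, then cluster and lay out the resulting graph.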
(This article belongs to the Section Injury Biomechanics and Rehabilitation)
Show Figures
Figure 1
<p>Visualization of the bibliometric data extraction process from the Scopus database.</p>
Full article ">Figure 2
<p>Annual number of publications on stroke neurorehabilitation biomechanics.</p>
Full article ">Figure 3
<p>Co-authorship network map based on countries as the unit of analysis. Each node represents a country, with node size reflecting the number of publications and line thickness indicating the strength of collaboration between countries. The color codes represent different clusters of countries that frequently collaborate on biomechanics and stroke neurorehabilitation research. Countries within the same color group have stronger co-authorship links with one another.</p>
Full article ">Figure 4
<p>Bibliographic coupling analysis using sources as the unit of analysis. The node size indicates the influence of each source, based on the number of shared references. The color codes represent different clusters of sources that share similar citation patterns. Journals or sources within the same color group have a higher degree of shared references, suggesting thematic or disciplinary similarity.</p>
Full article ">Figure 5
<p>Co-citation analysis focused on authors within biomechanics and stroke neurorehabilitation. The size of each node represents the frequency of an author’s co-citation with others. The color codes represent clusters of authors whose works are frequently cited together, indicating that their research is related or falls within a similar subfield.</p>
Full article ">Figure 6
<p>Co-occurrence analysis of author keywords, grouped into three distinct clusters.</p>
Full article ">Figure 7
<p>Co-occurrence network of author keywords in biomechanics and stroke rehabilitation research. Each node represents a keyword, with node size indicating the frequency of its occurrence. The thickness of the links reflects the strength of co-occurrence between keywords. The color codes group keywords into clusters that frequently co-occur in the same publications, representing distinct thematic areas within the research landscape.</p>
Full article ">
22 pages, 9430 KiB  
Article
Using Space Syntax and GIS to Determine Future Growth Routes of Cities: The Case of the Kyrenia White Zone
by Cem Doğu and Cemil Atakara
ISPRS Int. J. Geo-Inf. 2024, 13(11), 399; https://doi.org/10.3390/ijgi13110399 - 7 Nov 2024
Viewed by 481
Abstract
Cities are in constant development, both structurally and demographically, necessitating careful planning to enhance their orderliness and livability. This research focuses on identifying development directions and routes for the Kyrenia White Zone, situated between the sea and the mountains in northern Cyprus, a [...] Read more.
Cities are in constant development, both structurally and demographically, necessitating careful planning to enhance their orderliness and livability. This research focuses on identifying development directions and routes for the Kyrenia White Zone, situated between the sea and the mountains in northern Cyprus, a significant tourist area. The rapid implementation of zoning laws over different periods has led to swift development and population growth, resulting in various infrastructure challenges, particularly related to transportation. The primary aim of this study is to assess the current infrastructure issues within the zone, understand user perceptions, and identify key factors influencing future growth. Based on the collected data, we propose an alternative growth area for the future development plan of the city. Additionally, this research seeks to explore irregular urban developments and make informed design decisions for their future. The study employs two core methodologies: GIS and Space Syntax, a research method developed by Bill Hillier and Julienne Hanson in the 1970s to analyze human movement and perception. The existing map system of the Kyrenia White Zone was digitized, and essential geographical information was gathered. These data were analyzed using GIS and evaluated through the Space Syntax method. The analysis yielded alternative growth routes that address current challenges within the zone, accompanied by recommendations for enhancing its future development. Full article
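The Space Syntax measures in the figures below (connectivity, integration) derive from topological depth on the axial-line graph: the fewer steps a line is, on average, from all other lines, the more integrated it is. A minimal sketch of mean depth via breadth-first search, on a hypothetical three-line axial map (integration measures are inversely related to this value; the full Space Syntax normalization is omitted here):

```python
from collections import deque

def mean_depth(adj, start):
    """Mean topological depth of `start` in an axial-line adjacency
    graph: the average BFS distance to every other node."""
    dist = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    others = [d for node, d in dist.items() if node != start]
    return sum(others) / len(others)

# hypothetical axial map: three lines in a chain a - b - c
axial = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
# mean_depth(axial, "b") == 1.0 (central, most integrated)
# mean_depth(axial, "a") == 1.5 (peripheral)
```

Connectivity, by contrast, is simply `len(adj[node])` for each line.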
Show Figures
Figure 1
<p>Cyprus Island—Kyrenia City.</p>
Full article ">Figure 2
<p>Kyrenia City White Zone border.</p>
Full article ">Figure 3
<p>Kyrenia City White Zone depth and slope section. Slope section direction North-South.</p>
Full article ">Figure 4
<p>The view of the city of Girne and the Besparmak Range in earlier times (Source: Girne City Archive – Derya Oktay).</p>
Full article ">Figure 5
<p>Configurational relationships [<a href="#B1-ijgi-13-00399" class="html-bibr">1</a>]. (Letters inside the spaces label the individual spaces; panels (<b>a</b>–<b>e</b>) show different spatial relations and their connections, while (<b>d</b>,<b>e</b>) show graphically how the connection patterns differ.)</p>
Full article ">Figure 6
<p>Depths in different spatial organizations [<a href="#B1-ijgi-13-00399" class="html-bibr">1</a>]. (Panels (<b>a</b>–<b>c</b>) show progressively more complicated connections; (<b>d</b>) shows the connection pattern of the spaces, with the numbers indicating the depth of each space according to its reachability.)</p>
Full article ">Figure 7
<p>Analysis chart (author, Cem Doğu).</p>
Full article ">Figure 8
<p>Kyrenia White Zone connectivity analysis graph.</p>
Full article ">Figure 9
<p>Kyrenia White Zone connectivity analysis graph/axial line graph.</p>
Full article ">Figure 10
<p>Kyrenia White Zone integration analysis graph.</p>
Full article ">Figure 11
<p>Kyrenia White Zone integration analysis graph/axial line graph.</p>
Full article ">Figure 12
<p>Kyrenia White Zone intensity analysis graph.</p>
Full article ">Figure 13
<p>Kyrenia White Zone intensity analysis graph/axial line graph.</p>
Full article ">Figure 14
<p>Kyrenia White Zone line length analysis graph.</p>
Full article ">Figure 15
<p>Kyrenia White Zone line length analysis graph/axial line graph.</p>
Full article ">Figure 16
<p>Kyrenia White Zone line road proposal/analysis graph. (The road in the red circle is the suggested route to be integrated into the existing system).</p>
Full article ">Figure 17
<p>Kyrenia White Zone development axis. (The red arrows show the necessity of the city to grow in this direction along with the proposed road route).</p>
Full article ">
17 pages, 2330 KiB  
Article
Decoding Motor Skills: Video Analysis Unveils Age-Specific Patterns in Childhood and Adolescent Movement
by Luca Russo, Massimiliano Micozzi, Ghazi Racil, Alin Larion, Elena Lupu, Johnny Padulo and Gian Mario Migliaccio
Children 2024, 11(11), 1351; https://doi.org/10.3390/children11111351 - 5 Nov 2024
Viewed by 840
Abstract
Motor skill development is crucial in human growth, evolving with the maturation of the nervous and musculoskeletal systems. Quantifying these skills, especially coordinative abilities, remains challenging. This study aimed to assess the performance of five motor tasks in children and adolescents using high-speed [...] Read more.
Motor skill development is crucial in human growth, evolving with the maturation of the nervous and musculoskeletal systems. Quantifying these skills, especially coordinative abilities, remains challenging. This study aimed to assess the performance of five motor tasks in children and adolescents using high-speed video analysis, providing data for movement and health professionals. Seventy-two volunteers were divided into three age groups: 27 first-grade primary school students (19 males and 8 females, aged 6.5 ± 0.5 years), 35 fourth-grade primary school students (16 males and 19 females, aged 9.2 ± 0.4 years), and 28 second-year middle school students (16 males and 12 females, aged 13.0 ± 0.3 years). Participants performed five motor tasks: standing long jump, running long jump, stationary ball throw, running ball throw, and sprint running. Each task was recorded at 120 frames per second and analyzed using specialized software to measure linear and angular kinematic parameters. Quantitative measurements were taken in the sagittal plane, while qualitative observations were made using a dichotomous approach. Statistical analysis was performed using the Kruskal–Wallis and Mann–Whitney tests with Bonferroni correction. Significant differences were observed across age groups in various parameters. In the standing long jump, older participants exhibited a longer time between initial movement and maximum loading. The running long jump revealed differences in the take-off angle, with fourth-grade students performing the best. Ball-throwing tests indicated improvements in the release angle with age, particularly in females. Sprint running demonstrated the expected improvements in time and stride length with age. Gender differences were notable in fourth-grade students during the running long jump, with females showing greater knee flexion, while males achieved better take-off angles. 
Video analysis effectively identified age-related and gender-specific differences in motor skill performance. The main differences were measured between first-grade primary school and second-year middle school students, while gender differences were limited across the age groups. This method provides valuable insights into motor development trajectories and can be used by professionals to objectively assess and monitor the technical aspects of motor skills across different age groups. Full article
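The angular parameters the study measures (knee angle at maximum flexion, take-off angle) reduce to the angle at a joint between two limb segments, computed from digitized 2D landmarks. A minimal sketch with hypothetical sagittal-plane coordinates (not the study's software):

```python
import math

def joint_angle(p_prox, p_joint, p_dist):
    """Angle (degrees) at `p_joint` between the segments running to the
    proximal and distal landmarks, from 2D sagittal-plane coordinates."""
    v1 = (p_prox[0] - p_joint[0], p_prox[1] - p_joint[1])
    v2 = (p_dist[0] - p_joint[0], p_dist[1] - p_joint[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

# hypothetical digitized landmarks: hip, knee, ankle (arbitrary units)
knee = joint_angle((0.0, 1.0), (0.0, 0.0), (0.6, 0.8))
```

Time parameters such as ground contact time follow directly from frame counts divided by the 120 fps capture rate.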
Show Figures
Figure 1
<p>Sprint run (SR). 1: Running time to cover the central 6 m of the sprint. 2: Ground contact time. 3: Flight time. 4: Step length.</p>
Full article ">Figure 2
<p>Standing long jump (SLJ). 1: Time from the initial movement to maximum knee flexion. 2: Time from maximum knee flexion to take-off. 3: Flight time. 4: Knee angle at maximum flexion.</p>
Full article ">Figure 3
<p>Long jump with run-up (SLJ-R). 1: Ground contact time of the last 3 steps. 2: Knee angle at maximum flexion. 3: Take-off angle.</p>
Full article ">Figure 4
<p>Standing ball throw (BT). 1: Time from maximum posterior loading to release of the ball. 2: Angle of the ball at release.</p>
Full article ">Figure 5
<p>Standing ball throw with run-up (BT-R) 1: Horizontal distance between the front support foot and the throw line. 2: Angle of the ball at release. 3: Time from maximum posterior loading to release of the ball.</p>
Full article ">Figure 6
<p>Differences in the long jump with run-up (SLJ-R) between genders according to age groups: (<b>A</b>) values for maximum flexion knee angle before the take-off; (<b>B</b>) values for take-off angle. Note. * Significant differences between genders in fourth-grade primary school participants.</p>
Full article ">
24 pages, 4521 KiB  
Article
The Polarization Loop: How Emotions Drive Propagation of Disinformation in Online Media—The Case of Conspiracy Theories and Extreme Right Movements in Southern Europe
by Erik Bran Marino, Jesus M. Benitez-Baleato and Ana Sofia Ribeiro
Soc. Sci. 2024, 13(11), 603; https://doi.org/10.3390/socsci13110603 - 5 Nov 2024
Viewed by 981
Abstract
This paper examines the influence of emotions on political polarization, looking at online propagation of conspiracy thinking by extreme right movements in Southern Europe. Integrating insights from psychology, political science, media studies, and system theory, we propose the ‘polarization loop’, a causal mechanism [...] Read more.
This paper examines the influence of emotions on political polarization, looking at the online propagation of conspiracy thinking by extreme right movements in Southern Europe. Integrating insights from psychology, political science, media studies, and system theory, we propose the ‘polarization loop’, a causal mechanism explaining the cyclical relationship between extreme messages, emotional engagement, media amplification, and societal polarization. We illustrate the utility of the polarization loop by observing the use of the Great Replacement Theory by extreme right movements in Italy, Portugal, and Spain. We suggest possible options to mitigate the negative effects of online polarization on democracy, including public oversight of algorithmic decision-making, involving the social sciences and humanities in algorithmic design, and strengthening citizens’ resilience to prevent emotional overflow. We encourage interdisciplinary research in which historical analysis can guide computational methods such as Natural Language Processing (NLP), using Large Language Models fine-tuned consistently with political science research. Given the intimate nature of emotions, the focus of connected research should remain on structural patterns rather than individual behavior, making it explicit that results derived from this research cannot be applied as the basis for decisions, automated or not, that may affect individuals. Full article
(This article belongs to the Special Issue Disinformation in the Public Media in the Internet Society)
Show Figures
Figure 1
<p>The Feedback Loop in Political Communication.</p>
Full article ">Figure 2
<p>Giorgia Meloni’s Tweet on Ethnic Substitution.</p>
Full article ">Figure 3
<p>Tweet by Matteo Salvini on Ethnic Substitution.</p>
Full article ">Figure 4
<p>Tweet by Matteo Salvini on Ethnic Substitution.</p>
Full article ">Figure 5
<p>Chega’s tweet on alleged population replacement in Lisbon.</p>
Full article ">Figure 6
<p>Chega tweet emphasizing cultural conflicts over religious constructions.</p>
Full article ">Figure 7
<p>Tweet by Vox questioning the rationality of demographic replacement theories.</p>
Full article ">Figure 8
<p>Tweet by Vox claiming widespread recognition of demographic replacement in Spain.</p>
Full article ">
19 pages, 3586 KiB  
Article
Effect of Stimulus Regularities on Eye Movement Characteristics
by Bilyana Genova, Nadejda Bocheva and Ivan Hristov
Appl. Sci. 2024, 14(21), 10055; https://doi.org/10.3390/app142110055 - 4 Nov 2024
Viewed by 510
Abstract
Humans have the unique ability to discern spatial and temporal regularities in their surroundings. However, the effect of learning these regularities on eye movement characteristics has not been studied enough. In the present study, we investigated the effect of the frequency of occurrence [...] Read more.
Humans have the unique ability to discern spatial and temporal regularities in their surroundings. However, the effect of learning these regularities on eye movement characteristics has not been studied enough. In the present study, we investigated the effect of the frequency of occurrence and the presence of common chunks in visual images on eye movement characteristics like the fixation duration, saccade amplitude and number, and gaze number across sequential experimental epochs. The participants had to discriminate the patterns presented in pairs as the same or different. The order of pairs was repeated six times. Our results show an increase in fixation duration and a decrease in saccade amplitude in the sequential epochs, suggesting a transition from ambient to focal information processing as participants acquire knowledge. This transition indicates deeper cognitive engagement and extended analysis of the stimulus information. Interestingly, contrary to our expectations, the saccade number increased, and the gaze number decreased. These unexpected results might imply a reduction in the memory load and a narrowing of attentional focus when the relevant stimulus characteristics are already determined. Full article
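Studies of this kind commonly compare eye-movement measures (fixation duration, saccade count) across groups or epochs with the Kruskal–Wallis test, as the video-analysis article above also does. A minimal sketch of the H statistic for tie-free data (no tie correction; made-up durations, not the study's data):

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic for k independent samples:
    H = 12 / (N*(N+1)) * sum(R_i^2 / n_i) - 3*(N+1),
    where R_i is the rank sum of group i (assumes distinct values)."""
    pooled = sorted(x for g in groups for x in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}
    n = len(pooled)
    h = 0.0
    for g in groups:
        r = sum(rank[x] for x in g)
        h += r * r / len(g)
    return 12.0 / (n * (n + 1)) * h - 3.0 * (n + 1)

# hypothetical fixation durations (ms) in three experimental epochs
h = kruskal_wallis_h([[180, 190, 200], [210, 220, 230], [240, 250, 260]])
# h == 7.2; compared against a chi-squared distribution with k-1 = 2 df
```

Library routines (e.g., SciPy's) additionally correct for tied ranks.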
(This article belongs to the Special Issue Latest Research on Eye Tracking Applications)
Show Figures
Figure 1
<p>Pattern set used in the stimuli design. Each column contains patterns A, B, C, and D; each row—the patterns from different groups.</p>
Full article ">Figure 2
<p>Distribution of the fixation duration in the sequential epochs (1–6) of the experiment.</p>
Full article ">Figure 3
<p>Predicted fixation duration for the different pattern combinations with 95% credible interval.</p>
Full article ">Figure 4
<p>Predicted fixation duration for the different stimuli and epochs with 95% credible interval.</p>
Full article ">Figure 5
<p>Distribution of the saccade number in the sequential epochs (1–6) of the experiment.</p>
Full article ">Figure 6
<p>Predicted number of saccades for different pattern combinations.</p>
Full article ">Figure 7
<p>Predicted number of saccades for different stimuli and epochs with 95% credible interval.</p>
Full article ">Figure 8
<p>Distribution of the gaze number in the sequential epochs (1–6) of the experiment.</p>
Full article ">Figure 9
<p>Predicted number of gazes for different pattern combinations.</p>
Full article ">Figure 10
<p>Predicted number of gazes for different pattern combinations and epochs.</p>
Full article ">Figure 11
<p>Distribution of the saccade amplitude in the sequential epochs (1–6) of the experiment.</p>
Full article ">Figure 12
<p>Posterior distributions for the saccade amplitudes for all stimulus patterns: (<b>A</b>) for the stimuli from Group 1; (<b>B</b>) for the stimuli from Group 2; (<b>C</b>) for the stimuli from Group 3. The shaded regions correspond to the 90% credible intervals of the median.</p>
Full article ">
27 pages, 6449 KiB  
Article
In Vivo Insights: Near-Infrared Photon Sampling of Reflectance Spectra from Cranial and Extracranial Sites in Healthy Individuals and Patients with Essential Tremor
by Antonio Currà, Riccardo Gasbarrone, Davide Gattabria, Giuseppe Bonifazi, Silvia Serranti, Daniela Greco, Paolo Missori, Francesco Fattapposta, Alessandra Picciano, Andrea Maffucci and Carlo Trompetto
Photonics 2024, 11(11), 1025; https://doi.org/10.3390/photonics11111025 - 30 Oct 2024
Viewed by 453
Abstract
Near-infrared (NIR) spectroscopy is a powerful non-invasive technique for assessing the optical properties of human tissues, capturing spectral signatures that reflect their biochemical and structural characteristics. In this study, we investigated the use of NIR reflectance spectroscopy combined with chemometric analysis to distinguish [...] Read more.
Near-infrared (NIR) spectroscopy is a powerful non-invasive technique for assessing the optical properties of human tissues, capturing spectral signatures that reflect their biochemical and structural characteristics. In this study, we investigated the use of NIR reflectance spectroscopy combined with chemometric analysis to distinguish between patients with Essential Tremor (ET) and healthy individuals. ET is a common movement disorder characterized by involuntary tremors, often making it difficult to clinically differentiate from other neurological conditions. We hypothesized that NIR spectroscopy could reveal unique optical fingerprints that differentiate ET patients from healthy controls, potentially providing an additional diagnostic tool for ET. We collected NIR reflectance spectra from both extracranial (biceps and triceps) and cranial (cerebral cortex and brainstem) sites in ET patients and healthy subjects. Using Partial Least Squares Discriminant Analysis (PLS-DA) and Partial Least Squares (PLS) regression models, we analyzed the optical properties of the tissues and identified significant wavelength peaks associated with spectral differences between the two groups. The chemometric analysis successfully classified subjects based on their spectral profiles, revealing distinct differences in optical properties between cranial and extracranial sites in ET patients compared to healthy controls. Our results suggest that NIR spectroscopy, combined with machine learning algorithms, offers a promising non-invasive method for the in vivo characterization and differentiation of tissues in ET patients. Full article
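The PCA scores and loadings plots in the figures below project each spectrum onto the directions of greatest variance. For two features the first principal component has a closed form from the 2x2 covariance matrix, which the toy sketch below uses (my own illustration with made-up two-band readings; the study applies full-spectrum PCA and PLS):

```python
import math

def pca_2d(points):
    """First principal component of 2-feature data via the closed-form
    eigen-decomposition of the 2x2 sample covariance matrix:
    tan(2*theta) = 2*s_xy / (s_xx - s_yy)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in points) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
    # atan2 picks the branch that maximizes the projected variance
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    return math.cos(theta), math.sin(theta)  # PC1 loadings

# hypothetical two-band reflectance readings lying along one direction
vx, vy = pca_2d([(1, 2), (2, 4), (3, 6), (4, 8)])  # PC1 slope vy/vx == 2
```

With hundreds of wavelengths, the same idea is carried out by a general eigen- or singular-value decomposition, and the loadings indicate which wavelengths drive the separation between groups.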
Show Figures
Figure 1
<p>Average reflectance spectra collected from extracranial and cranial sites: (<b>a</b>) ET biceps vs. Normal biceps, (<b>b</b>) ET triceps vs. Normal triceps, (<b>c</b>) ET cortical vs. Normal cortical, and (<b>d</b>) ET brainstem vs. Normal brainstem.</p>
Full article ">Figure 2
<p>Principal Component Analysis (PCA) scores plot for the first two principal components, based on spectra collected at the extracranial/biceps site in both ET patients and healthy subjects (Normal) (<b>a</b>). The loadings plot for the first principal component (PC1) is shown in (<b>b</b>).</p>
Full article ">Figure 3
<p>Panel (<b>a</b>): scores plot of Principal Component Analysis (PCA) for the first two principal components, based on spectra collected from the extracranial/triceps site in both patients (ET) and healthy subjects (Normal). Panel (<b>b</b>): loadings plot for the first principal component.</p>
Full article ">Figure 4
<p>Panel (<b>a</b>): scores plot of Principal Component Analysis (PCA) for the first two components, based on spectra collected from the cranial/cortical site in both patients (ET) and healthy subjects (Normal). Panel (<b>b</b>): loadings plot for the first principal component.</p>
Full article ">Figure 5
<p>Panel (<b>a</b>): scores plot of Principal Component Analysis (PCA) for the first two components, based on spectra collected from the cranial/brainstem site in both patients (ET) and healthy subjects (Normal). Panel (<b>b</b>): loadings plot for the first principal component.</p>
Full article ">Figure 6
<p>VIP scores plot for “ET biceps/Normal biceps” (<b>a</b>) and “ET triceps/Normal triceps” (<b>b</b>).</p>
Full article ">Figure 7
<p>Panel (<b>a</b>) VIP scores plot for “ET cortical/Normal cortical”; panel (<b>b</b>) “ET brainstem/Normal brainstem”.</p>
Full article ">Figure 8
<p>Regression models for “ET biceps/Normal biceps” data based on age panel (<b>a</b>) and BMI panel (<b>c</b>), along with the corresponding VIP score plots for the age-based model panel (<b>b</b>) and the BMI-based model panel (<b>d</b>). Biceps (B) = ET biceps; Biceps (N) = Normal biceps.</p>
Full article ">Figure 9
<p>Regression models for “ET triceps/Normal triceps” data based on age panel (<b>a</b>) and BMI panel (<b>c</b>), including the VIP score plots for the age-based model panel (<b>b</b>) and the BMI-based model panel (<b>d</b>). Triceps (B) = ET triceps; Triceps (N) = Normal triceps.</p>
Full article ">Figure 10
<p>Regression models for “ET cortical/Normal cortical” data based on age panel (<b>a</b>) and BMI panel (<b>c</b>), along with VIP score plots for the age-based model panel (<b>b</b>) and the BMI-based model panel (<b>d</b>). Cerebral cortex (B) = ET cortical; Cerebral cortex (N) = Normal cortical.</p>
Full article ">Figure 10 Cont.
<p>Regression models for “ET cortical/Normal cortical” data based on age panel (<b>a</b>) and BMI panel (<b>c</b>), along with VIP score plots for the age-based model panel (<b>b</b>) and the BMI-based model panel (<b>d</b>). Cerebral cortex (B) = ET cortical; Cerebral cortex (N) = Normal cortical.</p>
Full article ">Figure 11
<p>Regression models for “ET brainstem/Normal brainstem” data based on age panel (<b>a</b>) and BMI panel (<b>c</b>), featuring the VIP score plots for the age-based model panel (<b>b</b>) and the BMI-based model panel (<b>d</b>). Mid-brain (B) = ET brainstem; Mid-brain (N) = Normal brainstem.</p>
Full article ">
20 pages, 5455 KiB  
Article
A New Iterative Algorithm for Magnetic Motion Tracking
by Tobias Schmidt, Johannes Hoffmann, Moritz Boueke, Robert Bergholz, Ludger Klinkenbusch and Gerhard Schmidt
Sensors 2024, 24(21), 6947; https://doi.org/10.3390/s24216947 - 29 Oct 2024
Viewed by 435
Abstract
Motion analysis is of great interest to a variety of applications, such as virtual and augmented reality and medical diagnostics. Hand movement tracking systems, in particular, are used as a human–machine interface. In most cases, these systems are based on optical or acceleration/angular [...] Read more.
Motion analysis is of great interest to a variety of applications, such as virtual and augmented reality and medical diagnostics. Hand movement tracking systems, in particular, are used as a human–machine interface. In most cases, these systems are based on optical or acceleration/angular speed sensors. These technologies are already well researched and used in commercial systems. In special applications, it can be advantageous to use magnetic sensors to supplement an existing system or even replace the existing sensors. The core of a motion tracking system is a localization unit. In magnetic systems, however, the relatively complex localization algorithms pose a problem, leading to a large computational load. In this paper, a new approach for pose estimation of a kinematic chain is presented. The new algorithm is based on spatially rotating magnetic dipole sources. A spatial feature is extracted from the sensor signal: the dipole direction for which the maximum field magnitude is detected at the sensor. This is introduced as the “maximum vector”. A relationship between this feature, the location vector (pointing from the magnetic source to the sensor position), and the sensor orientation is derived and subsequently exploited. By modelling the hand as a kinematic chain, the posture of the chain can be described in two ways: through the known magnetic correlations and through the structure of the kinematic chain. Both are bundled in an iterative algorithm with very low complexity. The algorithm was implemented in a real-time framework and evaluated in a simulation and in first laboratory tests. In tests without movement, it could be shown that there was no significant deviation between the simulated and estimated poses. In tests with periodic movements, an error in the range of 1° was found. Of particular interest here is the required computing power. This was evaluated in terms of the required computing operations and the required computing time. 
Initial analyses have shown that a computing time of 3 μs per joint is required on a personal computer. Lastly, the first laboratory tests basically prove the functionality of the proposed methodology. Full article
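The "maximum vector" feature can be illustrated numerically: for the standard point-dipole field model, sweep the dipole direction over a sphere and record the direction that maximizes the magnitude read by a 1D sensor. The sketch below is an illustrative brute-force version under these assumptions (the function names and the grid search are ours, not the paper's closed-form relation):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (SI)

def dipole_field(m, r):
    """Flux density of a point dipole with moment m at displacement r."""
    d = np.linalg.norm(r)
    r_hat = r / d
    return MU0 / (4 * np.pi * d**3) * (3 * np.dot(m, r_hat) * r_hat - m)

def maximum_vector(r, e_s, n=90):
    """Brute-force search for the unit dipole direction that maximizes the
    magnitude of the 1D sensor reading s = B . e_s (the "maximum vector")."""
    best_dir, best_val = None, -1.0
    for theta in np.linspace(0.0, np.pi, n):
        for phi in np.linspace(0.0, 2.0 * np.pi, 2 * n, endpoint=False):
            m = np.array([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)])
            s = abs(np.dot(dipole_field(m, r), e_s))
            if s > best_val:
                best_dir, best_val = m, s
    return best_dir

# Example: sensor 10 cm away on the x-axis, sensitive along x.
# On-axis, B . x is proportional to 2 * m_x, so the result is parallel to x.
e_max = maximum_vector(np.array([0.1, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
```

For this on-axis geometry the search returns a direction parallel to the x-axis, consistent with the maximum-vector idea; the paper replaces such an exhaustive search with a derived relationship between the maximum vector, the location vector, and the sensor orientation.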
(This article belongs to the Special Issue Sensor-Based Human Activity Recognition)
Figure 1
System overview: The illustration includes an external localization (blue) consisting of a defined setup of (here 8) sensors. The inner localization (red) consists of a 3D coil which is attached to the wrist as well as magnetic 1D sensors which are attached to each finger element. Following the localization, gesture recognition or processing of the data for the human–machine interface can be carried out.
Figure 2
Typical example of use of the presented algorithm: At the origin of the coordinate system a 3D magnetic transmitter is located. A kinematic chain is equipped with 1D magnetic sensors, such as fluxgate magnetometers or magnetoelectric sensors, on every chain element. The kinematic chains are connected through joints with ellipsoidal cross-sections, each providing two degrees of freedom. Any additional information from the kinematic chain about the position is used to increase the speed of the localization algorithm.
Figure 3
3D coil: (a) sketches the modelled simulation object. A photograph of the corresponding realization is shown in (b). Note that both constructions consist of three orthogonal coils.
Figure 4
Geometry used for the derivation: r and e_s both lie in the xy-plane. φ_m and θ_m define the orientation of the rotating magnetic dipole m in spherical coordinates.
Figure 5
The relation in Equation (17) is independent of the angle φ. Moreover, the unique relationship between the three unit vectors e_s, e_max, and e_r is clarified.
Figure 6
Blue vectors: Calculated sensor orientations e_s for different values of the sensor location r. The starting point of each blue vector represents the corresponding r. Yellow vector: The maximum vector at the origin, always polarized in the y-direction. Note that the lengths of the blue vectors are not of interest here, as only the directions are relevant.
Figure 7
Track of the magnetic dipole m(t) with starting point at the origin as a function of time. The tip of m moves on the surface of a sphere with radius m0, according to Equation (22) for N_ω = ω_θ/ω_φ = 10.
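The exact parametrization of Equation (22) is not reproduced in this listing; a common way to realize such a track, with the polar angle advancing N_ω times faster than the azimuth so that the dipole tip sweeps the whole sphere, is sketched below (the function name and the sampling are our assumptions):

```python
import numpy as np

def rotating_dipole_track(m0=1.0, n_omega=10, samples=1000):
    """Sample a dipole moment whose tip sweeps a sphere of radius m0;
    theta advances n_omega times faster than phi (N_w = w_theta / w_phi)."""
    t = np.linspace(0.0, 2.0 * np.pi, samples)  # one full phi revolution
    phi = t
    theta = n_omega * t
    return m0 * np.stack([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)], axis=1)

track = rotating_dipole_track()  # every sample lies on the sphere of radius m0
```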
Figure 8
The left figure shows an exemplary maximum vector at the origin (yellow), the position and orientation (orange) of the sensor, and the corresponding plane of zero-crossing vectors (purple). The right side shows the corresponding sensor signal as a function of time. The times at which a zero-crossing occurs are marked with a purple dot. The simulation works with a source driven with N_ω = ω_θ/ω_φ = 10. The absolute values/lengths are not relevant; only the relative relationship between the vectors and the zero crossings is of interest.
Figure 9
Flow chart of the iterative algorithm: The algorithm starts with a random initial orientation. Then, the sensor position relative to the source is determined. Afterwards, the corresponding orientation is calculated. When there is no relevant change between the data obtained with two subsequent iterations, convergence is reached, and this orientation is the estimated result.
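Structurally, the flow chart is a fixed-point iteration that alternates a position update and an orientation update until the orientation stops changing. The skeleton below mirrors only that structure; the actual magnetic update rules from the paper are left as user-supplied callables, and the demo closures are a toy contraction, not the paper's equations:

```python
import numpy as np

def estimate_pose(update_position, update_orientation, init_orientation,
                  tol=1e-6, max_iter=100):
    """Alternate position/orientation updates until the orientation stops
    changing (mirrors the flow chart; update rules come from the caller)."""
    orientation = init_orientation / np.linalg.norm(init_orientation)
    position = update_position(orientation)
    for _ in range(max_iter):
        position = update_position(orientation)
        new_orientation = update_orientation(position)
        new_orientation = new_orientation / np.linalg.norm(new_orientation)
        if np.linalg.norm(new_orientation - orientation) < tol:
            return new_orientation, position
        orientation = new_orientation
    return orientation, position

# Toy demo: a contraction that pulls the orientation toward a fixed target.
target = np.array([0.0, 0.0, 1.0])
orient, pos = estimate_pose(
    update_position=lambda o: 0.5 * o + 0.5 * target,  # hypothetical rule
    update_orientation=lambda p: p,                    # hypothetical rule
    init_orientation=np.array([1.0, 0.0, 0.0]))
```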
Figure 10
Exemplary iterative process: This figure shows the iterations for a simple setup. The first sub-figure shows the setup used with the coordinate system. The origin of this setup is located at the first kinematic chain element, where the source is also located. The source is attached to the first kinematic element in such a way that the relative position of the source to the kinematic chain is always constant. In the following figures, the coordinate system has been omitted. The blue vectors represent the potential poses for the detected MV. The light green construction shows the ground truth. The red vector is a normalized position vector of the sensor which points to the potential pose in this direction. The sensor is mounted on the second bone and is represented by a black rectangle with a vector in the sensitive direction. Subfigure (b) starts with a bone orientation in the x-direction. For some iterations, the kinematic chain and the related position vector are shown. After 15 iterations, subfigure (d), the sensor pose matches a potential pose (ground truth) and the algorithm converges.
Figure 11
Angular error as a function of the iteration number: The figure shows the behaviour of the angular error over the iterations for different setups of length ratios. The legend shows the corresponding Q for each curve. All curves tend closer to zero with each iteration.
Figure 12
Definition of the angles at a joint between two bones.
Figure 13
The relation between φ_s,j and φ_max,j is shown for different values of Q. In the ranges 0 &lt; Q &lt; 0.5 and Q &gt; 1 there is a clear assignment, i.e., a unique bidirectional relation between φ_max,j and φ_s,j. However, for Q between 0.5 and 1 the relation is non-unique.
Figure 14
For Q = 1, the maximum angular error does not approach zero even after several iterations, i.e., the algorithm is non-convergent.
Figure 15
Simulation overview: The simulation is divided into two sections. The upper (dark) part simulates the motion and the resulting field at the sensor. In the lower part (bright), the described algorithm is implemented and the pose is calculated.
Figure 16
Simulation of a motion: All elements are in the yz-plane. The 3D coil source is located at the origin. The first bone is aligned with the z-axis and its end represents the position of the joint. The second bone moves from 90° to −90° with respect to the axis of the first bone.
Figure 17
(a) shows both the estimated angle and the simulated one. The dark red line represents the simulation (ground truth) while the light red line is the estimation of the described algorithm. The latter is shifted by 10° to enhance the clarity of the visualization. In (b), the difference between the simulation and the estimation is plotted. We observe an error signal which follows the angle of the movement.
Figure 18
Upper figure: The built prototype consists of two PVC elements, connected to each other with a screw allowing for one degree of freedom. The 3D coil source is located at one end of the longer element. On the shorter element, a fluxgate magnetometer [24] is mounted. The illustration in the lower figure shows the assembly for a 30° position.
Figure 19
The box plots show the experimental results of the measured sensor angles for each of the seven given (ground-truth) joint angles: the median, the first quartile, the third quartile, the minimum, the maximum, and several outliers for each joint angle.
Figure 20
Possible angle ranges δ for two different lengths of the second bone at a fixed length of the first bone (Q is higher for the left realization).
19 pages, 1024 KiB  
Article
Comparison Between InterCriteria and Correlation Analyses over sEMG Data from Arm Movements in the Horizontal Plane
by Maria Angelova, Rositsa Raikova and Silvija Angelova
Appl. Sci. 2024, 14(21), 9864; https://doi.org/10.3390/app14219864 - 28 Oct 2024
Viewed by 588
Abstract
InterCriteria analysis (ICrA) and two kinds of correlation analyses, Pearson (PCA) and Spearman (SCA), were applied to surface electromyography (sEMG) signals obtained from human arm movements in the horizontal plane. Ten healthy participants performed ten movements, eight of which were cyclic. Each cyclic movement (CM) consisted of flexion and extension phases of equal duration (10 s, 6 s, 2 s, and 1 s) and two 5 s rest poses between them. The CMs were performed with and without an added load of 0.5 kg on the wrists of the participants. The sEMG signals from six different muscles or separate muscle heads (m. deltoideus pars clavicularis, m. deltoideus pars spinata, m. brachialis, m. anconeus, m. biceps brachii, and m. triceps brachii long head) were recorded and used to compare the results of the ICrA, PCA, and SCA. All three methods found identical consonance pairs for the flexion and extension CM phases. Additionally, PCA detected two more consonance pairs in the extension phases. In this investigation, ICrA, PCA, and SCA proved to be reliable tools when applied separately or in combination to sEMG data. These three methods are appropriate both for studying arm movements in the horizontal plane and for revising the experimental protocol. Full article
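Pearson quantifies linear association, while Spearman is Pearson applied to the ranks and therefore captures any monotone relation, which is why the two can disagree on sEMG measures. A minimal self-contained sketch (the helper names are ours; SciPy's scipy.stats.pearsonr and scipy.stats.spearmanr provide the same coefficients):

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length 1D signals."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(np.dot(xc, yc) / np.sqrt(np.dot(xc, xc) * np.dot(yc, yc)))

def spearman(x, y):
    """Spearman rank correlation: Pearson applied to the ranks
    (no tie handling, which is fine for continuous-valued signals)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(x), rank(y))

# Monotone but nonlinear relation between two synthetic signals:
x = np.linspace(0.0, 1.0, 200)
y = np.exp(5.0 * x)
r_p = pearson(x, y)   # below 1: the relation is not linear
r_s = spearman(x, y)  # equals 1: the relation is strictly increasing
```

Because Spearman depends only on the ordering of the samples, it is 1 for any strictly increasing relation, while Pearson drops as the relation departs from a straight line.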
Figure 1
(a) Starting position for motor tasks MT 3 to MT 10, shown with the 0.5 kg weight attached for movements MT 7, MT 8, MT 9, and MT 10; (b) outline of the trajectory of the cyclic motor tasks. The shoulder (1), elbow (2), and wrist joints (3 and 4) are indicated with green dots. The endpoints 3 and 4 are where the wrist is located during the pose phases.
Figure 2
sEMG data from Delcla, Delspi, BIC, TRI, ANC, and BRA for eight different flexion (a,c) and extension phases (b,d) for two participants, P1 and P10.
Figure 3
Identical results obtained by ICrA, PCA, and SCA for twenty-eight (a) flexion phases and (b) extension phases.