Search Results (2,089)

Search Parameters:
Keywords = IMU

28 pages, 4202 KiB  
Article
Know Your Grip: Real-Time Holding Posture Recognition for Smartphones
by Rene Hörschinger, Marc Kurz and Erik Sonnleitner
Electronics 2024, 13(23), 4596; https://doi.org/10.3390/electronics13234596 - 21 Nov 2024
Abstract
This paper introduces a model that predicts four common smartphone-holding postures, aiming to enhance user interface adaptability. It is unique in being completely independent of platform and hardware, utilizing the inertial measurement unit (IMU) for real-time posture detection based on sensor data collected around tap gestures. The model identifies whether the user is holding and operating the smartphone with one hand or using both hands in different configurations. For model training and validation, sensor time series data undergo extensive feature extraction, including statistical, frequency, magnitude, and wavelet analyses. These features are incorporated into 74 distinct sets, tested across various machine learning frameworks—k-nearest neighbors (KNN), support vector machine (SVM), and random forest (RF)—and evaluated for their effectiveness using metrics such as cross-validation scores, test accuracy, Kappa statistics, confusion matrices, and ROC curves. The optimized model demonstrates a high degree of accuracy, successfully predicting the holding hand with a 95.7% success rate. This approach highlights the potential of leveraging sensor data to improve mobile user experiences by adapting interfaces to natural user interactions. Full article
(This article belongs to the Special Issue Applied Machine Learning in Intelligent Systems)
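The pipeline described in this abstract—statistical, frequency, and magnitude features extracted from IMU windows around tap gestures, then fed to classifiers such as a random forest—can be illustrated with a minimal sketch. The window length, feature set, and random placeholder data below are assumptions for illustration only, not the authors' configuration.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def extract_features(window):
    # window: (n_samples, 6) array of accelerometer xyz + gyroscope xyz around one tap
    feats = []
    for axis in range(window.shape[1]):
        sig = window[:, axis]
        spectrum = np.abs(np.fft.rfft(sig))
        feats += [sig.mean(), sig.std(), sig.min(), sig.max(),   # statistical features
                  float(spectrum[1:].argmax() + 1),              # dominant frequency bin
                  np.sqrt((sig ** 2).mean())]                    # magnitude (RMS) feature
    return np.array(feats)

# Placeholder data: 200 tap windows of 128 samples, labels 0-3 for the four postures
rng = np.random.default_rng(0)
windows = [rng.standard_normal((128, 6)) for _ in range(200)]
labels = rng.integers(0, 4, size=200)
X = np.vstack([extract_features(w) for w in windows])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("5-fold CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())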
16 pages, 1721 KiB  
Article
Comparison of Velocity and Estimated One Repetition Maximum Measured with Different Measuring Tools in Bench Presses and Squats
by Roland van den Tillaar, Hallvard Nygaard Falch and Stian Larsen
Sensors 2024, 24(23), 7422; https://doi.org/10.3390/s24237422 - 21 Nov 2024
Viewed by 62
Abstract
The aim of this study was to compare barbell velocities at different intensities and estimated 1-RM with actual 1-RM measured with different measuring tools in bench presses and squats. Fourteen resistance-trained athletes (eight men, six women, age 28.1 ± 7.5 years, body mass 78.1 ± 12.2 kg, body height 1.73 ± 0.09 m) performed bench presses and squats at five loads varying from 45 to 85% of one repetition maximum (1-RM), together with 1-RM testing, while measuring mean, mean propulsive, and peak barbell velocity with six different commercially used inertial measurement units (IMUs) and linear encoder software systems attached to the barbell. The 1-RM was also estimated based upon the load–velocity regression, which was compared with the actual 1-RM in the bench press and squat exercises. The main findings were that GymAware revealed the highest reliability along with minimal bias, while Musclelab and Vmaxpro showed moderate reliability with some variability at higher loads. Speed4lifts and PUSH band indicated greater variability, specifically at higher intensities. Furthermore, in relation to the second aim of the study, significant discrepancies were found between actual and estimated 1-RM values, with Speed4lifts and Musclelab notably underestimating 1-RM. These findings underscore the importance of selecting reliable tools for accurate velocity-based training and load prescription. Full article
Figure 1: Bland–Altman plot with the mean bias and limits of agreement for mean propulsive, mean, and peak velocity between T-Force and the other velocity measurement devices in the squat; a negative value means that T-Force measures lower than the compared tool.
Figure 2: Bland–Altman plot with the mean bias and limits of agreement for mean propulsive, mean, and peak velocity between T-Force and the other velocity measurement devices in the bench press.
Figure 3: Average mean propulsive, mean, and peak (±SD) velocities per load in squats, per measuring tool.
Figure 4: Average mean propulsive, mean, and peak velocities per load in bench presses, per measuring tool.
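The 1-RM estimation from a load–velocity regression mentioned in the abstract, and the Bland–Altman agreement statistics shown in Figures 1 and 2, follow standard formulas. The sketch below illustrates both; the loads, velocities, and the minimal-velocity threshold are invented example values, not data from the study.

import numpy as np

# Example load-velocity profile (loads in kg, mean concentric velocity in m/s)
loads = np.array([60.0, 70.0, 80.0, 90.0, 100.0])
velocity = np.array([0.85, 0.72, 0.58, 0.45, 0.32])

# Fit load as a linear function of velocity and extrapolate to an assumed
# minimal velocity threshold (MVT) to estimate 1-RM
slope, intercept = np.polyfit(velocity, loads, 1)
mvt = 0.17                                   # assumed bench-press MVT in m/s
print(f"Estimated 1-RM: {slope * mvt + intercept:.1f} kg")

# Bland-Altman agreement between two devices measuring the same repetitions
device_a = np.array([0.84, 0.71, 0.57, 0.44, 0.31])
device_b = np.array([0.86, 0.73, 0.60, 0.46, 0.34])
diff = device_a - device_b
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"Bias: {bias:.3f} m/s, limits of agreement: [{bias - loa:.3f}, {bias + loa:.3f}] m/s")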
17 pages, 5560 KiB  
Communication
Leveraging Sensor Technology to Characterize the Postural Control Spectrum
by Christopher Aliperti, Josiah Steckenrider, Darius Sattari, James Peterson, Caspian Bell and Rebecca Zifchock
Sensors 2024, 24(23), 7420; https://doi.org/10.3390/s24237420 - 21 Nov 2024
Viewed by 67
Abstract
The purpose of this paper is to describe ongoing research on appropriate instrumentation and analysis techniques to characterize postural stability, postural agility, and dynamic stability, which collectively comprise the postural control spectrum. This study had a specific focus on using emerging sensors to develop protocols suitable for use outside laboratory or clinical settings. First, we examined the optimal number and placement of wearable accelerometers for assessing postural stability. Next, we proposed metrics and protocols for assessing postural agility with the use of a custom force plate-controlled video game. Finally, we proposed a method to quantify dynamic stability during walking tasks using novel frequency-domain metrics extracted from acceleration data obtained with a single body-worn IMU. In each of the three studies, a surrogate for instability was introduced, and the sensors and metrics discussed in this paper show promise for differentiating these trials from stable condition trials. Next steps for this work include expanding the tested population size and refining the methods to even more reliably and unobtrusively characterize postural control status in a variety of scenarios. Full article
Figure 1: The three components of postural control as defined in this paper, examples of each element, and the sensors used to collect data for each.
Figure 2: IMU placement; the lower-back IMU is lightened to indicate placement on the back of the subject.
Figure 3: For some trials, participants stood on a thick foam mat to elicit a destabilizing effect.
Figure 4: IMU (accelerometer)-derived trace with the force plate-derived COP trace for an example trial.
Figure 5: (a) Aerial view of the postural agility assessment game; (b) image of gameplay; (c) test setup consisting of a portable (USB-powered) force plate and screen.
Figure 6: (a) Asymmetric loading condition and (b) foam walking surface with specified centerline for dynamic stability trials.
Figure 7: Acceleration FFT plots for four IMU placements, frequency zones, and peak frequencies for the ankle case of a single subject.
Figure 8: Correlation values for the head, back, and chest sensors for total excursion (A) and circular area of best fit (B).
Figure 9: Medial–lateral and anterior–posterior total excursions for (a) hard and (b) soft surfaces.
Figure 10: Penalty vs. mTNSP for all trials including means, with trial number coded by color.
Figure 11: Frequency ratios for low and high cadence for the ankle, chest, and head sensors in the four dynamic walking trials.
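The frequency-domain dynamic-stability metrics described above start from the FFT of a body-worn accelerometer trace (Figure 7). A minimal sketch of extracting a peak gait frequency is shown below; the sampling rate and synthetic signal are placeholders, not the study's data or its exact metrics.

import numpy as np

fs = 100.0                                    # assumed IMU sampling rate in Hz
t = np.arange(0.0, 30.0, 1.0 / fs)
# Synthetic stand-in for a vertical acceleration trace during walking
acc = np.sin(2 * np.pi * 1.8 * t) + 0.1 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(acc - acc.mean()))
freqs = np.fft.rfftfreq(acc.size, d=1.0 / fs)
peak_freq = freqs[spectrum.argmax()]
print(f"Peak gait frequency: {peak_freq:.2f} Hz")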
13 pages, 2433 KiB  
Article
Multi-Model Gait-Based KAM Prediction System Using LSTM-RNN and Wearable Devices
by Doyun Jung, Cheolwon Lee and Heung Seok Jeon
Appl. Sci. 2024, 14(22), 10721; https://doi.org/10.3390/app142210721 - 19 Nov 2024
Viewed by 330
Abstract
The purpose of this study is to develop an optimized system for predicting Knee Adduction Moment (KAM) using wearable Inertial Measurement Unit (IMU) sensors and Long Short-Term Memory (LSTM) RNN. Traditional KAM measurement methods are limited by the need for complex laboratory equipment and significant time and cost investments. This study proposes two systems for predicting Knee Adduction Moment based on wearable IMU sensor data and gait patterns: the Multi-model Gait-based KAM Prediction System and the Single-model KAM Prediction System. The Multi-model system pre-classifies different gait patterns and uses specific prediction models tailored for each pattern, while the Single-model system handles all gait patterns with one unified model. Both systems were evaluated using IMU sensor data and GRF data collected from participants in a controlled laboratory environment. The overall performance of the Multi-model Gait-based KAM Prediction System showed an approximately 20% improvement over the Single-model KAM Prediction System. Specifically, the RMSE for the Multi-model system was 6.84 N·m, which is lower than the 8.82 N·m of the Single-model system, indicating a better predictive accuracy. The Multi-model system also achieved a MAPE of 8.47%, compared with 12.95% for the Single-model system, further demonstrating its superior performance. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Biomedical Informatics)
Figure 1: Overview of the Multi-model Gait-based KAM Prediction System.
Figure 2: Laboratory environment with motion camera (A) and force plate (B) installed.
Figure 3: Detailed sensor attachment locations for the participants.
Figure 4: Collecting datasets with IMU and GRF sensors.
Figure 5: Organization of models in the Multi-model Gait-based KAM Prediction System.
Figure 6: Confusion matrix of the Multi-model Gait-based KAM Prediction System (Knee Thrust 97.15%, Normal 94.03%, Toe in 92.59%, Toe out 89.77%, Trunk lean 82.23%).
Figure 7: Comparison of actual vs. predicted KAM values for the Multi-model and Single-model systems; the close alignment in the Multi-model system indicates higher prediction accuracy across gait patterns.
Figure 8: Predictive accuracy visualization for both systems; Multi-model data points cluster more closely along the y = x line.
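A minimal sketch of an LSTM-based regressor that maps a window of IMU gait data to a KAM value is shown below. The channel count, window length, and layer sizes are assumptions for illustration, not the paper's architecture.

import torch
import torch.nn as nn

class KAMRegressor(nn.Module):
    def __init__(self, n_channels=12, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # regress KAM from the final time step

model = KAMRegressor()
window = torch.randn(8, 100, 12)          # 8 gait windows, 100 samples, 12 IMU channels
print(model(window).shape)                # torch.Size([8, 1])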
22 pages, 5386 KiB  
Article
A Novel Multi-Sensor Nonlinear Tightly-Coupled Framework for Composite Robot Localization and Mapping
by Lu Chen, Amir Hussain, Yu Liu, Jie Tan, Yang Li, Yuhao Yang, Haoyuan Ma, Shenbing Fu and Gun Li
Sensors 2024, 24(22), 7381; https://doi.org/10.3390/s24227381 - 19 Nov 2024
Viewed by 261
Abstract
Composite robots often encounter difficulties due to changes in illumination, external disturbances, reflective surface effects, and cumulative errors. These challenges significantly hinder their capabilities in environmental perception and the accuracy and reliability of pose estimation. We propose a nonlinear optimization approach to overcome these issues to develop an integrated localization and navigation framework, IIVL-LM (IMU, Infrared, Vision, and LiDAR Fusion for Localization and Mapping). This framework achieves tightly coupled integration at the data level using inputs from an IMU (Inertial Measurement Unit), an infrared camera, an RGB (Red, Green and Blue) camera, and LiDAR. We propose a real-time luminance calculation model and verify its conversion accuracy. Additionally, we designed a fast approximation method for the nonlinear weighted fusion of features from infrared and RGB frames based on luminance values. Finally, we optimize the VIO (Visual-Inertial Odometry) module in the R3LIVE++ (Robust, Real-time, Radiance Reconstruction with LiDAR-Inertial-Visual state Estimation) framework based on the infrared camera’s capability to acquire depth information. In a controlled study, using a simulated indoor rescue scenario dataset, the IIVL-LM system demonstrated significant performance enhancements in challenging luminance conditions, particularly in low-light environments. Specifically, the average RMSE ATE (Root Mean Square Error of absolute trajectory Error) improved by 23% to 39%, with reductions from 0.006 to 0.013. At the same time, we conducted comparative experiments using the publicly available TUM-VI (Technical University of Munich Visual-Inertial Dataset) without the infrared image input. It was found that no leading results were achieved, which verifies the importance of infrared image fusion. By maintaining the active engagement of at least three sensors at all times, the IIVL-LM system significantly boosts its robustness in both unknown and expansive environments while ensuring high precision. This enhancement is particularly critical for applications in complex environments, such as indoor rescue operations. Full article
(This article belongs to the Special Issue New Trends in Optical Imaging and Sensing Technologies)
Figure 1: IIVL-LM system framework applied to the composite robot.
Figure 2: Schematic diagram of each module of the IIVL-LM system.
Figure 3: Feature extraction performance of (a) RGB images at a normalized illuminance of 0.148 and (b) infrared images at a normalized illuminance of 0.853.
Figure 4: Weight-based nonlinear interpolation frame method.
Figure 5: Optimized Visual-Inertial Odometry (VIO).
Figure 6: IIVL-LM system and sensors deployed on composite robots: (a) multi-sensor suite; (b) composite robots.
Figure 7: Comparison of (a) X-axis data, (b) Y-axis data, and (c) actual running trajectory of the composite robot under the IIVL-LM system.
Figure 8: Feature extraction results in the VIO module using RGB, infrared, and depth images under different lighting conditions in a small-scale indoor simulated environment.
Figure 9: (a) Real-time reconstruction process and (b) reconstructed radiance map of the small-scale indoor environment.
Figure 10: Results under different illuminances: (a) RMSE ATE of all methods; (b) comparison between methods and the overall average.
Figure 11: Results under multiple TUM-VI sequences: (a) RMSE ATE of all methods; (b) comparison between methods and the overall average.
Figure 12: The test scenario on ORB-SLAM3.
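The luminance-driven weighting between RGB and infrared feature streams can be sketched as a smooth blending function. The sigmoid below is only an illustration of the idea, since the paper's actual nonlinear weighting and its fast approximation are not given in this listing; the midpoint and steepness are assumed parameters.

import numpy as np

def rgb_weight(luminance, midpoint=0.5, steepness=10.0):
    # Map normalized scene luminance in [0, 1] to an RGB feature weight in (0, 1);
    # the infrared stream receives the complementary weight.
    return 1.0 / (1.0 + np.exp(-steepness * (luminance - midpoint)))

for lum in (0.15, 0.50, 0.85):
    w_rgb = rgb_weight(lum)
    print(f"luminance={lum:.2f}  w_rgb={w_rgb:.2f}  w_ir={1.0 - w_rgb:.2f}")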
16 pages, 4963 KiB  
Article
Simultaneous Localization and Mapping Methods for Snake-like Robots Based on Gait Adjustment
by Chaoquan Tang, Zhipeng Zhang, Meng Sun, Menggang Li, Hongwei Tang and Deen Bai
Biomimetics 2024, 9(11), 710; https://doi.org/10.3390/biomimetics9110710 - 19 Nov 2024
Viewed by 273
Abstract
Snake robots require autonomous localization and mapping capabilities for field applications. However, the characteristics of their motion, such as large turning angles and fast rotation speeds, can lead to issues like drift or even failure in positioning and map building. In response to this situation, this paper starts from the gait motion characteristics of the snake robot itself, proposing an improved gait motion method and a tightly coupled method based on IMU and visual information to solve the problem of poor algorithm convergence caused by head-shaking in snake robot SLAM. Firstly, the adaptability of several typical gaits of the snake robot to SLAM methods was evaluated. Secondly, the serpentine gait was selected as the object of gait improvement, and a head stability control method for the snake robot was proposed, thereby reducing the interference of the snake robot’s motion on the sensors. Thirdly, a visual–inertial tightly coupled SLAM method for the snake robot’s serpentine gait and Arc-Rolling gait was proposed, and the method was verified to enhance the robustness of the visual SLAM algorithm and improve the positioning and mapping accuracy of the snake robot. Finally, experiments proved that the methods proposed in this paper can effectively improve the accuracy of positioning and map building for snake robots. Full article
Figure 1: Principle of head stability control.
Figure 2: Angular differential variation.
Figure 3: Comparison of SLAM simulation under the serpentine gait.
Figure 4: Comparison of SLAM simulation under the arc-rolling gait.
Figure 5: The experimental system of the snake robot.
Figure 6: Comparison of localization and mapping under the serpentine gait.
Figure 7: Comparison of localization and mapping under the two methods.
Figure 8: Comparison of localization and mapping under the arc-rolling gait.
Figure 9: Localization and mapping results under the arc-rolling gait.
21 pages, 2496 KiB  
Review
Transportation Mode Detection Using Learning Methods and Self-Contained Sensors: Review
by Ilhem Gharbi, Fadoua Taia-Alaoui, Hassen Fourati, Nicolas Vuillerme and Zebo Zhou
Sensors 2024, 24(22), 7369; https://doi.org/10.3390/s24227369 - 19 Nov 2024
Viewed by 250
Abstract
Due to increasing traffic congestion, travel modeling has gained importance in the development of transportation mode detection (TMD) strategies over the past decade. Nowadays, smartphones, equipped with integrated inertial measurement units (IMUs) and embedded algorithms, can play a crucial role in such development. In particular, obtaining much more information on the transportation modes used by users through smartphones is very challenging due to the variety of the data (accelerometers, magnetometers, gyroscopes, proximity sensors, etc.), the standardization issue of datasets and the pertinence of learning methods for that purpose. Reviewing the latest progress on TMD systems is important to inform readers about recent datasets used in detection, best practices for classification issues and the remaining challenges that still impact the detection performances. Existing TMD review papers until now offer overviews of applications and algorithms without tackling the specific issues faced with real-world data collection and classification. Compared to these works, the proposed review provides some novelties such as an in-depth analysis of the current state-of-the-art techniques in TMD systems, relying on recent references and focusing particularly on the major existing problems, and an evaluation of existing methodologies for detecting travel modes using smartphone IMUs (including dataset structures, sensor data types, feature extraction, etc.). This review paper can help researchers to focus their efforts on the main problems and challenges identified. Full article
Figure 1: Processing pipeline for predicting transportation modes.
Figure 2: Transforming raw sensor time series into feature space through segmentation (window partitioning) and feature extraction [35].
Figure 3: Resultant acceleration in Tram [31].
Figure 4: Resultant acceleration in Walk [31].
Figure 5: Resultant acceleration in Car [31].
Figure 6: Resultant acceleration in Motorcycle [31].
Figure 7: Sensor placement for the Perscido dataset [23].
Figure 8: Sensor placement for the SHL dataset [27].
Figure 9: Android applications: (a) Phyphox, (b) Physics Toolbox Suite, and (c) Sensorlogger.
13 pages, 2889 KiB  
Article
Assessing Changes in Motor Function and Mobility in Individuals with Parkinson’s Disease After 12 Sessions of Patient-Specific Adaptive Dynamic Cycling
by Younguk Kim, Brittany E. Smith, Lara Shigo, Aasef G. Shaikh, Kenneth A. Loparo and Angela L. Ridgel
Sensors 2024, 24(22), 7364; https://doi.org/10.3390/s24227364 - 19 Nov 2024
Viewed by 226
Abstract
Background and Purpose: This pilot randomized controlled trial evaluated the effects of 12 sessions of patient-specific adaptive dynamic cycling (PSADC) versus non-adaptive cycling (NA) on motor function and mobility in individuals with Parkinson’s disease (PD), using inertial measurement unit (IMU) sensors for objective assessment. Methods: Twenty-three participants with PD (13 in the PSADC group and 10 in the NA group) completed the study over a 4-week period. Motor function was measured using the Kinesia™ sensors and the MDS-UPDRS Motor III, while mobility was assessed with the TUG test using OPAL IMU sensors. Results: The PSADC group showed significant improvements in MDS-UPDRS Motor III scores (t = 5.165, p < 0.001) and dopamine-sensitive symptoms (t = 4.629, p = 0.001), whereas the NA group did not improve. Both groups showed non-significant improvements in TUG time. IMU sensors provided continuous, quantitative, and unbiased measurements of motor function and mobility, offering a more precise and objective tracking of improvements over time. Conclusions: PSADC demonstrated enhanced treatment effects on PD motor function compared to NA while also reducing variability in individual responses. The integration of IMU sensors was essential for precise monitoring, supporting the potential of a data-driven, individualized exercise approach to optimize treatment outcomes for individuals with PD. Full article
(This article belongs to the Special Issue Advanced Wearable Sensor for Human Movement Monitoring)
Figure 1: CONSORT flowchart of participant progression through the trial; 24 participants were randomized (PSADC n = 14, NA n = 10), 13 PSADC participants completed the intervention and follow-up, and one discontinued.
Figure 2: MDS-UPDRS Motor III score changes for the PSADC and NA groups, between-group differences, and a histogram of score changes (improvement shown as negative values).
Figure 3: MDS-UPDRS Motor III dopamine-sensitive and dopamine less-sensitive symptom scores; the PSADC group decreased significantly post-intervention while the NA group did not.
Figure 4: Movement speed, rhythm, and amplitude scores for the PSADC and NA groups pre- and post-intervention.
Figure 5: (A) TUG test time and (B) turn velocity for the PSADC and NA groups.
22 pages, 12893 KiB  
Article
Research on Visual–Inertial Measurement Unit Fusion Simultaneous Localization and Mapping Algorithm for Complex Terrain in Open-Pit Mines
by Yuanbin Xiao, Wubin Xu, Bing Li, Hanwen Zhang, Bo Xu and Weixin Zhou
Sensors 2024, 24(22), 7360; https://doi.org/10.3390/s24227360 - 18 Nov 2024
Viewed by 386
Abstract
As mining technology advances, intelligent robots in open-pit mining require precise localization and digital maps. Nonetheless, significant pitch variations, uneven highways, and rocky surfaces with minimal texture present substantial challenges to the precision of feature extraction and positioning in traditional visual SLAM systems, owing to the intricate terrain features of open-pit mines. This study proposes an improved SLAM technique that integrates visual and Inertial Measurement Unit (IMU) data to address these challenges. The method incorporates a point–line feature fusion matching strategy to enhance the quality and stability of line feature extraction. It integrates an enhanced Line Segment Detection (LSD) algorithm with short segment culling and approximate line merging techniques. The combination of IMU pre-integration and visual feature restrictions is executed inside a tightly coupled visual–inertial framework utilizing a sliding window approach for back-end optimization, enhancing system robustness and precision. Experimental results demonstrate that the suggested method improves RMSE accuracy by 36.62% and 26.88% on the MH and VR sequences of the EuRoC dataset, respectively, compared to ORB-SLAM3. The improved SLAM system significantly reduces trajectory drift in the simulated open-pit mining tests, improving localization accuracy by 40.62% and 61.32%. The results indicate that the proposed method demonstrates significance. Full article
(This article belongs to the Section Sensors and Robotics)
Figure 1: System framework diagram covering data input, front-end visual–inertial odometry, closed-loop detection, back-end optimization, and mapping.
Figure 2: Example of the line feature extraction optimization method, using short line elimination and approximate line segment merging.
Figure 3: (a) Flowchart of the improved LSD line feature detection algorithm; (b) schematic of similar line feature merging.
Figure 4: Visual observation model and IMU schematic; IMU data are pre-integrated in discrete time because their acquisition rate is much higher than the camera's.
Figure 5: Marginalization model: the relationship between the camera and landmark locations during marginalization.
Figure 6: Performance comparison of the line feature extraction algorithm: (a) average extraction time; (b) average number of extracted line features.
Figure 7: Line feature extraction comparison of (a) the LSD algorithm and (b) the enhanced LSD method, which removes redundant short segments while preserving longer segments essential for localization.
Figure 8: Histogram of absolute trajectory error; the enhanced algorithm has lower error on the MH sequences and is comparable or better on the VR sequences.
Figure 9: Trajectory error comparison for (a) Sequence MH_04_difficult and (b) Sequence V2_03_difficult.
Figure 10: Absolute pose error for (a) MH_04_difficult, (b) MH_05_difficult, (c) V1_02_medium, and (d) V1_03_difficult.
Figure 11: Three-dimensional point cloud maps for (a) MH_05_difficult and (b) V1_03_difficult.
Figure 12: Experimental mining intelligent robot platform: (a) left view; (b) front view.
Figure 13: Scene 1, circular open-pit excavation: (a) real-world scene; (b) movement trajectory diagram with checkpoints A, B, and C.
Figure 14: Comparison of trajectory errors in Scenario 1.
Figure 15: Experimental outcomes in Scenario 1: (a) 2D plane trajectories; (b) absolute trajectory error.
Figure 16: Scene 2, uneven road conditions in an open-pit mine: (a) real-world scene; (b) movement trajectory diagram with checkpoints A, B, and C.
Figure 17: Comparison of trajectory errors in Scenario 2.
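The tightly coupled back end relies on IMU pre-integration between camera keyframes. The sketch below shows the idea in its simplest Euler form, with no gravity, noise, or bias terms and a first-order rotation update; a real VIO system would use the on-manifold formulation with bias Jacobians.

import numpy as np

def preintegrate(acc, gyro, dt):
    # acc, gyro: (N, 3) body-frame samples between two keyframes; dt: sample period
    R = np.eye(3)                # rotation of the current body frame w.r.t. the first frame
    v = np.zeros(3)              # pre-integrated velocity
    p = np.zeros(3)              # pre-integrated position
    for a, w in zip(acc, gyro):
        p = p + v * dt + 0.5 * (R @ a) * dt ** 2
        v = v + (R @ a) * dt
        wx, wy, wz = w * dt
        dR = np.array([[1.0, -wz,  wy],
                       [ wz, 1.0, -wx],
                       [-wy,  wx, 1.0]])   # first-order rotation increment
        R = R @ dR
    return R, v, p

# 100 synthetic IMU samples between two keyframes at 200 Hz
R, v, p = preintegrate(np.random.randn(100, 3) * 0.01, np.random.randn(100, 3) * 0.01, 1.0 / 200.0)
print(p)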
24 pages, 5276 KiB  
Article
An Improved LKF Integrated Navigation Algorithm Without GNSS Signal for Vehicles with Fixed-Motion Trajectory
by Haosu Zhang, Zihao Wang, Shiyin Zhou, Zhiying Wei, Jianming Miao, Lingji Xu and Tao Liu
Electronics 2024, 13(22), 4498; https://doi.org/10.3390/electronics13224498 - 15 Nov 2024
Viewed by 526
Abstract
Without a GNSS (global navigation satellite system) signal, the integrated navigation system in vehicles with a fixed trajectory (e.g., railcars) is limited to the use of micro-electromechanical system-inertial navigation system (MEMS-INS) and odometer (ODO). Due to the significant measurement error of the MEMS inertial device and the inability of ODO to output attitude, the positioning error is generally large. To address this problem, this paper presents a new integrated navigation algorithm based on a dynamically constrained Kalman model. By analyzing the dynamics of a railcar, several new observations have been investigated, including errors of up and lateral velocity, centripetal acceleration, centripetal D-value (difference value), and an up-gyro bias. The state transition matrix and observation matrix for the error state model are represented. To improve navigation accuracy, virtual noise technology is applied to correct errors of up and lateral velocity. The vehicle-running experiment conducted within 240 s demonstrates that the positioning error rate of the dead-reckoning method based on MEMS-INS is 83.5%, whereas the proposed method exhibits a rate of 4.9%. Therefore, the accuracy of positioning can be significantly enhanced. Full article
Figure 1: Schematic diagram of the b frame of a railcar.
Figure 2: (a) Schematic of the railcar running in a straight line; (b) schematic of the turning movement of the railcar.
Figure 3: Diagrammatic sketch of the angular speed of rotation around the OZb axis of the railcar.
Figure 4: Flow chart of the integrated navigation algorithm applicable to the railcar in SNRE.
Figure 5: Physical photo of the FOG-INS.
Figure 6: MEMS-INS: (a) schematic diagram; (b) physical photo; (c) structural composition.
Figure 7: Output of the up-gyroscope in the MEMS-IMU.
Figure 8: Speed curve calculated by FOG-IMU/GNSS integrated navigation.
Figure 9: (a) Track graph; (b) enlarged graph near the star marker; (c) enlarged view near the curved road segment; (d) enlarged graph near the end; (e) track graph.
Figure 10: (a) Attitude angles and (b) misalignment angles calculated by MEMS-INS/GNSS integrated navigation.
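One of the constraints exploited above—that a rail vehicle's lateral and vertical body velocities are essentially zero—can be applied as a pseudo-measurement in a Kalman update. The sketch below assumes a hypothetical error-state layout with velocity errors at indices 1 and 2 and toy-sized matrices; it illustrates the constrained-update idea, not the paper's filter.

import numpy as np

def pseudo_velocity_update(x, P, v_body, R_meas):
    # x, P: error-state mean and covariance; v_body: estimated body-frame velocity (x, y, z)
    H = np.zeros((2, x.size))
    H[0, 1] = 1.0                           # lateral velocity error (assumed state index 1)
    H[1, 2] = 1.0                           # up velocity error (assumed state index 2)
    z = -v_body[1:3]                        # constraint: true lateral/up velocity is zero
    S = H @ P @ H.T + R_meas
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(x.size) - K @ H) @ P
    return x_new, P_new

x0 = np.zeros(9)                            # toy 9-state error vector
P0 = np.eye(9) * 0.1
x1, P1 = pseudo_velocity_update(x0, P0, v_body=np.array([5.0, 0.2, -0.1]), R_meas=np.eye(2) * 0.01)
print(x1[:3])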
15 pages, 11432 KiB  
Article
A Triangular Structure Constraint for Pedestrian Positioning with Inertial Sensors Mounted on Foot and Shank
by Jianyu Wang, Jing Liang, Chao Wang, Wanwei Tang, Mingzhe Wei and Yiling Fan
Electronics 2024, 13(22), 4496; https://doi.org/10.3390/electronics13224496 - 15 Nov 2024
Viewed by 291
Abstract
To suppress pedestrian positioning drift, a velocity constraint commonly known as zero-velocity update (ZUPT) is widely used. However, it cannot correct the error in the non-zero velocity interval (non-ZVI) or observe heading errors. In addition, the positioning accuracy will be further affected when a velocity error occurs in the ZVI (e.g., foot tremble). In this study, the foot, ankle, and shank were regarded as a triangular structure. Consequently, an angle constraint was established by utilizing the sum of the internal angles. Moreover, in contrast to the traditional ZUPT algorithm, a velocity constraint method combined with Coriolis theorem was constructed. Magnetometer measurements were used to correct heading. Three groups of experiments with different trajectories were carried out. The ZUPT method of the single inertial measurement unit (IMU) and the distance constraint method of dual IMUs were employed for comparisons. The experimental results showed that the proposed method had high accuracy in positioning. Furthermore, the constraints built by the lower limb structure were applied to the whole gait cycle (ZVI and non-ZVI). Full article
(This article belongs to the Special Issue Intelligent Perception and Control for Robotics)
Figure 1: Connecting rod structure of the foot and shank; circles represent joints.
Figure 2: Gait characteristics.
Figure 3: The ZVI judgment method of GLRT: (a) using a threshold for standing detection; (b) partial enlarged view.
Figure 4: System flow chart.
Figure 5: Data acquisition: (a) hardware circuit; (b) sensor installation mode.
Figure 6: Total station.
Figure 7: Estimation results for the rectangular trajectory.
Figure 8: Estimation results for the irregular trajectory.
Figure 9: Estimation results for the stairs.
Figure 10: Positioning error ranges of different methods.
Figure 11: End-to-end error ranges of different methods.
Figure 12: Height error ranges of different methods.
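ZUPT-style methods first need a zero-velocity-interval detector on the foot-mounted IMU. The sketch below is a generic energy-threshold detector in the spirit of GLRT stance detection (Figure 3); the thresholds and window length are assumptions, not the paper's settings.

import numpy as np

def detect_zvi(acc, gyro, fs, win=0.1, acc_thr=0.5, gyro_thr=0.5):
    # acc, gyro: (N, 3) foot-mounted IMU samples; returns True where the foot is assumed stationary
    g = 9.81
    n = max(int(win * fs), 1)
    kernel = np.ones(n) / n
    acc_dev = np.linalg.norm(acc, axis=1) - g            # deviation from gravity magnitude
    gyro_mag = np.linalg.norm(gyro, axis=1)
    stat = (np.convolve(acc_dev ** 2, kernel, mode="same") / acc_thr ** 2
            + np.convolve(gyro_mag ** 2, kernel, mode="same") / gyro_thr ** 2)
    return stat < 1.0

# During each detected zero-velocity interval, the estimated foot velocity is fed back
# to the Kalman filter as a zero pseudo-measurement (the ZUPT correction).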
23 pages, 4323 KiB  
Article
LIMUNet: A Lightweight Neural Network for Human Activity Recognition Using Smartwatches
by Liangliang Lin, Junjie Wu, Ran An, Song Ma, Kun Zhao and Han Ding
Appl. Sci. 2024, 14(22), 10515; https://doi.org/10.3390/app142210515 - 15 Nov 2024
Viewed by 496
Abstract
The rise of mobile communication, low-power chips, and the Internet of Things has made smartwatches increasingly popular. Equipped with inertial measurement units (IMUs), these devices can recognize user activities through artificial intelligence (AI) analysis of sensor data. However, most existing AI-based activity recognition algorithms require significant computational power and storage, making them unsuitable for low-power devices like smartwatches. Additionally, discrepancies between training data and real-world data often hinder model generalization and performance. To address these challenges, we propose LIMUNet and its smaller variant LIMUNet-Tiny—lightweight neural networks designed for human activity recognition on smartwatches. LIMUNet utilizes depthwise separable convolutions and residual blocks to reduce computational complexity and parameter count. It also incorporates a dual attention mechanism specifically tailored to smartwatch sensor data, improving feature extraction without sacrificing efficiency. Experiments on the PAMAP2 and LIMU datasets show that the LIMUNet improves recognition accuracy by 2.9% over leading lightweight models while reducing parameters by 88.3% and computational load by 58.4%. Compared to other state-of-the-art models, LIMUNet achieves a 9.6% increase in accuracy, with a 60% reduction in parameters and a 57.8% reduction in computational cost. LIMUNet-Tiny further reduces parameters by 75% and computational load by 80%, making it even more suitable for resource-constrained devices. Full article
(This article belongs to the Special Issue Mobile Computing and Intelligent Sensing)
Figure 1: Random signal frames before and after filtering.
Figure 2: LIMUNet network architecture.
Figure 3: Residual bottleneck layer.
Figure 4: Channel attention mechanism.
Figure 5: Dual attention mechanism.
Figure 6: Correspondence between activities and waveforms in the LIMU dataset.
Figure 7: Distribution of LIMU data across different behaviors and users.
Figure 8: Training curves for different datasets.
Figure 9: Confusion matrices for different datasets.
Figure 10: Impact of window size: (a) on accuracy for the PAMAP2 and LIMU datasets; (b) on FLOPS for the LIMU dataset.
Figure 11: Accuracy and degree of lightweight design for LIMUNet (N = 2), LIMUNet-Tiny (N = 1), and LIMUNet-More (N = 3).
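The parameter savings described above come largely from depthwise separable convolutions. The sketch below shows such a block for 1-D IMU time series; the channel counts and kernel size are illustrative, not LIMUNet's actual configuration.

import torch
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel=5):
        super().__init__()
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel, padding=kernel // 2, groups=in_ch)
        self.pointwise = nn.Conv1d(in_ch, out_ch, kernel_size=1)
        self.bn = nn.BatchNorm1d(out_ch)
        self.act = nn.ReLU()

    def forward(self, x):                    # x: (batch, channels, time)
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

block = DepthwiseSeparableConv1d(6, 32)      # 6 IMU channels in, 32 feature maps out
print(block(torch.randn(4, 6, 128)).shape)   # torch.Size([4, 32, 128])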
16 pages, 4667 KiB  
Article
State Estimation for Quadruped Robots on Non-Stationary Terrain via Invariant Extended Kalman Filter and Disturbance Observer
by Mingfei Wan, Daoguang Liu, Jun Wu, Li Li, Zhangjun Peng and Zhigui Liu
Sensors 2024, 24(22), 7290; https://doi.org/10.3390/s24227290 - 14 Nov 2024
Viewed by 440
Abstract
Quadruped robots possess significant mobility in complex and uneven terrains due to their outstanding stability and flexibility, making them highly suitable in rescue missions, environmental monitoring, and smart agriculture. With the increasing use of quadruped robots in more demanding scenarios, ensuring accurate and stable state estimation in complex environments has become particularly important. Existing state estimation algorithms relying on multi-sensor fusion, such as those using IMU, LiDAR, and visual data, often face challenges on non-stationary terrains due to issues like foot-end slippage or unstable contact, leading to significant state drift. To tackle this problem, this paper introduces a state estimation algorithm that integrates an invariant extended Kalman filter (InEKF) with a disturbance observer, aiming to estimate the motion state of quadruped robots on non-stationary terrains. Firstly, foot-end slippage is modeled as a deviation in body velocity and explicitly included in the state equations, allowing for a more precise representation of how slippage affects the state. Secondly, the state update process integrates both foot-end velocity and position observations to improve the overall accuracy and comprehensiveness of the estimation. Lastly, a foot-end contact probability model, coupled with an adaptive covariance adjustment strategy, is employed to dynamically modulate the influence of the observations. These enhancements significantly improve the filter’s robustness and the accuracy of state estimation in non-stationary terrain scenarios. Experiments conducted with the Jueying Mini quadruped robot on various non-stationary terrains show that the enhanced InEKF method offers notable advantages over traditional filters in compensating for foot-end slippage and adapting to different terrains. Full article
(This article belongs to the Section Sensors and Robotics)
Figure 1: Test environments.
Figure 2: Foot slipping scenarios of a quadruped robot during ground contact.
Figure 3: Estimation of foot contact probability during unstable contact events for (a) the right front leg and (b) the left rear leg.
Figure 4: Position estimates of the quadruped robot in the X, Y, and Z directions on (a–c) rugged slope, (d–f) shallow grass, and (g–i) deep grass terrain.
Figure 5: Pitch and roll angle estimation of the quadruped robot on (a,d) rugged slope, (b,e) shallow grass, and (c,f) deep grass terrain.
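The adaptive covariance adjustment driven by the foot-contact probability model can be sketched as a simple inflation of the leg-odometry measurement covariance when contact becomes uncertain. The scaling law and numbers below are assumptions for illustration, not the paper's formulation.

import numpy as np

def adaptive_leg_covariance(base_cov, contact_prob, slip_inflation=100.0):
    # Inflate the foot-velocity measurement covariance as the estimated contact probability drops,
    # so slipping or unstable feet contribute less to the state update.
    return base_cov * (1.0 + slip_inflation * (1.0 - contact_prob))

R_nominal = np.diag([1e-3, 1e-3, 1e-3])      # nominal foot-velocity measurement covariance
print(adaptive_leg_covariance(R_nominal, contact_prob=0.3))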
24 pages, 1146 KiB  
Article
Walk Longer! Using Wearable Inertial Sensors to Uncover Which Gait Aspects Should Be Treated to Increase Walking Endurance in People with Multiple Sclerosis
by Ilaria Carpinella, Rita Bertoni, Denise Anastasi, Rebecca Cardini, Tiziana Lencioni, Maurizio Ferrarin, Davide Cattaneo and Elisa Gervasoni
Sensors 2024, 24(22), 7284; https://doi.org/10.3390/s24227284 - 14 Nov 2024
Viewed by 270
Abstract
Reduced walking endurance is common in people with multiple sclerosis (PwMS), leading to reduced social participation and increased fall risk. This highlights the importance of identifying which gait aspects should be mostly targeted by rehabilitation to maintain/increase walking endurance in this population. A total of 56 PwMS and 24 healthy subjects (HSs) executed the 6 min walk test (6 MWT), a clinical measure of walking endurance, wearing three inertial sensors (IMUs) on their shanks and lower back. Five IMU-based digital metrics descriptive of different gait domains, i.e., double support duration, trunk sway, gait regularity, symmetry, and local dynamic instability, were computed. All metrics demonstrated moderate–high ability to discriminate between HSs and PwMS (AUC: 0.79–0.91) and were able to detect differences between PwMS at minimal (PwMSmFR) and moderate–high fall risk (PwMSFR). Compared to PwMSmFR, PwMSFR walked with a prolonged double support phase (+100%), larger trunk sway (+23%), lower stride regularity (−32%) and gait symmetry (−18%), and higher local dynamic instability (+24%). Normative cut-off values were provided for all metrics to help clinicians in detecting abnormal scores at an individual level. The five metrics, entered into a multiple linear regression model with 6 MWT distance as the dependent variable, showed that gait regularity and the three metrics most related to dynamic balance (i.e., double support duration, trunk sway, and local dynamic instability) were significant independent contributors to 6 MWT distance, while gait symmetry was not. While double support duration and local dynamic instability were independently associated with walking endurance in both PwMSmFR and PwMSFR, gait regularity and trunk sway significantly contributed to 6 MWT distance only in PwMSmFR and PwMSFR, respectively. Taken together, the present results allowed us to provide hints for tailored rehabilitation exercises aimed at specifically improving walking endurance in PwMS. Full article
(This article belongs to the Collection Sensors for Gait, Human Movement Analysis, and Health Monitoring)
Figure 1: Pearson's correlation coefficient r between six-minute walk test distance and the IMU-based digital metrics for each gait domain; the metric with the highest correlation per domain is highlighted (* p < 0.05).
Figure 2: Digital gait metrics in healthy subjects (HSs) versus people with MS (PwMS), with the area under the ROC curve (AUC) reported for each metric (*** p < 0.001).
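Gait (stride) regularity from a trunk-worn IMU is commonly computed as the normalized autocorrelation of the acceleration signal at a one-stride lag. The sketch below follows that common definition; the axis choice, sampling rate, and synthetic input are assumptions, not necessarily the exact metric used in this paper.

import numpy as np

def stride_regularity(acc_vt, fs, stride_time):
    # acc_vt: vertical trunk acceleration; regularity = normalized autocorrelation at one stride lag
    sig = acc_vt - acc_vt.mean()
    ac = np.correlate(sig, sig, mode="full")[sig.size - 1:]
    ac = ac / ac[0]                           # lag-0 correlation normalized to 1
    lag = int(round(stride_time * fs))
    return ac[lag]                            # values near 1 indicate a highly regular gait

# Example: 30 s of synthetic data at 100 Hz with a 1.1 s stride time
fs = 100.0
t = np.arange(0.0, 30.0, 1.0 / fs)
acc = np.sin(2 * np.pi * t / 1.1) + 0.05 * np.random.randn(t.size)
print(stride_regularity(acc, fs, stride_time=1.1))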
18 pages, 2688 KiB  
Article
Deep Learning and IoT-Based Ankle–Foot Orthosis for Enhanced Gait Optimization
by Ferdous Rahman Shefa, Fahim Hossain Sifat, Jia Uddin, Zahoor Ahmad, Jong-Myon Kim and Muhammad Golam Kibria
Healthcare 2024, 12(22), 2273; https://doi.org/10.3390/healthcare12222273 - 14 Nov 2024
Viewed by 428
Abstract
Background/Objectives: This paper proposes a method for managing gait imbalances by integrating the Internet of Things (IoT) and machine learning technologies. Ankle–foot orthosis (AFO) devices are crucial medical braces that align the lower leg, ankle, and foot, offering essential support for individuals with gait imbalances by assisting weak or paralyzed muscles. This research aims to revolutionize medical orthotics through IoT and machine learning, providing a sophisticated solution for managing gait issues and enhancing patient care with personalized, data-driven insights. Methods: The smart ankle–foot orthosis (AFO) is equipped with a surface electromyography (sEMG) sensor to measure muscle activity and an Inertial Measurement Unit (IMU) sensor to monitor gait movements. Data from these sensors are transmitted to the cloud via fog computing for analysis, aiming to identify distinct walking phases, whether normal or aberrant. This involves preprocessing the data and analyzing it using various machine learning methods, such as Random Forest, Decision Tree, Support Vector Machine (SVM), Artificial Neural Network (ANN), Long Short-Term Memory (LSTM), and Transformer models. Results: The Transformer model demonstrates exceptional performance in classifying walking phases based on sensor data, achieving an accuracy of 98.97%. With this preprocessed data, the model can accurately predict and measure improvements in patients’ walking patterns, highlighting its effectiveness in distinguishing between normal and aberrant phases during gait analysis. Conclusions: These predictive capabilities enable tailored recommendations regarding the duration and intensity of ankle–foot orthosis (AFO) usage based on individual recovery needs. The analysis results are sent to the physician’s device for validation and regular monitoring. Upon approval, the comprehensive report is made accessible to the patient, ensuring continuous progress tracking and timely adjustments to the treatment plan. Full article
(This article belongs to the Special Issue Smart and Digital Health)
Figure 1: Working prototype of a smart AFO on a patient.
Figure 2: System architecture of the machine learning and IoT-driven smart AFO.
Figure 3: (a) Patient's gastrocnemius muscle data; (b) accelerometer data plot; (c) EMG data plot.
Figure 4: Data denoising procedure.
Figure 5: (a) Original signal; (b) unrectified and absolute rectified signal; (c) denoised signal.
Figure 6: Model accuracy comparison.
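The sEMG conditioning suggested by Figures 4 and 5 (band-pass filtering, rectification, envelope extraction) can be sketched with standard filters. The cut-off frequencies and sampling rate below are typical values assumed for illustration, not those used in the study.

import numpy as np
from scipy.signal import butter, filtfilt

def emg_envelope(raw, fs=1000.0):
    # Band-pass to remove motion artifact and high-frequency noise (typical 20-450 Hz band)
    b, a = butter(4, [20.0 / (fs / 2), 450.0 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, raw)
    rectified = np.abs(filtered)              # full-wave rectification
    # Low-pass the rectified signal to obtain the linear envelope
    b_lp, a_lp = butter(4, 6.0 / (fs / 2), btype="low")
    return filtfilt(b_lp, a_lp, rectified)

envelope = emg_envelope(np.random.randn(5000))   # 5 s of placeholder sEMG at 1 kHz
print(envelope.shape)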