
Search Results (44)

Search Parameters:
Keywords = finger motion tracking

18 pages, 1037 KiB  
Article
Optimisation and Comparison of Markerless and Marker-Based Motion Capture Methods for Hand and Finger Movement Analysis
by Valentin Maggioni, Christine Azevedo-Coste, Sam Durand and François Bailly
Sensors 2025, 25(4), 1079; https://doi.org/10.3390/s25041079 - 11 Feb 2025
Viewed by 407
Abstract
Ensuring the accurate tracking of hand and finger movements is an ongoing challenge for upper limb rehabilitation assessment, as the high number of degrees of freedom and segments in the limited volume of the hand makes this a difficult task. The objective of this study is to evaluate the performance of two markerless approaches (the Leap Motion Controller and the Google MediaPipe API) in comparison to a marker-based one, and to improve the precision of the markerless methods by introducing additional data processing algorithms that fuse multiple recording devices. Fifteen healthy participants were instructed to perform five distinct hand movements while being recorded by the three motion capture methods simultaneously. The captured movement data from each device were analyzed using a skeletal model of the hand through the inverse kinematics method of the OpenSim software. Finally, the root mean square errors of the angles formed by each finger segment were calculated for the markerless and marker-based motion capture methods to compare their accuracy. Our results indicate that the MediaPipe-based setup is more accurate than the Leap Motion Controller-based one (average root mean square error of 10.9° versus 14.7°), showing promising results for the use of markerless methods in clinical applications.
(This article belongs to the Collection Sensors for Gait, Human Movement Analysis, and Health Monitoring)
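As a quick illustration of how a markerless pipeline like this obtains raw hand keypoints, the sketch below streams 21 hand landmarks per frame from a webcam using the Google MediaPipe hands API. The camera index, confidence threshold, frame limit, and printed landmark are illustrative choices; the paper's multi-camera triangulation, device fusion, and OpenSim inverse kinematics steps are not shown.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

# Stream hand landmarks from a webcam; each frame yields 21 3D points
# per detected hand (wrist plus 4 points per finger).
cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1,
                    min_detection_confidence=0.5) as hands:
    for _ in range(100):                  # a short capture for the demo
        ok, frame = cap.read()
        if not ok:
            break
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            lm = results.multi_hand_landmarks[0].landmark
            # e.g. landmark 8 is the index fingertip, in normalized
            # image coordinates (x, y) plus a relative depth z
            print(lm[8].x, lm[8].y, lm[8].z)
cap.release()
```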
Figures:
Figure 1: Marker sets for the optoelectronic system (blue), Leap Motion (yellow), and MediaPipe (green), and finger joint names.
Figure 2: Experimental setup for the concurrent recording with the 3 motion capture methods.
Figure 3: Summary of the steps of the study, from the experimental measurements to the comparison of the results, with the corresponding sections.
Figure 4: Boxplot of the root mean square error of the hand joints for the MediaPipe-based method (green) and the Leap Motion method (yellow), depending on the hand motion related to each of the five experimental tasks. The values in each box are the mean of the root mean square error computed over each joint of the hand for a specific participant and a specific task; as such, each box contains 15 values (one per participant).
Figure 5: (a) Bar graph of the occlusion percentage of the markers depending on the hand motion. The displayed values correspond to the percentage of time a marker was occluded during the entire duration of a trial, averaged over all markers of the hand and all participants; the error bars correspond to the standard error. (b) Bar graph of the range of motion of the joints depending on the hand motion and measurement method, with Leap Motion in yellow, MediaPipe in green, and the traditional motion capture in blue. The displayed values correspond to the difference between the maximum and minimum angle of a joint across the duration of a trial, averaged over all joints and all participants; the error bars correspond to the standard error. (c) Bar graph of the anatomical error (purple) and reprojection error (green) computed during the triangulation step of the MediaPipe-based method, depending on the number of cameras in the chosen subset. The error was computed for each trial, then averaged over all trials and participants; in total, 51 trials included 4 cameras, 12 trials included 3 cameras, and 12 trials included 2 cameras. The error bars correspond to the standard error.
20 pages, 25073 KiB  
Article
Development of 6DOF Hardware-in-the-Loop Ground Testbed for Autonomous Robotic Space Debris Removal
by Ahmad Al Ali, Bahador Beigomi and Zheng H. Zhu
Aerospace 2024, 11(11), 877; https://doi.org/10.3390/aerospace11110877 - 25 Oct 2024
Viewed by 1085
Abstract
This paper presents the development of a hardware-in-the-loop ground testbed featuring active gravity compensation via software-in-the-loop integration, specially designed to support research in the autonomous robotic removal of space debris. The testbed replicates six-degree-of-freedom (6DOF) motion maneuvering to accurately simulate the dynamic behaviors of free-floating robotic manipulators and free-tumbling space debris under microgravity conditions. It incorporates two industrial 6DOF robotic manipulators, a three-finger robotic gripper, and a suite of sensors, including cameras, force/torque sensors, and tactile sensors. This setup provides a robust platform for testing and validating technologies for autonomous tracking, capture, and post-capture stabilization within the context of active space debris removal missions. Preliminary experimental results have demonstrated advancements in motion control, computer vision, and sensor fusion. The facility is positioned to become an essential resource for the development and validation of robotic manipulators in space, offering substantial improvements to the effectiveness and reliability of autonomous capture operations in space missions.
(This article belongs to the Special Issue Space Mechanisms and Robots)
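The core hardware-in-the-loop idea is to integrate the free-floating dynamics of the debris in software and have a robot reproduce the resulting motion, with the measured contact wrench feeding back into the simulation. Below is a minimal sketch of that loop, reduced to translation only and with an illustrative mass, cycle rate, and contact force; the actual testbed handles full 6DOF rigid-body dynamics.

```python
import numpy as np

def step_free_floating(x, v, force, mass, dt):
    """One explicit-Euler step of a free-floating body's translational
    dynamics under a measured contact force (microgravity: no weight term)."""
    a = force / mass
    v_next = v + a * dt
    x_next = x + v_next * dt
    return x_next, v_next

# Toy loop: a 50 kg debris mock-up nudged by a brief 1 N contact.
x, v = np.zeros(3), np.zeros(3)
for k in range(1000):                      # a 1 kHz HIL cycle for 1 s
    f = np.array([1.0, 0.0, 0.0]) if k < 100 else np.zeros(3)
    x, v = step_free_floating(x, v, f, mass=50.0, dt=1e-3)
    # in the real testbed, x would be sent as a pose set-point to the
    # robot carrying the mock-up so its motion matches the simulation
print(x, v)   # drifts at constant velocity after the contact ends
```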
Figures:
Figure 1: Space robotic manipulator—Canadarm [3].
Figure 2: (A) Air-bearing testbed at York University [6], (B) air-bearing testbed at the Polish Academy of Sciences [7], (C) HIL testbed at Shenzhen Space Technology Center [12], (D) European proximity operations simulator at the German Aerospace Center [13], (E) CSA SPDM task verification facility [14], (F) dual robotic testbed at Tsinghua University [15,16], (G) MTVF at the China Academy of Space Technology [16].
Figure 3: (a) Shenzhen Space Technology Center [12], (b) German Aerospace Center [13], (c) China Academy of Space Technology [15], and (d) Tsinghua University [16].
Figure 4: Schematic of the dual-robot HIL testbed.
Figure 5: (A) FANUC manipulator, (B) robotic gripper, (C) ATI force/load sensor.
Figure 6: Flowchart illustrating how the simulated motion is applied to real-world execution.
Figure 7: ATI sensor frame.
Figure 8: The target's free-floating motion disturbed by an external force.
Figure 9: Robotiq three-finger gripper movement.
Figure 10: (a) Tactile sensor, (b) Intel camera.
Figure 11: Full-scale monitoring of all angles.
Figure 12: Mock-up satellite and components.
Figure 13: The 6DOF hardware-in-the-loop ground testbed.
Figure 14: (A) YOLO feature recognition, (B) AI computer vision target tracking result.
Figure 15: Skeletal representation of the 6DOF simulation environment.
Figure 16: Motion equivalence.
Figure 17: Joint positions of robot B to deliver the target motion.
Figure 18: Mock-up satellite tumbling in space.
Figure 19: ATI force/load sensor: force values.
Figure 20: ATI force/load sensor: torque values.
Figure 21: Robot A/gripper joint positions.
Figure 22: Robot A/gripper Cartesian positions.
Figure 23: Gripper capture of the target.
Figure 24: Full debris capture mission.
Figure 25: Camera's field of view and bounding box during the pre-capture phase.
20 pages, 5140 KiB  
Article
MOVING: A Multi-Modal Dataset of EEG Signals and Virtual Glove Hand Tracking
by Enrico Mattei, Daniele Lozzi, Alessandro Di Matteo, Alessia Cipriani, Costanzo Manes and Giuseppe Placidi
Sensors 2024, 24(16), 5207; https://doi.org/10.3390/s24165207 - 11 Aug 2024
Viewed by 2536
Abstract
Brain–computer interfaces (BCIs) are pivotal in translating neural activities into control commands for external assistive devices. Non-invasive techniques like electroencephalography (EEG) offer a balance of sensitivity and spatial-temporal resolution for capturing brain signals associated with motor activities. This work introduces MOVING, a Multi-Modal dataset of EEG signals and Virtual Glove Hand Tracking. The dataset comprises neural EEG signals and kinematic data associated with three hand movements—open/close, finger tapping, and wrist rotation—along with a rest period. Obtained from 11 subjects using a 32-channel dry wireless EEG system, it also includes synchronized kinematic data captured by a Virtual Glove (VG) system equipped with two orthogonal Leap Motion Controllers. These two devices allow fast assembly (∼1 min), although they introduce more noise than gold-standard acquisition devices. The study investigates which frequency bands in EEG signals are the most informative for motor task classification and the impact of baseline reduction on gesture recognition. Deep learning techniques, particularly EEGNetV4, are applied to analyze and classify movements based on the EEG data. This dataset aims to facilitate advances in BCI research and in the development of assistive devices for people with impaired hand mobility, and it contributes to a continuously growing repository of EEG datasets intended to serve as benchmarks for new BCI approaches and applications.
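A minimal sketch of the kind of cleaning band-pass filter applied to such EEG data (the dataset's figures mention a 1–45 Hz band) is shown below, using SciPy; the sampling rate and random data are placeholders, not dataset values.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(eeg, fs, lo=1.0, hi=45.0, order=4):
    """Zero-phase Butterworth band-pass over the last axis
    (channels x samples), matching a 1-45 Hz cleaning band."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
    return filtfilt(b, a, eeg, axis=-1)

fs = 250                                  # assumed sampling rate, Hz
raw = np.random.randn(32, 10 * fs)        # 32 channels, 10 s of fake data
clean = bandpass(raw, fs)
print(clean.shape)                        # (32, 2500)
```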
Figures:
Figure 1: EEG raw data before (a) and after (b) a cleaning band-pass [1–45 Hz] filtering. In (a), the high noise level makes it difficult to visualize high-amplitude brain signals. Vertical lines represent the triggers for rest, fixation cross, and open/close movement, respectively.
Figure 2: VG system while tracking hand movements. The hand positioning system is united with that of the VG.
Figure 3: Model based on data collected by the VG. The left part shows the vertical view; the right part shows the horizontal view. The Cartesian reference system is shown in the center.
Figure 4: Acquisition environment scheme (a) and the real acquisition environment (b) used in this work.
Figure 5: The analyzed movements. The dotted line represents the movement acquired but not analyzed in this work. The class "movement" is created by merging the open/close and wrist rotation classes.
Figure 6: The hand movement protocol used.
Figure 7: Instructions shown to participants for both MI and ME. The flow stops when 8 repetitions are reached.
Figure 8: Preprocessing for sample generation and training. The colored boxes represent the parameters that change to explore different frequency bands (orange) and the baseline reduction method (pink) for each combination of movement/rest. The movement class (violet box) represents the merging of open/close and wrist rotation. The lower part of the scheme describes the training process of the model.
Figure 9: x (a), y (b), and z (c) components of the fingertip trajectory for the Horizontal LMC. All fingertips are reported in a single plot.
Figure 10: x (a), y (b), and z (c) components of the fingertip trajectory for the Vertical LMC. All fingertips are reported in a single plot.
Figure 11: x (a), y (b), and z (c) components of the fingertip velocity for the Horizontal LMC. All fingertips are reported in a single plot.
Figure 12: x (a), y (b), and z (c) components of the fingertip velocity for the Vertical LMC. All fingertips are reported in a single plot.
Figure 13: Spatial distribution of PSD for each task in a single subject. The timeline is the same as in Figures 9–12.
14 pages, 8050 KiB  
Article
Soft Robotic Bilateral Rehabilitation System for Hand and Wrist Joints
by Tanguy Ridremont, Inderjeet Singh, Baptiste Bruzek, Veysel Erel, Alexandra Jamieson, Yixin Gu, Rochdi Merzouki and Muthu B. J. Wijesundara
Machines 2024, 12(5), 288; https://doi.org/10.3390/machines12050288 - 25 Apr 2024
Cited by 2 | Viewed by 2654
Abstract
Upper limb functionality is essential for performing activities of daily living, so it is critical to investigate neurorehabilitation therapies that improve upper limb functionality in post-stroke patients. This paper presents a soft robotic bilateral system that provides rehabilitation therapy for the hand and wrist joints. A sensorized glove worn on the healthy limb tracks finger and wrist joint movements and guides the movement of the paretic limb: the sensor input from the healthy limb drives a soft robotic exoskeleton attached to the paretic limb to mimic the motion. A proportional derivative flow-based control algorithm is used to perform the bilateral therapy. To test the feasibility of the developed system, two applications were performed experimentally: (1) a wrist exercise with a dumbbell, and (2) an object pick-and-place task. Initial tests of the developed system verified its capability to perform bilateral therapy.
(This article belongs to the Special Issue Design Methodology for Soft Mechanisms, Machines, and Robots)
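As a rough sketch of a proportional derivative flow-based law of this kind, the snippet below maps the angle error between the healthy-limb glove and the exoskeleton to a signed pneumatic flow command. The gains and sign convention are illustrative assumptions, not the paper's tuned values.

```python
def pd_flow_command(theta_ref, theta, theta_prev, dt, kp=2.0, kd=0.1):
    """PD law mapping joint-angle error (healthy vs. paretic limb) to a
    signed pneumatic flow command: positive inflates the actuator,
    negative vents it. Derivative is taken on the measurement to avoid
    set-point kick."""
    error = theta_ref - theta
    d_error = -(theta - theta_prev) / dt
    return kp * error + kd * d_error

# One control step: healthy wrist at 30 deg, exoskeleton at 22 deg.
u = pd_flow_command(theta_ref=30.0, theta=22.0, theta_prev=21.5, dt=0.01)
print(u)    # positive -> inflate the wrist actuator
```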
Figures:
Figure 1: Bilateral therapy system diagram: the system comprises a hand-and-wrist sensor glove (HWSG), a soft robotic exoskeleton called the hand-and-wrist exoskeleton (HWE), and a pneumatic control unit.
Figure 2: Schematic diagram of the pneumatic control unit for the soft robotic bilateral therapy system.
Figure 3: Proportional derivative (PD) flow-based control algorithm diagram for the soft robotic bilateral system.
Figure 4: Control parameter estimation: (a) experimentally recorded wrist angle, (b) output of the state-space model compared with experimental values, (c) simulation model of the control with the estimated state-space model.
Figure 5: Soft robotic bilateral therapy system: (a) finger and wrist actuators, (b) HWSG and HWE.
Figure 6: Five steps (a–e) in series to perform the wrist exercise using a dumbbell.
Figure 7: Five steps (a–e) in series to perform the object pick-and-place task.
Figure 8: Comparison of the angular movement of the HWSG with the HWE for the (a) little, (b) ring, (c) middle, (d) index, and (e) thumb fingers, and (f) the wrist, while performing flexion/extension.
Figure 9: Sensor data from the HWSG while performing two reps of the wrist exercise with a dumbbell.
Figure 10: Comparison of the angular movement of the HWSG with the HWE for the (a) little, (b) ring, (c) middle, (d) index, and (e) thumb fingers, and (f) the wrist, while performing the wrist exercise using a dumbbell.
Figure 11: Sensor data from the HWSG while performing the object pick-and-place task.
Figure 12: Comparison of the angular movement of the HWSG with the HWE for the (a) little, (b) ring, (c) middle, (d) index, and (e) thumb fingers, and (f) the wrist, while performing the pick-and-place task.
27 pages, 3090 KiB  
Review
Biomechanical Assessment Methods Used in Chronic Stroke: A Scoping Review of Non-Linear Approaches
by Marta Freitas, Francisco Pinho, Liliana Pinho, Sandra Silva, Vânia Figueira, João Paulo Vilas-Boas and Augusta Silva
Sensors 2024, 24(7), 2338; https://doi.org/10.3390/s24072338 - 6 Apr 2024
Viewed by 2044
Abstract
Non-linear and dynamic systems analysis of human movement has recently become increasingly widespread, with the intention of better reflecting how complexity affects the adaptability of motor systems, especially after a stroke. The main objective of this scoping review was to summarize the non-linear measures used in the analysis of kinetic, kinematic, and EMG data of human movement after stroke. PRISMA-ScR guidelines were followed, establishing the eligibility criteria, the population, the concept, and the contextual framework. The examined studies were published between 1 January 2013 and 12 April 2023, in English or Portuguese, and were indexed in the databases selected for this research: PubMed®, Web of Science®, Institute of Electrical and Electronics Engineers®, Science Direct®, and Google Scholar®. In total, 14 of the 763 articles met the inclusion criteria. The non-linear measures identified included entropy (n = 11), fractal analysis (n = 1), the short-term local divergence exponent (n = 1), the maximum Floquet multiplier (n = 1), and the Lyapunov exponent (n = 1). These studies focused on different motor tasks: reaching to grasp (n = 2), reaching to point (n = 1), arm tracking (n = 2), elbow flexion (n = 5), elbow extension (n = 1), wrist and finger extension upward (lifting) (n = 1), knee extension (n = 1), and walking (n = 4). When studying the complexity of human movement in chronic post-stroke adults, entropy measures, particularly sample entropy, were preferred. Kinematic assessment was mainly performed using motion capture systems, with a focus on the joint angles of the upper limbs.
(This article belongs to the Special Issue Biomedical Electronics and Wearable Systems)
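Since sample entropy is the measure this review found most often preferred, a compact reference implementation may help: SampEn(m, r) = -ln(A/B), where B counts template pairs of length m matching within tolerance r and A counts the same for length m+1. This is a straightforward O(N²) sketch, with the usual choice of r as a fraction of the signal's standard deviation.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r) = -ln(A/B). Lower values indicate a
    more regular, predictable signal; higher values a more complex one."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        n = 0
        for i in range(len(templates)):
            # Chebyshev distance to all later templates (self-matches excluded)
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            n += int(np.sum(d <= r))
        return n
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(0)
print(sample_entropy(np.sin(np.linspace(0, 8 * np.pi, 500))))  # low: regular
print(sample_entropy(rng.standard_normal(500)))                # high: irregular
```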
Figures:
Figure 1: Flow diagram for the scoping review process, adapted from the PRISMA-ScR statement [26].
Figure 2: Characterization of the tasks.
Figure 3: Types of assessment instruments.
Figure 4: Kinetic variables.
Figure 5: EMG variables.
Figure 6: Kinematic variables.
Figure 7: Non-linear measures.
24 pages, 26785 KiB  
Article
Whole-Body Teleoperation Control of Dual-Arm Robot Using Sensor Fusion
by Feilong Wang, Furong Chen, Yanling Dong, Qi Yong, Xiaolong Yang, Long Zheng, Xinming Zhang and Hang Su
Biomimetics 2023, 8(8), 591; https://doi.org/10.3390/biomimetics8080591 - 5 Dec 2023
Cited by 6 | Viewed by 2852
Abstract
As human–robot interaction and teleoperation technologies advance, anthropomorphic control of humanoid arms has garnered increasing attention. However, accurately translating sensor-detected arm motions to the many degrees of freedom of a humanoid robotic arm is challenging, primarily due to occlusion issues with single-sensor setups, which reduce recognition accuracy. To overcome this problem, we propose a human-like arm control strategy based on multi-sensor fusion. We defined the finger bending angle to represent finger posture and employed a depth camera to capture arm movement. On this basis, we developed an arm movement tracking system and achieved anthropomorphic control of the humanoid arm. Finally, we verified the effectiveness of our proposed method through a series of experiments evaluating the system's robustness and real-time performance. The experimental results show that this control strategy can control the motion of the humanoid arm stably and maintain high recognition accuracy in the face of complex situations such as occlusion.
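One plausible reading of a finger bending angle of this kind is the angle between adjacent finger segments computed from three skeleton points, such as those a Leap Motion controller reports. The sketch below is a hedged stand-in for the paper's exact definition, with illustrative point coordinates.

```python
import numpy as np

def bending_angle(mcp, pip, tip):
    """Finger bending angle (degrees), taken here as the angle between
    the proximal segment (mcp -> pip) and the distal segment
    (pip -> tip); 0 deg corresponds to a fully straight finger."""
    u = np.asarray(pip) - np.asarray(mcp)
    v = np.asarray(tip) - np.asarray(pip)
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Straight finger -> 0 deg; right-angle flexion -> 90 deg.
print(bending_angle([0, 0, 0], [40, 0, 0], [80, 0, 0]))    # 0.0
print(bending_angle([0, 0, 0], [40, 0, 0], [40, -30, 0]))  # 90.0
```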
Figures:
Figure 1: Physical drawing of an anthropomorphic dual-arm robot. (a) Left view; (b) front view.
Figure 2: Arm tracking system construction. (a) System overview; (b) integrated display system; (c) data processing system; (d) signal acquisition section; (e) Leap Motion controller and Kinect depth camera.
Figure 3: Spatial position diagram of arms and sensors.
Figure 4: Software system description diagram based on a single arm.
Figure 5: The correspondence between a human arm and an anthropomorphic arm.
Figure 6: The hand motion is transferred to the underactuated bionic hand through the hand motion coordinate system. The skeleton point data captured by the Leap Motion controller are converted into the hand coordinate system to calculate the bending angles. By establishing a spatial coordinate system for capturing arm movements with the Kinect depth camera as the origin, the joint angles can be calculated to drive the humanoid arm.
Figure 7: Algorithm flowchart of the safety control strategy based on pressure sensors, which gains absolute control once the evaluated conditions are met.
Figure 8: (a) The change curve of the angle between the arm and the front side of the body; (b) the change curve of the angle between the upper arm and the lower arm. Both have undergone Kalman filtering.
Figure 9: (a) Comparison of RMSE variations for the angle between the arm and the front side of the body; (b) comparison of RMSE variations for the angle between the upper arm and the lower arm.
Figure 10: (a) The change curve of the smoothness index of the angle between the arm and the front side of the body; (b) the change curve of the smoothness index of the angle between the upper arm and the lower arm.
Figure 11: Experimental results of hand tracking under the multi-sensor framework.
Figure 12: Trajectory smoothness index change curves of the finger bending angles and thumb abduction-adduction angle over time: (a) thumb bending angle; (b) thumb abduction-adduction angle; (c) index finger bending angle; (d) middle finger bending angle; (e) ring finger bending angle; (f) little finger bending angle.
Figure 13: Occlusion experiment and results.
Figure 14: Comparison experiment between force control and non-force control scenarios. The numbers 1 and 2 represent Scenes 1 and 2, respectively.
19 pages, 2949 KiB  
Article
Sensor Fusion-Based Anthropomorphic Control of a Robotic Arm
by Furong Chen, Feilong Wang, Yanling Dong, Qi Yong, Xiaolong Yang, Long Zheng, Yi Gao and Hang Su
Bioengineering 2023, 10(11), 1243; https://doi.org/10.3390/bioengineering10111243 - 24 Oct 2023
Cited by 5 | Viewed by 3702
Abstract
The main goal of this research is to develop a highly advanced anthropomorphic control system utilizing multiple sensor technologies to achieve precise control of a robotic arm. Combining Kinect and IMU sensors with a data glove, we aim to create a multimodal sensor system for capturing rich information about human upper body movements. Specifically, the four angles of the upper limb joints are collected using the Kinect and IMU sensors. To improve the accuracy and stability of motion tracking, we use a Kalman filter to fuse the Kinect and IMU data. In addition, we introduce data glove technology to collect the angle information of the wrist and fingers in seven different directions. The integration and fusion of multiple sensors provides full control over the robotic arm, giving it flexibility with 11 degrees of freedom. We successfully achieved a variety of anthropomorphic movements, including shoulder flexion, abduction, and rotation, elbow flexion, and fine movements of the wrist and fingers. Most importantly, our experimental results demonstrate that the developed anthropomorphic control system is accurate, real-time, and operable. In summary, the contribution of this study lies in the creation of a multimodal sensor system capable of capturing and precisely controlling human upper limb movements, which provides a solid foundation for the future development of anthropomorphic control technologies. This technology has a wide range of prospective applications, including rehabilitation in the medical field, robot collaboration in industrial automation, and immersive experiences in virtual reality environments.
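The fusion step can be illustrated with a one-shot Kalman-style update that weights the Kinect and IMU angle estimates by assumed measurement variances; a full tracker adds a prediction step between samples, and the variances below are placeholders rather than calibrated values.

```python
def kalman_fuse(angle_kinect, angle_imu, var_kinect=4.0, var_imu=1.0):
    """Fuse two noisy measurements of the same joint angle, weighted by
    their assumed measurement variances (deg^2). The gain pulls the
    estimate toward the lower-variance sensor (here, the IMU)."""
    k = var_kinect / (var_kinect + var_imu)
    fused = angle_kinect + k * (angle_imu - angle_kinect)
    fused_var = (1 - k) * var_kinect
    return fused, fused_var

print(kalman_fuse(42.0, 45.0))   # -> (44.4, 0.8): closer to the IMU reading
```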
Figures:
Figure 1: System structure.
Figure 2: Demonstration of the relative angles of the robotic arm.
Figure 3: The method of calculating the joint angle from the bone points collected by Kinect.
Figure 4: IMU axes.
Figure 5: Kinematic model of the right upper limb.
Figure 6: Data glove.
Figure 7: Wireless goniometer from Biometrics Ltd. (Ynysddu, UK).
Figure 8: IMU system and goniometer.
Figure 9: Comparison of the IMU system and goniometer for wrist flexion/extension and ulnar deviation.
Figure 10: Wearing position of the IMU system.
Figure 11: Movable quadrant of operation.
Figure 12: Action A: shoulder flexion.
Figure 13: Action B: shoulder abduction.
Figure 14: Action C: elbow flexion.
Figure 15: Action D: shoulder rotation with 90° shoulder abduction.
Figure 16: Data glove grip data.
Figure 17: Operator movement group.
Figure 18: Robot anthropomorphic grasping.
Figure 19: (a) Trajectory of the robotic arm in the gripping experiment. (b) Index finger pressure in the gripping experiment.
13 pages, 4441 KiB  
Article
Study on the Design and Performance of a Glove Based on the FBG Array for Hand Posture Sensing
by Hongcheng Rao, Binbin Luo, Decao Wu, Pan Yi, Fudan Chen, Shenghui Shi, Xue Zou, Yuliang Chen and Mingfu Zhao
Sensors 2023, 23(20), 8495; https://doi.org/10.3390/s23208495 - 16 Oct 2023
Cited by 8 | Viewed by 1868
Abstract
This study introduces a new wearable fiber-optic sensor glove. The glove uses a flexible material, polydimethylsiloxane (PDMS), and a silicone tube to encapsulate fiber Bragg gratings (FBGs), enabling the self-perception of hand posture, gesture recognition, and the prediction of grasped objects, with a Support Vector Machine (SVM) employed for the object prediction. The proposed fiber-optic sensor glove can concurrently monitor the motion of 14 hand joints, comprising 5 metacarpophalangeal (MCP), 5 proximal interphalangeal (PIP), and 4 distal interphalangeal (DIP) joints. To expand the measurement range of the sensors, the FBG array is incorporated into the glove in a sinusoidal layout. The experimental results indicate that the wearable sensing glove can track finger flexion within a range of 0° to 100°, with a minimum measurement error (Error) of 0.176° and a minimum standard deviation (SD) of 0.685°. Notably, the glove accurately detects hand gestures in real time and can even forecast grasping actions. The proposed fiber-optic smart glove technology holds promising potential for industrial applications, including object grasping, 3D display via virtual reality, and human–computer interaction.
(This article belongs to the Special Issue Fiber Grating Sensors and Applications)
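Once each encapsulated FBG is calibrated, joint-angle recovery reduces to scaling the Bragg-wavelength drift by a per-sensor sensitivity. A minimal sketch follows; the sensitivity value is an assumption for illustration, not the paper's calibration.

```python
def angle_from_fbg(wl_nm, wl0_nm, sensitivity_nm_per_deg):
    """Convert an FBG Bragg-wavelength drift into a joint angle with a
    linear per-sensor calibration: theta = (lambda - lambda0) / S, where
    S is obtained by bending the encapsulated FBG to known angles."""
    return (wl_nm - wl0_nm) / sensitivity_nm_per_deg

# Example: a 0.50 nm drift with an assumed 10 pm/deg sensitivity -> 50 deg.
print(angle_from_fbg(1550.50, 1550.00, 0.010))
```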
Figures:
Figure 1: (a) Sensor package structure, (b) strain response versus package thickness, (c) overall elastic strain simulation (1 mm), (d) response of axial (Z-axis) elastic strain (1 mm).
Figure 2: (a) Schematic diagram of sensor integration (serial numbers are the FBG numbers), (b) attitude reconfiguration system.
Figure 3: (a) Sensitivities of the sensing units and (b) the corresponding residual error plot for the index finger.
Figure 4: Repeatability test for the (a) DIP, (b) PIP, and (c) MCP joints of the index finger (the inset shows the agreement between the angle measurements obtained through the FBG and the IMU during a single trial).
Figure 5: (a) Wavelength drifts for Corwin's gesture, (b) the benchmark gesture.
Figure 6: Stability for a static gesture (Gesture 1).
Figure 7: Prediction of grasped objects using an SVM (Support Vector Machine): (a) industrial hammer, (b) vise, (c) claw hammer, and (d) industrial disk.
13 pages, 2409 KiB  
Article
Lightweight Soft Robotic Glove with Whole-Hand Finger Motion Tracking for Hand Rehabilitation in Virtual Reality
by Fengguan Li, Jiahong Chen, Zhitao Zhou, Jiefeng Xie, Zishu Gao, Yuxiang Xiao, Pei Dai, Chanchan Xu, Xiaojie Wang and Yitong Zhou
Biomimetics 2023, 8(5), 425; https://doi.org/10.3390/biomimetics8050425 - 14 Sep 2023
Cited by 11 | Viewed by 3529
Abstract
Soft robotic gloves have attracted significant interest for hand rehabilitation in the past decade. However, current solutions are still heavy and lack finger-state monitoring and versatile treatment options. To address this, we present a lightweight soft robotic glove actuated by twisted string actuators (TSAs) that provides whole-hand finger motion tracking. We developed a virtual reality environment for hand rehabilitation training, allowing users to interact with various virtual objects. Fifteen small inertial measurement units are placed on the glove to predict finger joint angles and track whole-hand finger motion. We performed TSA experiments to identify design and control rules by characterizing how the actuator response varies with input load and voltage. Grasping experiments were conducted to determine the grasping force and range of motion. Finally, we showcase an application of the rehabilitation glove in a Unity-based VR interface, which can actuate the operator's fingers to grasp different virtual objects.
(This article belongs to the Special Issue Advanced Service Robots: Exoskeleton Robots)
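TSA design rules trade twist angle against linear travel through the ideal twisted-string kinematics: twisting a string pair of untwisted length L and effective radius r by θ radians shortens the transmission to √(L² − (θr)²). A small sketch with illustrative dimensions (not the paper's hardware values):

```python
import math

def tsa_contraction(theta_rad, string_len_m, string_radius_m):
    """Ideal twisted string actuator kinematics: tendon travel produced
    by twisting, L - sqrt(L^2 - (theta*r)^2). Valid while the twisted
    length stays real, i.e. theta*r < L."""
    L, r = string_len_m, string_radius_m
    return L - math.sqrt(L**2 - (theta_rad * r)**2)

# 20 motor turns on a 200 mm string of 0.5 mm effective radius:
theta = 20 * 2 * math.pi
print(1e3 * tsa_contraction(theta, 0.200, 0.0005), "mm of tendon travel")
```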
Figures:
Figure 1: Overview of the proposed system.
Figure 2: (a) 3D model and prototype of the TSA actuation system; (b) the principle of the TSA; (c) back view of the prototype glove.
Figure 3: Loading test setup of the twisted string actuator.
Figure 4: The relationship between traveling distance and time under different loads.
Figure 5: (a) 3D model of the prosthetic hand; (b) experimental setup for testing the relationship between grip force and traveling distance.
Figure 6: The relationship between grip force and traveling distance.
Figure 7: Illustration of the full ROM of the middle finger.
Figure 8: Full ROM movement time at different voltages.
Figure 9: Flowchart of the proposed system.
Figure 10: (a) Layout of the IMU system; (b) coordinate frames established to calculate the index finger posture.
Figure 11: The proposed VR rehabilitation training system: (a) gripping a ball; (b) gripping a bottle.
16 pages, 7953 KiB  
Article
A Projected AR Serious Game for Shoulder Rehabilitation Using Hand-Finger Tracking and Performance Metrics: A Preliminary Study on Healthy Subjects
by Rosanna M. Viglialoro, Giuseppe Turini, Marina Carbone, Sara Condino, Virginia Mamone, Nico Coluccia, Stefania Dell’Agli, Gabriele Morucci, Larisa Ryskalin, Vincenzo Ferrari and Marco Gesi
Electronics 2023, 12(11), 2516; https://doi.org/10.3390/electronics12112516 - 2 Jun 2023
Cited by 5 | Viewed by 2043
Abstract
Research studies show that serious games can increase patient motivation regardless of age or illness and can be an affordable and promising alternative to conventional physiotherapy. In this paper, we present the latest evolution of our system for shoulder rehabilitation based on hand-finger tracking and projected augmented reality. This version integrates metrics to assess patient performance, monitors game progress, and allows selection of the game visualization mode (standard on-screen or projected augmented reality). Additionally, the new software tracks the velocity, acceleration, and normalized jerk of the user's arm-hand movements. Sixteen healthy volunteers (eight technical and eight rehabilitation experts) tested the current prototype. The results showed that the serious game is engaging, its design is ergonomically sound, and the overall system could be a useful tool in shoulder rehabilitation. However, clinical validation is needed to confirm that the serious game has the same effects as the selected therapy. This preliminary step lays the foundation for future studies investigating abnormalities in shoulder movements using hand-finger tracking.
(This article belongs to the Special Issue Emerging E-health Applications and Medical Information Systems)
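Normalized jerk, one of the metrics the software tracks, is commonly computed as a duration- and amplitude-normalized integral of squared jerk so that slower or larger movements are not penalized. The sketch below uses one common formulation, √(½∫j²dt · T⁵/L²), which may differ in constants from the paper's exact definition.

```python
import numpy as np

def normalized_jerk(pos, dt):
    """Dimensionless movement-smoothness metric from an (N, 3) array of
    hand positions in meters: larger values indicate a less smooth
    (more jerky) movement."""
    vel = np.gradient(pos, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    jerk = np.gradient(acc, dt, axis=0)
    T = dt * (len(pos) - 1)                                   # duration
    L = np.sum(np.linalg.norm(np.diff(pos, axis=0), axis=1))  # path length
    return float(np.sqrt(0.5 * np.sum(jerk**2) * dt * T**5 / L**2))

# A minimum-jerk-like reach scores lower than a jittery version of it.
t = np.linspace(0, 1, 200)[:, None]
smooth = np.hstack([10*t**3 - 15*t**4 + 6*t**5, 0*t, 0*t]) * 0.3
jittery = smooth + 0.001 * np.random.default_rng(1).standard_normal(smooth.shape)
dt = t[1, 0] - t[0, 0]
print(normalized_jerk(smooth, dt) < normalized_jerk(jittery, dt))  # True
```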
Figures:
Figure 1: Overview of our projected AR serious game system for shoulder rehabilitation: a photo during the preliminary tests (left, natural light), and the complete experimental setup of our system (right), including a laptop (a), a rubber desk pad (b), the LMC (c), and a mini projector (d).
Figure 2: Photos of the preliminary testing of our system, showing the game panel during different stages of the rehabilitation exercises: single arc trajectory at game start (a), single arc trajectory during gameplay (b), double arc trajectory almost completed (c), and double arc trajectory completed with the hidden painting uncovered (d).
Figure 3: Examples of the user interface of the system: the Login Panel (a), the Config Panel (b), the Report Panel (c), and the Game Panel during gameplay (d–f).
Figure 4: System architecture of our "Painting Discovery" serious game for shoulder rehabilitation, including all software and hardware modules, all data transfers, and all user and module interactions.
Figure 5: Rehabilitation and technical experts' opinions on the engagement of our system.
Figure 6: Rehabilitation and technical experts' opinions on the ergonomics of our system.
Figure 7: Rehabilitation experts' opinions on the acceptance and feasibility of our system in rehabilitation therapies.
Figure 8: Representative examples of the original speed (top), acceleration (center), and jerk (bottom) data (left) and the corresponding filtered data (right) of a healthy participant performing one of the rehabilitation exercises.
Figure 9: Normal distribution of the average speed (m/s) values calculated on the first three game levels.
Figure 10: User study results with the first subject (orange line) and the second subject (green line) performing the double arc exercise.
27 pages, 21777 KiB  
Article
Trajectory Control in Discrete-Time Nonlinear Coupling Dynamics of a Soft Exo-Digit and a Human Finger Using Input–Output Feedback Linearization
by Umme Kawsar Alam, Kassidy Shedd and Mahdi Haghshenas-Jaryani
Automation 2023, 4(2), 164-190; https://doi.org/10.3390/automation4020011 - 31 May 2023
Cited by 4 | Viewed by 2731
Abstract
This paper presents a quasi-static model-based control algorithm for controlling the motion of a soft robotic exo-digit with three independent actuation joints physically interacting with a human finger. A quasi-static analytical model of the physical interaction between the soft exo-digit and a human finger model was developed. The model was then cast as a nonlinear discrete-time multiple-input multiple-output (MIMO) state-space representation for the control system design. Input-output feedback linearization was utilized, and a control input was designed to linearize the input-output map, where the input is the actuation pressure of an individual soft actuator and the output is the pose of the human fingertip. The asymptotic stability of the nonlinear discrete-time system for trajectory tracking control is discussed. A soft robotic exoskeleton digit (exo-digit) and a 3D-printed human finger model integrated with IMU sensors were used for the experimental test setup, and Arduino-based electro-pneumatic control hardware was developed to control the actuation pressure of the soft exo-digit. The effectiveness of the controller was examined through simulation studies and experimental testing on different pose trajectories corresponding to the human finger pose during activities of daily living. The model-based controller was able to follow the desired trajectories with a very low average root-mean-square error of 2.27 mm in the x-direction, 2.75 mm in the y-direction, and 3.90° in the orientation of the human finger distal link about the z-axis.
(This article belongs to the Collection Smart Robotics for Automation)
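The control idea can be sketched in a few lines for a toy discrete-time system x_{k+1} = f(x_k) + G(x_k)u_k, y = h(x_k): choose u_k so the drift is canceled through the output map and the next output lands on the reference. The deadbeat variant below is a simplified stand-in for the paper's pressure-to-pose controller, with invented toy dynamics; a gain on the output error would give gentler, asymptotic tracking.

```python
import numpy as np

def io_linearizing_input(f, G, h, Jh, x_k, y_des_next):
    """One step of discrete-time input-output feedback linearization:
    the input inverts the input-to-output sensitivity L_G h = Jh * G so
    the predicted next output equals y_des_next (deadbeat choice)."""
    x_drift = f(x_k)                      # next state if u_k were zero
    L_G_h = Jh(x_drift) @ G(x_k)          # input-to-output sensitivity
    return np.linalg.pinv(L_G_h) @ (y_des_next - h(x_drift))

# Toy 2-state, 2-input linear system with an identity output map.
f = lambda x: 0.9 * x
G = lambda x: 0.1 * np.eye(2)
h = lambda x: x.copy()
Jh = lambda x: np.eye(2)

x = np.zeros(2)
for _ in range(3):
    u = io_linearizing_input(f, G, h, Jh, x, np.array([1.0, -1.0]))
    x = f(x) + G(x) @ u
print(x)   # [ 1. -1.]: output pinned to the reference
```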
Figures:
Figure 1: (a) Physical setup of the soft robotic exo-digit and a 3D-printed human finger model and (b) a schematic of the soft robotic digit interacting with a human finger.
Figure 2: Workspace of the human finger model and the soft exo-digit.
Figure 3: Feedback linearization control block diagram for pressure-based pose control.
Figure 4: (a) Control system block diagram for pressure-based pose control, (b) schematic of the electro-pneumatic circuit with control components, and (c) the actual control setup, with each component labeled according to the corresponding number in Table 1.
Figure 5: Variation of the pressure of the first soft actuator segment with respect to the configuration space of the soft robotic exo-digit.
Figure 6: Variation of the pressure of the second soft actuator segment with respect to the configuration space of the soft robotic exo-digit.
Figure 7: Variation of the pressure of the third soft actuator segment with respect to the configuration space of the soft robotic exo-digit.
Figure 8: (a) The first desired trajectory of the tip of the human finger in Cartesian space and (b) the corresponding joint angles.
Figure 9: Variation of the controlled pressure of (a) the first, (b) the second, and (c) the third soft actuator segment with respect to the configuration space of the soft robotic exo-digit while following the first desired trajectory (see Figure 8).
Figure 10: (a) The second desired trajectory of the tip of the human finger in Cartesian space and (b) the corresponding joint angles.
Figure 11: Variation of the controlled pressure of (a) the first, (b) the second, and (c) the third soft actuator segment with respect to the configuration space of the soft robotic exo-digit while following the second desired trajectory (see Figure 10).
Figure 12: Close-up picture of the human finger model in the experimental setup.
Figure 13: The actual and desired joint angles of the (a) MCP, (b) PIP, and (c) DIP in experiment 1.
Figure 14: The actual and desired trajectory of the fingertip in experiment 1.
Figure 15: The actual and desired joint angles of the (a) MCP, (b) PIP, and (c) DIP in experiment 2.
Figure 16: The actual and desired trajectory of the fingertip in experiment 2.
Figure A1: Variation of $\|L_G \mathbf{h}(\mathbf{x}_k)\|$ with respect to $q_2$ and $q_3$ for different values of $q_1$.
Figure A2: 2D plot of the surface plots in Figure A1; the red arrow shows the discontinuity in $\|L_G \mathbf{h}(\mathbf{x}_k)\|$ for that region of the configuration space.
17 pages, 2072 KiB  
Article
BiomacEMG: A Pareto-Optimized System for Assessing and Recognizing Hand Movement to Track Rehabilitation Progress
by Rytis Maskeliūnas, Robertas Damaševičius, Vidas Raudonis, Aušra Adomavičienė, Juozas Raistenskis and Julius Griškevičius
Appl. Sci. 2023, 13(9), 5744; https://doi.org/10.3390/app13095744 - 6 May 2023
Cited by 1 | Viewed by 2693
Abstract
One of the most difficult components of stroke therapy is regaining hand mobility. This research describes a preliminary approach to robot-assisted hand motion therapy. Our objectives were twofold: first, we used machine learning approaches to determine and describe hand motion patterns in healthy people. Surface electrodes were used to collect electromyographic (EMG) data from the forearm's flexion and extension muscles. Time and frequency characteristics were used as parameters in machine learning algorithms to recognize seven hand gestures and track rehabilitation progress. Eight EMG sensors were used to capture each contraction of the arm muscles during one of the seven actions, and feature selection was performed using the Pareto front. Our system was able to reconstruct the kinematics of hand/finger movement and simulate the behaviour of every motion pattern. The analysis revealed that gesture categories substantially overlap in the feature space. The correlation between the joint trajectories computed from EMG and the monitored hand movement was 0.96 on average. Moreover, statistical analysis of various machine learning setups revealed a 92% accuracy in measuring the precision of finger motion patterns.
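Typical time- and frequency-domain EMG features of the sort used for such gesture classifiers can be computed per channel window as below; the sampling rate, window length, and feature set are illustrative assumptions rather than the paper's exact parameterization.

```python
import numpy as np

def emg_features(window, fs):
    """Common EMG features for one channel window: mean absolute value,
    root mean square, zero-crossing count, and mean frequency."""
    w = np.asarray(window, dtype=float)
    mav = float(np.mean(np.abs(w)))
    rms = float(np.sqrt(np.mean(w ** 2)))
    zc = int(np.sum(np.diff(np.signbit(w).astype(np.int8)) != 0))
    spectrum = np.abs(np.fft.rfft(w)) ** 2
    freqs = np.fft.rfftfreq(len(w), d=1 / fs)
    mnf = float(np.sum(freqs * spectrum) / np.sum(spectrum))
    return {"MAV": mav, "RMS": rms, "ZC": zc, "MNF": mnf}

fs = 1000                                              # assumed rate, Hz
fake = np.random.default_rng(0).standard_normal(256)  # one 256 ms window
print(emg_features(fake, fs))
```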
Figures:
Figure 1: Data capture procedure with interpolation of 8 sensing zones and 7 gestures (A–G, corresponding to motions M1–M7).
Figure 2: Summary of EMG signal values for gestures 1–7.
Figure 3: Projections of the EMG signals (for motions M1–M7) onto the three principal components P1, P2, and P3.
Figure 4: Confusion matrix of the classification results.
14 pages, 3244 KiB  
Article
Dual-Mode Stretchable Sensor Array with Integrated Capacitive and Mechanoluminescent Sensor Unit for Static and Dynamic Strain Mapping
by Song Wang, Xiaohui Yi, Ye Zhang, Zhiyi Gao, Ziyin Xiang, Yuwei Wang, Yuanzhao Wu, Yiwei Liu, Jie Shang and Run-Wei Li
Chemosensors 2023, 11(5), 270; https://doi.org/10.3390/chemosensors11050270 - 2 May 2023
Cited by 3 | Viewed by 2173
Abstract
Electronic skin (e-skin) has the potential to detect large-scale strain, which is typically achieved by integrating multiple strain sensors into an array. However, sensing latency and limited resolution have hindered its large-scale sensing applications. Here, we have developed a high-resolution sensing system capable of detecting static and dynamic strain with a simple fabrication process by combining capacitive and mechanoluminescent (ML) sensor units. An elastic polydimethylsiloxane (PDMS) composite film doped with ZnS:Cu and BaTiO3 (BT) particles is fabricated as the functional film of the capacitive sensor, and the transparent electrode is fabricated on the surface of the as-prepared film. By incorporating BT nanoparticles into the elastic substrate, the ML intensity of the ZnS:Cu was improved up to 2.89 times that without BT addition, and the sensitivity of the capacitive sensor was increased as well. The capacitive part of the sensor presented a gauge factor (GF) of 0.9 and good stability, while the ML part exhibited excellent performance, making the device suitable for both static and dynamic sensing. Furthermore, a strain sensor integrating 10 × 10 sensing units is demonstrated to detect large-scale strain with high resolution, and finger joint strain distribution tracking is achieved by attaching a strain sensor unit to the finger joint. With these characteristics, the e-skin may have great potential for bio-motion monitoring and human-computer interaction applications.
(This article belongs to the Special Issue Flexible Electronic Devices and Systems for Sensing Applications)
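The gauge factor quoted for the capacitive part follows the usual definition GF = (ΔC/C0)/ε. A one-line helper, with capacitance values invented to reproduce a GF of about 0.9:

```python
def gauge_factor(c0_pf, c_pf, strain):
    """Gauge factor of a capacitive strain sensor: relative capacitance
    change divided by the applied strain, GF = (dC/C0) / epsilon."""
    return ((c_pf - c0_pf) / c0_pf) / strain

# Illustrative numbers: a 27% capacitance rise at 30% strain -> GF = 0.9.
print(gauge_factor(10.0, 12.7, 0.30))
```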
Figures:
Figure 1: Conceptual illustration of the dual-mode e-skin. (a) Structural schematic of the e-skin: an ML layer is sandwiched between two stretchable transparent AgNW/PDMS electrodes, with an illustration of the different deformations of the e-skin under the different sensing modes; dynamic strain corresponds to light intensity (left), static strain to capacitance value. (b) Schematic illustration of the enhancement of stress and capacitance by the nanoparticle-doped matrix.
Figure 2: Fabrication process from the functional layer to the e-skin with dual sensing modes. Top: schematic of the preparation of both the light-emitting layer and the dielectric layer by introducing BT into the ML film. Bottom: schematic of the preparation of the e-skin integrating the transparent electrodes and the functional layer.
Figure 3: Optical and electrical characteristics of the e-skin with nanodoped matrix modification. (a) Simplified structure of the electronic skin (left); SEM images of the surface morphology of the AgNWs/PDMS and the cross-sectional morphology of the functional layer (right). (b) Frequency dependence of the dielectric constant for different BT nanoparticle loading weight fractions. (c) Strain dependence of the capacitance of strain sensors based on the PDMS/ZnS:Cu dielectric layer with different BT contents. (d) Luminescence intensity of the as-prepared sensor with increasing BT nanoparticle content. (e) Luminescence intensity of the as-prepared sensor with different BT nanoparticle contents under 20% strain.
Figure 4: Performance characterization of the capacitive sensor with 20% BT content in the dielectric layer. (a) Relative capacitance change versus tensile strain during stretching. (b) Relative capacitance change of the strain sensor upon small strains (from 0.1 to 0.3%). (c) Relative capacitance change of the strain sensor under stretching and holding (0–30%). (d) Response stability of the sensor over 2100 cycles from 0 to 30% strain.
Figure 5: Optical characteristics of the ML from the 20% BT nanodoped e-skin. (a) Luminescence intensity as the strain varies from 0 to 40% at 600 mm/min. (b) Luminescence intensity as the stress varies from 0.44 N to 1.62 N at 600 mm/min. (c) Luminescence spectra as the velocity of the tensile strain varies. (d) Stability and repeatability test of the ML emission over nine cycles of 0–30% stretching and releasing.
Figure 6: Application of the e-skin for monitoring various strains as well as sensing strain distribution. (a1) Capacitance change of the sensor induced by finger bending; inset: photo of the sensor pasted on a finger with increasing bending degree. (a2) Grayscale of the real-time captured image of the ML emission generated by the same three bending degrees. (b1) Schematic diagram of the pressing in the strain distribution tests; inset: photograph of the sensor array based on the e-skin. Strain distribution obtained by (b2) the optical platform and (b3) the capacitive sensor array, and (b4) the image of real-time ML in the dark.
15 pages, 9175 KiB  
Article
Soft Robotic Glove with Sensing and Force Feedback for Rehabilitation in Virtual Reality
by Fengguan Li, Jiahong Chen, Guanpeng Ye, Siwei Dong, Zishu Gao and Yitong Zhou
Biomimetics 2023, 8(1), 83; https://doi.org/10.3390/biomimetics8010083 - 15 Feb 2023
Cited by 13 | Viewed by 6414
Abstract
Many diseases, such as stroke, arthritis, and spinal cord injury, can cause severe hand impairment. Treatment options for these patients are limited by expensive hand rehabilitation devices and dull treatment procedures. In this study, we present an inexpensive soft robotic glove for hand [...] Read more.
Many diseases, such as stroke, arthritis, and spinal cord injury, can cause severe hand impairment. Treatment options for these patients are limited by expensive hand rehabilitation devices and dull treatment procedures. In this study, we present an inexpensive soft robotic glove for hand rehabilitation in virtual reality (VR). Fifteen inertial measurement units are placed on the glove for finger motion tracking, and a motor–tendon actuation system mounted on the arm exerts forces on the fingertips via finger-anchoring points, providing force feedback so that users can feel the force of a virtual object. A static threshold correction and a complementary filter are used to calculate the finger attitude angles, computing the postures of all five fingers simultaneously. Both static and dynamic tests are performed to validate the accuracy of the finger-motion-tracking algorithm. A field-oriented-control-based angular closed-loop torque control algorithm is adopted to control the force applied to the fingers. Each motor can provide a maximum force of 3.14 N within the tested current limit. Finally, we present an application of the haptic glove in a Unity-based VR interface that provides the operator with haptic feedback while squeezing a soft virtual ball. Full article
(This article belongs to the Special Issue Biomimetic Soft Robotics)
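The abstract above names two ingredients of the finger-tracking pipeline: a static threshold correction and a complementary filter. The sketch below illustrates both for a single joint angle. It is a minimal, single-axis illustration rather than the authors' fifteen-IMU implementation; the function name and the parameter values (`alpha`, `w_min`) are assumptions for the example.

```python
import numpy as np

def finger_angle(gyro, accel, dt, alpha=0.98, w_min=0.02):
    """Fuse gyroscope and accelerometer data into one joint attitude angle
    (rad) using a first-order complementary filter.

    gyro  -- (N,) angular rate about the flexion axis, rad/s
    accel -- (N, 2) gravity components in the plane of motion
    dt    -- sample period, s
    alpha -- blending weight; larger values trust the gyro path more
    w_min -- static threshold: smaller rates are treated as sensor noise
    """
    theta = np.arctan2(accel[0, 0], accel[0, 1])  # initialize from gravity
    out = np.empty(len(gyro))
    for k in range(len(gyro)):
        # Static threshold correction: suppress gyro drift while at rest.
        w = gyro[k] if abs(gyro[k]) > w_min else 0.0
        theta_gyro = theta + w * dt                       # fast, drifting path
        theta_acc = np.arctan2(accel[k, 0], accel[k, 1])  # slow, drift-free path
        # Complementary blend: gyro handles transients, gravity removes drift.
        theta = alpha * theta_gyro + (1.0 - alpha) * theta_acc
        out[k] = theta
    return out
```

The gyroscope path tracks fast flexion without lag, while the accelerometer's gravity reference keeps the integrated angle from drifting; running one such filter per IMU yields the simultaneous five-finger posture estimate described above.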
Show Figures

Figure 1
<p>(<b>a</b>) Architecture of the proposed soft robotic glove system for rehabilitation. (<b>b</b>) Prototype of the integrated soft robotic glove. (<b>c</b>) Demonstration of grasping a virtual ball in a home-built VR scene.</p>
Figure 2">
Figure 2
<p>(<b>a</b>) Side view of the design drawing and top view of the prototype for an integrated FPC with three IMUs. (<b>b</b>) Placement of the IMUs on the glove. (<b>c</b>) Prototype of the data glove integrated with finger tracking hardware.</p>
Figure 3">
Figure 3
<p>(<b>a</b>) 3D rendering of the force feedback assembly including motor assembly, glove layout, and component integration. (<b>b</b>) Prototype of the integrated soft robotic glove.</p>
Figure 4">
Figure 4
<p>Execution flowchart of the software program for the VR hand training system.</p>
Figure 5">
Figure 5
<p>Flowchart of the complementary filter.</p>
Figure 6">
Figure 6
<p>IMU orientation and angles of finger interphalangeal joints for the thumb and index finger.</p>
Figure 7">
Figure 7
<p>Fundamental steps of field-oriented control (FOC).</p>
Figure 8">
Figure 8
<p>(<b>a</b>) Flowchart of FOC-based torque control. (<b>b</b>) Flowchart of FOC-based angular closed-loop torque control.</p>
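For readers unfamiliar with the field-oriented control (FOC) blocks shown in Figures 7 and 8, the sketch below spells out the two coordinate transforms and the PI current loop on which torque control rests. This is generic FOC textbook material under the stated phase conventions, not the authors' firmware; the angular closed-loop part, which would clamp the torque setpoint based on the measured motor angle, is only indicated in a comment.

```python
import math

def clarke(i_a, i_b):
    """Clarke transform: project balanced three-phase currents
    (i_a + i_b + i_c = 0) onto the stationary alpha-beta frame."""
    i_alpha = i_a
    i_beta = (i_a + 2.0 * i_b) / math.sqrt(3.0)
    return i_alpha, i_beta

def park(i_alpha, i_beta, theta_e):
    """Park transform: rotate alpha-beta currents into the rotor-fixed
    d-q frame using the electrical rotor angle theta_e."""
    i_d = i_alpha * math.cos(theta_e) + i_beta * math.sin(theta_e)
    i_q = -i_alpha * math.sin(theta_e) + i_beta * math.cos(theta_e)
    return i_d, i_q

def torque_step(i_a, i_b, theta_e, iq_ref, kp, ki, integ, dt):
    """One cycle of a PI current loop on i_q, the torque-producing axis.
    An angular closed loop would clamp iq_ref whenever the motor angle
    leaves the allowed range; that step is omitted here for brevity."""
    _, i_q = park(*clarke(i_a, i_b), theta_e)
    err = iq_ref - i_q
    integ += err * dt
    v_q = kp * err + ki * integ  # commanded q-axis voltage
    return v_q, integ
```

Torque is approximately proportional to i_q for a non-salient motor, so bounding the commanded current bounds the fingertip force, which is consistent with the current-limit sweep shown in Figure 12.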
Figure 9">
Figure 9
<p>Open state, semi-closed state, and closed state for static tests.</p>
Figure 10">
Figure 10
<p>Comparison of benchmark and our IMU results for dynamic tests for (<b>a</b>) the thumb and (<b>b</b>) index finger.</p>
Figure 11">
Figure 11
<p>Experimental setup to determine the relationship between the current limit and feedback force.</p>
Figure 12">
Figure 12
<p>Resulting feedback force versus current limit. Each “*” represents one data point obtained from the test shown in <a href="#biomimetics-08-00083-f011" class="html-fig">Figure 11</a>.</p>
Figure 13">
Figure 13
<p>Real-time VR scene application.</p>
">
28 pages, 913 KiB  
Article
Self-Calibrating Magnetometer-Free Inertial Motion Tracking of 2-DoF Joints
by Daniel Laidig, Ive Weygers and Thomas Seel
Sensors 2022, 22(24), 9850; https://doi.org/10.3390/s22249850 - 15 Dec 2022
Cited by 10 | Viewed by 3558
Abstract
Human motion analysis using inertial measurement units (IMUs) has recently been shown to provide accuracy similar to the gold standard, optical motion capture, but at lower costs and while being less restrictive and time-consuming. However, IMU-based motion analysis requires precise knowledge of the [...] Read more.
Human motion analysis using inertial measurement units (IMUs) has recently been shown to provide accuracy similar to the gold standard, optical motion capture, but at lower costs and while being less restrictive and time-consuming. However, IMU-based motion analysis requires precise knowledge of the orientations in which the sensors are attached to the body segments. This knowledge is commonly obtained via time-consuming and error-prone anatomical calibration based on precisely defined poses or motions. In the present work, we propose a self-calibrating approach for magnetometer-free joint angle tracking that is suitable for joints with two degrees of freedom (DoF), such as the elbow, ankle, and metacarpophalangeal finger joints. The proposed methods exploit kinematic constraints in the angular rates and the relative orientations to simultaneously identify the joint axes and the heading offset. The experimental evaluation shows that the proposed methods are able to estimate plausible and consistent joint axes from just ten seconds of arbitrary elbow joint motion. Comparison with optical motion capture shows that the proposed methods yield joint angles with similar accuracy as a conventional IMU-based method while being much less restrictive. Therefore, the proposed methods improve the practical usability of IMU-based motion tracking in many clinical and biomedical applications. Full article
(This article belongs to the Collection Sensors for Gait, Human Movement Analysis, and Health Monitoring)
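To make the phrase "kinematic constraints in the angular rates" concrete, the sketch below solves the classic rotation-based constraint for a 1-DoF hinge: the angular rates of the two segments, projected onto the plane perpendicular to the respective joint axis, must be equal in magnitude at every sample. The paper generalizes this idea to 2-DoF joints and simultaneously identifies the heading offset; the hinge simplification, the start values, and the use of `scipy.optimize.least_squares` are assumptions of this illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def axis_from_spherical(theta, phi):
    """Unit joint axis from two spherical parameters."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def hinge_residuals(params, gyr1, gyr2):
    """Rotation-based hinge constraint: for a 1-DoF joint,
    || gyr1 x j1 || = || gyr2 x j2 || must hold for every sample, since
    the joint only adds rotation about the axis itself and thus both
    sides equal the rate component perpendicular to the joint axis."""
    j1 = axis_from_spherical(params[0], params[1])
    j2 = axis_from_spherical(params[2], params[3])
    return (np.linalg.norm(np.cross(gyr1, j1), axis=1)
            - np.linalg.norm(np.cross(gyr2, j2), axis=1))

def estimate_hinge_axes(gyr1, gyr2):
    """Estimate the joint axis in both local sensor frames from
    arbitrary joint motion; gyr1, gyr2 are (N, 3) rates in rad/s."""
    x0 = np.full(4, np.pi / 4)  # arbitrary non-degenerate start values
    sol = least_squares(hinge_residuals, x0, args=(gyr1, gyr2))
    return (axis_from_spherical(sol.x[0], sol.x[1]),
            axis_from_spherical(sol.x[2], sol.x[3]))
```

Because only angular rates enter the residual, no magnetometer reading and no predefined calibration pose are needed, which is exactly the property that makes the approach self-calibrating.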
Show Figures

Figure 1
<p>Anatomical calibration, also called sensor-to-segment calibration, is the task of determining how the IMUs are attached to the body segments. More precisely, the rotations between the IMU coordinate systems $\mathcal{S}_{1,2}$, defined by the sensor housing, and the corresponding body segments $\mathcal{B}_{1,2}$, determined by anatomical axes such as the joint axes $\mathbf{j}_{1,2}$, have to be determined. Conventional methods rely on precisely defined calibration movements and poses, whereas the proposed methods use kinematic constraints to derive this information from arbitrary joint motion.</p>
Figure 2
<p>Anatomical model of the elbow joint. The humeroulnar joint is a hinge joint with the rotation axis $\mathbf{j}_1$, allowing for flexion and extension (FE). The radioulnar joint also has one degree of freedom ($\mathbf{j}_2$) and allows for pronation and supination (PS). In this contribution, we refer to the combined joint with two degrees of freedom as <span class="html-italic">elbow joint</span>.</p>
Figure 3
<p>(<b>a</b>) Geometric kinematic model of the elbow joint. Inertial sensors $\mathcal{S}_1$ and $\mathcal{S}_2$ are placed in arbitrary orientation on the upper arm $\mathcal{B}_1$ and forearm $\mathcal{B}_2$. Upper arm and forearm are connected by two hinge joints that allow for FE ($\mathbf{j}_1$) and PS ($\mathbf{j}_2$). (<b>b</b>) View onto the $\mathbf{j}_1$-$\mathbf{j}_2$ plane. The fixed rotation between FE and PS is called <span class="html-italic">carrying angle</span>.</p>
Figure 4
<p>Two spherical parametrizations are used to represent the joint axes $\mathbf{j}_i$, $i = 1, 2$, with two parameters each, $\theta_i$ and $\varphi_i$. To avoid the derivative becoming close to zero, we convert the respective axis to Cartesian coordinates and then to the other representation whenever $|\sin\theta_i| < 0.5$.</p>
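A minimal sketch of the representation switch described in the caption above: when $|\sin\theta_i| < 0.5$, the axis lies near the pole of the current spherical parametrization and the derivative with respect to $\varphi_i$ degenerates, so the axis is converted to Cartesian coordinates and re-expressed with the polar angle measured from a different coordinate axis. The specific pole conventions ('z' and 'x') are assumptions for illustration.

```python
import numpy as np

def cart_from_sph(theta, phi, pole):
    """Unit axis from spherical parameters; `pole` selects the Cartesian
    axis ('z' or 'x') from which the polar angle theta is measured."""
    u = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    return u if pole == 'z' else u[[2, 0, 1]]

def sph_from_cart(j, pole):
    """Inverse mapping for the selected pole."""
    x, y, z = (j if pole == 'z' else j[[1, 2, 0]])
    return np.arccos(np.clip(z, -1.0, 1.0)), np.arctan2(y, x)

def switch_if_degenerate(theta, phi, pole):
    """Re-parametrize whenever |sin(theta)| < 0.5: near the pole, phi is
    ill-conditioned, so hop to the parametrization whose pole is far away."""
    if abs(np.sin(theta)) >= 0.5:
        return theta, phi, pole
    other = 'x' if pole == 'z' else 'z'
    theta2, phi2 = sph_from_cart(cart_from_sph(theta, phi, pole), other)
    return theta2, phi2, other
```

Since $|\sin\theta| < 0.5$ places the axis within 30° of one pole, it is necessarily at least 60° away from the pole of the other parametrization, so the two representations together cover the sphere without ill-conditioned regions.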
Figure 5">
Figure 5
<p>Variability angle $\varepsilon_w$ and misalignment angle $\alpha$ used to evaluate the axis estimation results. $\varepsilon_i$ is the angle between the estimated axis for a single window and the mean estimate. $\alpha$ is the angle between the mean estimate and the axis obtained by careful manual sensor attachment. For a good anatomical calibration method, $\varepsilon_i$ should be small, showing that the estimates are consistent, and $\alpha$ should be within 30°, showing that the estimates are plausible.</p>
Figure 6
<p>Consistency and plausibility results for the first experiment with the (<b>a</b>) rotation-based constraint and the (<b>b</b>) orientation-based constraint, for two motion types and for five human subjects and a mechanical joint (m). The proposed methods estimate plausible axes for all subjects and all motions. The rotation-based constraint yields more consistent estimates than the orientation-based constraint, and the simple motion leads to better results than the complex motion.</p>
Figure 7">
Figure 7
<p>3D visualization of the estimation results for an exemplary trial (Subject 2, simple motion, rotation-based constraint). The joint axis estimates from all windows agree well (blue arrows). The PS axis agrees very well with the expected value (red arrow), while for the FE axis there is a misalignment of 17°, most likely due to imprecise manual sensor attachment.</p>
Figure 8">
Figure 8
<p>Investigation into the variability of the FE axis estimates (Subject 2, complex motion, rotation-based constraint). (<b>a</b>) 3D visualization of the axis estimates for all windows. (<b>b</b>) Plot of the estimated heading offset $\delta$ and the angle of the FE axis in the (approximately horizontal) <span class="html-italic">y</span>-<span class="html-italic">z</span>-plane of the upper arm IMU coordinate system. There is an obvious correlation, indicating that without sufficient upper arm movement, the kinematic constraint does not allow for distinguishing between a heading rotation of the joint axis and a heading offset between the sensor orientations.</p>
Figure 9
<p>Joint angle estimation errors for all trials with a conventional 9D approach and with the proposed plug-and-play magnetometer-free methods, using OMC-based angles as ground truth. The numbers below the axis labels indicate the mean root mean square error (RMSE) for all 16 trials. The proposed method with the rotation-based constraint yields the same accuracy as the much more restrictive conventional 9D method.</p>
Figure 10">
Figure 10
<p>Joint angle trajectories for an exemplary (<b>a</b>) drinking and (<b>b</b>) pick-and-place trial obtained with the proposed IMU-based method (and the rotation-based constraint), the conventional 9D IMU-based approach, and the OMC ground truth. While being much less restrictive, the proposed method is able to obtain FE and PS joint angles that agree well with the angles obtained with the other two methods.</p>
Figure 11">
Figure 11
<p>Standard deviation of the carrying angle for all trials with the different angle calculation methods. The proposed method induces the smallest variation in the assumed-to-be-constant carrying angle. This indicates that the estimated joint axes describe the functional motion axes better than the axes obtained via careful manual IMU placement (conventional IMU) and via placing markers on anatomical landmarks (OMC ground truth).</p>
Figure A1">
Figure A1
<p>Variability of the obtained axis estimates (mean and 99th percentile of $\varepsilon_w$, relative to the minimum value) for different values of the cutoff frequency of the low-pass filter used to reduce soft-tissue motion artifacts. Low-pass filtering of the angular rates increases the consistency of the axis estimates, but for too low cutoff frequencies, important information is lost and the deviations increase. Choosing a cutoff frequency of 5 Hz gives robust estimates.</p>
Figure A2
<p>Variability of the obtained axis estimates (mean and 99th percentile of $\varepsilon_w$, relative to the minimum value) for different values of the window duration and the sample selection frequency for the (<b>a</b>) rotation-based constraint and the (<b>b</b>) orientation-based constraint. In general, using more data (long windows at high sampling rates) leads to more consistent estimates but increases inconvenience for the subject and processing time.</p>
">