Sensing, Estimating, and Analyzing Human Movements for Human–Robot Interaction

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: closed (31 August 2024) | Viewed by 32638

Special Issue Editors


Guest Editor
School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China
Interests: artificial intelligence; computer vision; video coding; machine learning

Guest Editor
AI Research Institute, Harbin Institute of Technology, Shenzhen 518055, China
Interests: artificial intelligence; computer science and engineering

Guest Editor
School of Mechatronics Engineering, Harbin Institute of Technology, Harbin 150001, China
Interests: biomedical signal processing; wearable robots

Special Issue Information

Dear Colleagues,

Recent advances in human–robot interaction (HRI) play an increasingly pivotal role across a wide spectrum of robots, ranging from household to industrial and from virtual interaction to close physical collaboration. Because movement sensing is a core function of HRI systems, considerable effort and attention have been devoted to sensing, estimating, and analyzing continuous, high-dimensional human movements in order to semantically decode motor intent and even the latent beliefs underlying human motor control. The purpose of this Special Issue is therefore to describe the state of the art in human neuromuscular and cognitive behaviors as reflected by human movements, and to present the challenges associated with leveraging such knowledge in the human-centered design and control of HRI systems.

This Special Issue aims to present the latest results and emerging algorithmic techniques for sensing, estimating, and analyzing human movements in human–robot interaction. This fits the scope of Sensors, as algorithms are used to process the information collected by sensors and sensor networks.

Prof. Dr. Feng Jiang
Prof. Dr. Jie Liu
Dr. Chunzhi Yi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Human–robot interaction
  • Human movement analysis
  • Human augmentation
  • Inner belief estimation
  • Neuromuscular control
  • Human intent perception
  • Bio-inspired design and control of robots

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (15 papers)


Research

23 pages, 7112 KiB  
Article
Design and Evaluation of a Novel Variable Stiffness Hip Joint Exoskeleton
by Tao Yang, Chifu Yang, Feng Jiang and Bowen Tian
Sensors 2024, 24(20), 6693; https://doi.org/10.3390/s24206693 - 17 Oct 2024
Viewed by 634
Abstract
An exoskeleton is a wearable device with human–machine interaction characteristics. An ideal exoskeleton should have kinematic and kinetic characteristics similar to those of the wearer. Most traditional exoskeletons are driven by rigid actuators based on joint torque or position control algorithms. In order to achieve better human–robot interaction, flexible actuators have been introduced into exoskeletons. However, exoskeletons with fixed stiffness cannot adapt to changing stiffness requirements during assistance. In order to achieve collaborative control of stiffness and torque, a bionic variable stiffness hip joint exoskeleton (BVS-HJE) is designed in this article. The exoskeleton proposed in this article is inspired by the muscles that come in agonist–antagonist pairs, whose actuators are arranged in an antagonistic form on both sides of the hip joint. Compared with other exoskeletons, it has antagonistic actuators with variable stiffness mechanisms, which allow the stiffness control of the exoskeleton joint independent of force (or position) control. A BVS-HJE model was established to study its variable stiffness and static characteristics. Based on the characteristics of the BVS-HJE, a control strategy is proposed that can achieve independent adjustment of joint torque and joint stiffness. In addition, the variable stiffness mechanism can estimate the output force based on the established mathematical model through an encoder, thus eliminating the additional force sensors in the control process. Finally, the variable stiffness properties of the actuator and the controllability of joint stiffness and joint torque were verified through experiments.
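
To make the antagonistic variable-stiffness idea concrete, the sketch below uses an assumed quadratic spring law for each actuator (not the BVS-HJE's actual cable and pulley model) to show how co-contraction raises joint stiffness independently of net torque; the constants k and r are illustrative only.

    import numpy as np

    # Assumed quadratic spring law F = k * x**2 for each antagonistic actuator,
    # so each actuator's local stiffness is dF/dx = 2 * k * x.
    def joint_torque_and_stiffness(x_flex, x_ext, k=500.0, r=0.05):
        """x_flex, x_ext: spring deflections [m]; k: spring constant; r: moment arm [m]."""
        f_flex, f_ext = k * x_flex**2, k * x_ext**2      # actuator forces [N]
        torque = (f_flex - f_ext) * r                    # net joint torque [N m]
        stiffness = 2.0 * k * (x_flex + x_ext) * r**2    # joint stiffness [N m/rad]
        return torque, stiffness

    # Equal deflections (co-contraction) give zero torque but nonzero stiffness;
    # unequal deflections change torque while stiffness follows the sum.
    print(joint_torque_and_stiffness(0.02, 0.02))
    print(joint_torque_and_stiffness(0.03, 0.01))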

Figures: schematics of the hip joint exoskeleton, variable stiffness mechanism, and BVS-HJE; simplified models and inner-cable stress analysis; control algorithm diagram; simulation results; stiffness coefficient measurement, reciprocating force loading, and human-subject experiments; coordinated control and passive compliance control results.
13 pages, 2744 KiB  
Article
An Embedded Electromyogram Signal Acquisition Device
by Changjia Lu, Xin Xu, Yingjie Liu, Dan Li, Yue Wang, Wenhao Xian, Changbing Chen, Baichun Wei and Jin Tian
Sensors 2024, 24(13), 4106; https://doi.org/10.3390/s24134106 - 24 Jun 2024
Viewed by 1207
Abstract
In this study, we design an embedded surface EMG acquisition device to conveniently collect human surface EMG signals, pursue more intelligent human–computer interactions in exoskeleton robots, and enable exoskeleton robots to synchronize with or even respond to user actions in advance. The device has the characteristics of low cost, miniaturization, and strong compatibility, and it can acquire eight-channel surface EMG signals in real time while retaining the possibility of expanding the channel. This paper introduces the design and function of the embedded EMG acquisition device in detail, which includes the use of wired transmission to adapt to complex electromagnetic environments, light signals to indicate signal strength, and an embedded processing chip to reduce signal noise and perform filtering. The test results show that the device can effectively collect the original EMG signal, which provides a scheme for improving the level of human–computer interactions and enhancing the robustness and intelligence of exoskeleton equipment. The development of this device provides a new possibility for the intellectualization of exoskeleton systems and reductions in their cost.
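
As a rough illustration of the kind of conditioning such a device performs, the following sketch applies a standard 20–450 Hz band-pass filter and a 50 Hz mains notch to an eight-channel recording; the sampling rate and filter settings are assumptions, since the paper's filtering runs on the embedded chip with its own parameters.

    import numpy as np
    from scipy import signal

    fs = 1000.0                                          # assumed sampling rate [Hz]
    b_bp, a_bp = signal.butter(4, [20, 450], btype="bandpass", fs=fs)
    b_nt, a_nt = signal.iirnotch(50.0, Q=30.0, fs=fs)    # mains interference notch

    def condition_semg(raw):
        """raw: (n_samples, n_channels) array of raw sEMG."""
        x = signal.filtfilt(b_bp, a_bp, raw, axis=0)     # suppress drift and HF noise
        return signal.filtfilt(b_nt, a_nt, x, axis=0)    # remove 50 Hz interference

    emg = condition_semg(np.random.randn(5000, 8))       # eight channels, as in the device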

Figures: surface EMG signal schematic; overall, hardware, and software design frameworks; ADS1298, second-order passive filter, and LED driver circuits; filtered data examples.
18 pages, 9319 KiB  
Article
Mapping Method of Human Arm Motion Based on Surface Electromyography Signals
by Yuanyuan Zheng, Gang Zheng, Hanqi Zhang, Bochen Zhao and Peng Sun
Sensors 2024, 24(9), 2827; https://doi.org/10.3390/s24092827 - 29 Apr 2024
Cited by 4 | Viewed by 1346
Abstract
This paper investigates a method for precise mapping of human arm movements using sEMG signals. A multi-channel approach captures the sEMG signals, which, combined with the accurately calculated joint angles from an Inertial Measurement Unit, allows for action recognition and mapping through deep learning algorithms. Firstly, signal acquisition and processing were carried out, which involved acquiring data from various movements (hand gestures, single-degree-of-freedom joint movements, and continuous joint actions) and sensor placement. Then, interference signals were filtered out through filters, and the signals were preprocessed using normalization and moving averages to obtain sEMG signals with obvious features. Additionally, this paper constructs a hybrid network model, combining Convolutional Neural Networks and Artificial Neural Networks, and employs a multi-feature fusion algorithm to enhance the accuracy of gesture recognition. Furthermore, a nonlinear fitting between sEMG signals and joint angles was established based on a backpropagation neural network, incorporating momentum term and adaptive learning rate adjustments. Finally, based on the gesture recognition and joint angle prediction model, prosthetic arm control experiments were conducted, achieving highly accurate arm movement prediction and execution. This paper not only validates the potential application of sEMG signals in the precise control of robotic arms but also lays a solid foundation for the development of more intuitive and responsive prostheses and assistive devices.
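
A minimal sketch of the envelope-style preprocessing mentioned above (rectification, moving average, per-channel normalization); the window length and normalization scheme are assumptions rather than the paper's exact settings.

    import numpy as np

    def emg_envelope(emg, win=100):
        """emg: (n_samples, n_channels) filtered sEMG; win: moving-average window in samples."""
        rectified = np.abs(emg)
        kernel = np.ones(win) / win
        smoothed = np.apply_along_axis(
            lambda ch: np.convolve(ch, kernel, mode="same"), 0, rectified)
        lo, hi = smoothed.min(axis=0), smoothed.max(axis=0)
        return (smoothed - lo) / (hi - lo + 1e-8)    # scale each channel to [0, 1]

    features = emg_envelope(np.random.randn(4000, 8))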

Figures: sEMG acquisition points; MYO armband and IMU placement; experimental action flows; raw, filtered, and feature-extracted sEMG signals; gesture recognition network and results; joint-angle error and prediction curves; humanoid manipulator control experiments.
24 pages, 4796 KiB  
Article
sEMG-Based Robust Recognition of Grasping Postures with a Machine Learning Approach for Low-Cost Hand Control
by Marta C. Mora, José V. García-Ortiz and Joaquín Cerdá-Boluda
Sensors 2024, 24(7), 2063; https://doi.org/10.3390/s24072063 - 23 Mar 2024
Cited by 2 | Viewed by 1428
Abstract
The design and control of artificial hands remains a challenge in engineering. Popular prostheses are bio-mechanically simple with restricted manipulation capabilities, as advanced devices are pricy or abandoned due to their difficult communication with the hand. For social robots, the interpretation of human intention is key for their integration in daily life. This can be achieved with machine learning (ML) algorithms, which are barely used for grasping posture recognition. This work proposes an ML approach to recognize nine hand postures, representing 90% of the activities of daily living in real time using an sEMG human–robot interface (HRI). Data from 20 subjects wearing a Myo armband (8 sEMG signals) were gathered from the NinaPro DS5 and from experimental tests with the YCB Object Set, and they were used jointly in the development of a simple multi-layer perceptron in MATLAB, with a global percentage success of 73% using only two features. GPU-based implementations were run to select the best architecture, with generalization capabilities, robustness-versus-electrode shift, low memory expense, and real-time performance. This architecture enables the implementation of grasping posture recognition in low-cost devices, aimed at the development of affordable functional prostheses and HRI for social robots.
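
The two window features named in the abstract (mean absolute value and skewness) are straightforward to compute; the sketch below pairs them with a small scikit-learn perceptron as a stand-in for the MATLAB MLP, with the window length, hidden-layer size, and placeholder data chosen arbitrarily for illustration.

    import numpy as np
    from scipy.stats import skew
    from sklearn.neural_network import MLPClassifier

    def window_features(window):
        """window: (n_samples, 8) sEMG window from the Myo armband."""
        mav = np.mean(np.abs(window), axis=0)    # mean absolute value per channel
        sk = skew(window, axis=0)                # skewness per channel
        return np.concatenate([mav, sk])         # 16-dimensional feature vector

    # Illustrative training call on random placeholder data (9 posture classes).
    X = np.stack([window_features(np.random.randn(200, 8)) for _ in range(90)])
    y = np.repeat(np.arange(9), 10)
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500).fit(X, y)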

Figures: grasp taxonomy examples; Myo armband and recognized postures; Ninapro DS5 grasping postures; YCB objects used in the grasping experiments; data collection application; multi-layer perceptron architecture; confusion matrix for an unseen subject; grasping experiment setup.
19 pages, 3767 KiB  
Article
A Generative Model to Embed Human Expressivity into Robot Motions
by Pablo Osorio, Ryusuke Sagawa, Naoko Abe and Gentiane Venture
Sensors 2024, 24(2), 569; https://doi.org/10.3390/s24020569 - 16 Jan 2024
Cited by 2 | Viewed by 1885
Abstract
This paper presents a model for generating expressive robot motions based on human expressive movements. The proposed data-driven approach combines variational autoencoders and a generative adversarial network framework to extract the essential features of human expressive motion and generate expressive robot motion accordingly. The primary objective was to transfer the underlying expressive features from human to robot motion. The input to the model consists of the robot task defined by the robot’s linear velocities and angular velocities and the expressive data defined by the movement of a human body part, represented by the acceleration and angular velocity. The experimental results show that the model can effectively recognize and transfer expressive cues to the robot, producing new movements that incorporate the expressive qualities derived from the human input. Furthermore, the generated motions exhibited variability with different human inputs, highlighting the ability of the model to produce diverse outputs.
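
A conceptual sketch of the latent-space manipulation described in this work: the neutral human latent code is subtracted from the expressive one, scaled by the gain λ, and merged with the robot-task latent. The additive merge and the plain NumPy vectors are illustrative assumptions; the actual model uses trained VAE encoders and a GAN generator.

    import numpy as np

    def expressive_robot_latent(z_h, z_nh, z_r, lam=1.0):
        """z_h: expressive human latent; z_nh: neutral human latent; z_r: robot-task latent."""
        z_hs = z_h - z_nh          # expressive residual with neutral motion removed
        return z_r + lam * z_hs    # additive merge is an assumption for illustration

    z_h, z_nh, z_r = (np.random.randn(16) for _ in range(3))
    z_low = expressive_robot_latent(z_h, z_nh, z_r, lam=1.0)
    z_high = expressive_robot_latent(z_h, z_nh, z_r, lam=100.0)   # stronger expressivity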

Figures: framework overview; network output distribution and t-SNE analysis; Jensen–Shannon and cosine similarity analyses; effects of λ and emotion labels on generated trajectories; experimental setups for the mobile base and 5DoF robot arm.
20 pages, 1562 KiB  
Article
Sensing the Intentions to Speak in VR Group Discussions
by Jiadong Chen, Chenghao Gu, Jiayi Zhang, Zhankun Liu and Shin'ichi Konomi
Sensors 2024, 24(2), 362; https://doi.org/10.3390/s24020362 - 7 Jan 2024
Cited by 1 | Viewed by 1605
Abstract
While virtual reality (VR) technologies enable remote communication through the use of 3D avatars, it is often difficult to foster engaging group discussions without addressing the limitations to the non-verbal communication among distributed participants. In this paper, we discuss a technique to detect the intentions to speak in group discussions by tapping into intricate sensor data streams from VR headsets and hand-controllers. To this end, we developed a prototype VR group discussion app equipped with comprehensive sensor data-logging functions and conducted an experiment of VR group discussions (N = 24). We used the quantitative and qualitative experimental data to analyze participants’ experiences of group discussions in relation to the temporal patterns of their different speaking intentions. We then propose a sensor-based mechanism for detecting speaking intentions by employing a sampling strategy that considers the temporal patterns of speaking intentions, and we verify the feasibility of our approach in group discussion settings.
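
A hedged sketch of the windowed sampling strategy: positive examples are cut just before actively initiated utterances and negative examples far from any onset. The 1.5 s window follows the paper's description; the frame rate, feature count, and negative-sample offset are assumptions.

    import numpy as np

    FS = 60                      # assumed VR sensor frame rate [Hz]
    WIN = int(1.5 * FS)          # 1.5 s window, as described in the paper

    def cut_windows(motion, onsets, offset=10 * FS):
        """motion: (n_frames, 63) sensor + relational features; onsets: speech-start frames."""
        pos = [motion[t - WIN:t] for t in onsets if t >= WIN]
        neg = [motion[t - offset - WIN:t - offset] for t in onsets if t >= offset + WIN]
        return np.stack(pos), np.stack(neg)

    motion = np.random.randn(6000, 63)
    positives, negatives = cut_windows(motion, onsets=[1200, 3000, 5500])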

Figures: virtual discussion environment; annotation interface; experimental procedure; utterance segmentation; questionnaire responses; analysis of participant-annotated labels; sampling process; model ROC curves.
24 pages, 5599 KiB  
Article
Comparative Analysis of the Clustering Quality in Self-Organizing Maps for Human Posture Classification
by Lisiane Esther Ekemeyong Awong and Teresa Zielinska
Sensors 2023, 23(18), 7925; https://doi.org/10.3390/s23187925 - 15 Sep 2023
Cited by 8 | Viewed by 2210
Abstract
The objective of this article is to develop a methodology for selecting the appropriate number of clusters to group and identify human postures using neural networks with unsupervised self-organizing maps. Although unsupervised clustering algorithms have proven effective in recognizing human postures, many works are limited to testing which data are correctly or incorrectly recognized. They often neglect the task of selecting the appropriate number of groups (where the number of clusters corresponds to the number of output neurons, i.e., the number of postures) using clustering quality assessments. The use of quality scores to determine the number of clusters frees the expert from making subjective decisions about the number of postures, enabling the use of unsupervised learning. Due to high dimensionality and data variability, expert decisions (referred to as data labeling) can be difficult and time-consuming. In our case, there is no manual labeling step. We introduce a new clustering quality score: the discriminant score (DS). We describe the process of selecting the most suitable number of postures using human activity records captured by RGB-D cameras. Comparative studies on the usefulness of popular clustering quality scores—such as the silhouette coefficient, Dunn index, Calinski–Harabasz index, Davies–Bouldin index, and DS—for posture classification tasks are presented, along with graphical illustrations of the results produced by DS. The findings show that DS offers good quality in posture recognition, effectively following postural transitions and similarities.
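
The standard indices compared in the article are all available in scikit-learn; the sketch below scans candidate cluster counts on placeholder data (K-means stands in for the self-organizing map, and the article's own discriminant score is not reproduced).

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import (silhouette_score, davies_bouldin_score,
                                 calinski_harabasz_score)

    X = np.random.rand(500, 30)                  # placeholder posture feature vectors
    for k in range(3, 7):                        # candidate numbers of postures
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(X)
        print(k,
              round(silhouette_score(X, labels), 3),
              round(davies_bouldin_score(X, labels), 3),
              round(calinski_harabasz_score(X, labels), 3))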

Figures: silhouette, Dunn, Davies–Bouldin, and Calinski–Harabasz index plots; quantization errors for training and testing data; discriminant score plots for 3–6 outputs under cosine and Euclidean distances; comparative discriminant score plots for TA clusters.
18 pages, 6705 KiB  
Article
A Multi-Target Localization and Vital Sign Detection Method Using Ultra-Wide Band Radar
by Jingwen Zhang, Qingjie Qi, Huifeng Cheng, Lifeng Sun, Siyun Liu, Yue Wang and Xinlei Jia
Sensors 2023, 23(13), 5779; https://doi.org/10.3390/s23135779 - 21 Jun 2023
Cited by 7 | Viewed by 2139
Abstract
Life detection technology using ultra-wideband (UWB) radar is a non-contact, active detection technology, which can be used to search for survivors in disaster rescues. The existing multi-target detection method based on UWB radar echo signals has low accuracy and has difficulty extracting breathing and heartbeat information at the same time. Therefore, this paper proposes a new multi-target localization and vital sign detection method using ultra-wideband radar. A target recognition and localization method based on permutation entropy (PE) and K-means++ clustering is proposed to determine the number and position of targets in the environment. An adaptive denoising method for vital sign extraction based on ensemble empirical mode decomposition (EEMD) and wavelet analysis (WA) is proposed to reconstruct the breathing and heartbeat signals of human targets. A heartbeat frequency extraction method based on particle swarm optimization (PSO) and stochastic resonance (SR) is proposed to detect the heartbeat frequency of human targets. Experimental results show that the PE–K-means++ method can successfully recognize and locate multiple human targets in the environment, and its average relative error is 1.83%. Using the EEMD–WA method can effectively filter the clutter signal, and the average relative error of the reconstructed respiratory signal frequency is 4.27%. The average relative error of heartbeat frequency detected by the PSO–SR method was 6.23%. The multi-target localization and vital sign detection method proposed in this paper can effectively recognize all human targets in the multi-target scene and provide their accurate location and vital signs information. This provides a theoretical basis for the technical system of emergency rescue and technical support for post-disaster rescue.
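
Permutation entropy, the statistic used here to separate range bins containing human motion from noise-only bins, can be computed in a few lines; the embedding dimension and delay below are common defaults, not the paper's settings.

    import numpy as np
    from math import factorial

    def permutation_entropy(x, m=3, tau=1):
        """Normalized permutation entropy of a 1-D signal x."""
        n = len(x) - (m - 1) * tau
        patterns = np.array([np.argsort(x[i:i + m * tau:tau]) for i in range(n)])
        _, counts = np.unique(patterns, axis=0, return_counts=True)
        p = counts / counts.sum()
        return float(-np.sum(p * np.log(p)) / np.log(factorial(m)))

    # Slow-time samples from a range bin with breathing motion tend to give a
    # lower value than a pure-noise bin, which is what the PE curve separates.
    print(permutation_entropy(np.random.randn(1024)))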

Figures: method flowchart; permutation, sample, and fuzzy entropy curves; PSO-based adaptive stochastic resonance flow and potential function; UWB radar system and experiment site; preprocessing, PE-based target recognition, and clustering results; EEMD decomposition and IMF frequencies; reconstructed respiratory and heartbeat signals and their spectra.
35 pages, 15515 KiB  
Article
Design and Evaluation of an Alternative Control for a Quad-Rotor Drone Using Hand-Gesture Recognition
by Siavash Khaksar, Luke Checker, Bita Borazjan and Iain Murray
Sensors 2023, 23(12), 5462; https://doi.org/10.3390/s23125462 - 9 Jun 2023
Cited by 3 | Viewed by 1542
Abstract
Gesture recognition is a mechanism by which a system recognizes an expressive and purposeful action made by a user’s body. Hand-gesture recognition (HGR) is a staple piece of gesture-recognition literature and has been keenly researched over the past 40 years. Over this time, HGR solutions have varied in medium, method, and application. Modern developments in the areas of machine perception have seen the rise of single-camera, skeletal model, hand-gesture identification algorithms, such as MediaPipe Hands (MPH). This paper evaluates the applicability of these modern HGR algorithms within the context of alternative control. Specifically, this is achieved through the development of an HGR-based alternative-control system capable of controlling a quad-rotor drone. The technical importance of this paper stems from the results produced during the novel and clinically sound evaluation of MPH, alongside the investigatory framework used to develop the final HGR algorithm. The evaluation of MPH highlighted the Z-axis instability of its modelling system, which reduced the landmark accuracy of its output from 86.7% to 41.5%. The selection of an appropriate classifier complemented the computationally lightweight nature of MPH whilst compensating for its instability, achieving a classification accuracy of 96.25% for eight single-hand static gestures. The success of the developed HGR algorithm ensured that the proposed alternative-control system could facilitate intuitive, computationally inexpensive, and repeatable drone control without requiring specialised equipment.
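
For readers unfamiliar with MPH, the sketch below shows the typical way its Python API exposes the 21-landmark skeletal model from a single camera frame (assuming the mediapipe and opencv-python packages and a webcam are available); turning the landmarks into joint angles and classifying them, as the paper does, is left out.

    import cv2
    import mediapipe as mp
    import numpy as np

    mp_hands = mp.solutions.hands

    def landmarks_from_frame(frame_bgr, hands):
        """Return the 21 (x, y, z) hand landmarks for one frame, or None."""
        result = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
        if not result.multi_hand_landmarks:
            return None
        lm = result.multi_hand_landmarks[0].landmark
        return np.array([[p.x, p.y, p.z] for p in lm])    # 21 x 3 skeletal model

    with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
        cap = cv2.VideoCapture(0)
        ok, frame = cap.read()
        if ok:
            print(landmarks_from_frame(frame, hands))
        cap.release()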

Figures: HGR data-acquisition categories; hand-gesture modelling methods; investigation structure; example sign-language gesture set; MPH and CVZ landmark outputs; confusion matrices for bounds-based, linear-regression, ANN, and SVM classifiers; computational performance comparison; test flight paths; three- and two-dimensional joint-calculation methods; TELLO drone.
17 pages, 2198 KiB  
Article
Vision-Based Efficient Robotic Manipulation with a Dual-Streaming Compact Convolutional Transformer
by Hao Guo, Meichao Song, Zhen Ding, Chunzhi Yi and Feng Jiang
Sensors 2023, 23(1), 515; https://doi.org/10.3390/s23010515 - 3 Jan 2023
Cited by 2 | Viewed by 2747
Abstract
Learning from visual observation for efficient robotic manipulation remains a significant challenge in Reinforcement Learning (RL). Although pairing RL policies with a convolutional neural network (CNN) visual encoder achieves high efficiency and success rates, the general multi-task performance of such methods is still limited by the efficacy of the encoder. Meanwhile, the increasing cost of optimizing the encoder for general performance can erode the efficiency advantage of the original policy. Building on the attention mechanism, we design a robotic manipulation method that significantly improves general policy performance across multiple tasks using a lite Transformer-based visual encoder, unsupervised learning, and data augmentation. The encoder of our method achieves the performance of the original Transformer with much less data, keeping the training process efficient while strengthening general multi-task performance. Furthermore, we experimentally demonstrate that the master view outperforms alternative third-person views in general robotic manipulation tasks when third-person and egocentric views are combined to assimilate global and local visual information. In extensive experiments on tasks from the OpenAI Gym Fetch environment, particularly the Push task, our method achieves a success rate of 92%, versus baselines of 65%, 78% for the CNN encoder, and 81% for the ViT encoder, while requiring fewer training steps.
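
The contrastive pre-training step mentioned above typically optimizes an InfoNCE-style objective over augmented observation pairs; the PyTorch sketch below shows that loss in isolation (temperature, batch size, and embedding width are arbitrary, and the query/key encoders are omitted).

    import torch
    import torch.nn.functional as F

    def info_nce(q, k, temperature=0.1):
        """q, k: (batch, dim) embeddings of two augmentations of the same observations."""
        q, k = F.normalize(q, dim=1), F.normalize(k, dim=1)
        logits = q @ k.t() / temperature      # similarity of every query to every key
        labels = torch.arange(q.size(0))      # matching pairs lie on the diagonal
        return F.cross_entropy(logits, labels)

    loss = info_nce(torch.randn(32, 128), torch.randn(32, 128))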

Figures: architectural overview; DCCT/Transformer encoder architecture; OpenAI Gym Fetch tasks (Reach, Pick-and-Place, Push); ablations of the patching method, Transformer encoder, and pre-training iterations; attention maps for master and OTS views; third-person view ablation.
18 pages, 9221 KiB  
Article
Cross-Modal Reconstruction for Tactile Signal in Human–Robot Interaction
by Mingkai Chen and Yu Xie
Sensors 2022, 22(17), 6517; https://doi.org/10.3390/s22176517 - 29 Aug 2022
Cited by 2 | Viewed by 1846
Abstract
A human can infer the magnitude of interaction force solely based on visual information because of prior knowledge in human–robot interaction (HRI). A method of reconstructing tactile information through cross-modal signal processing is proposed in this paper. In our method, visual information is added as an auxiliary source to tactile information. In this case, the receiver is only able to determine the tactile interaction force from the visual information provided. In our method, we first process groups of pictures (GOPs) and treat them as the input. Secondly, we use the low-rank foreground-based attention mechanism (LAM) to detect regions of interest (ROIs). Finally, we propose a linear regression convolutional neural network (LRCNN) to infer contact force in video frames. The experimental results show that our cross-modal reconstruction is indeed feasible. Furthermore, compared to other work, our method is able to reduce the complexity of the network and improve the material identification accuracy.

Graphical abstract and figures: LRCNN architecture and network structure; video frame preprocessing; low-rank foreground-based spatial and channel attention mechanisms; combined network; force-estimation dataset; single-frame versus LRCNN prediction performance and loss; LAM and LSAM results; comparison of the three networks under a single illumination source.
10 pages, 1254 KiB  
Article
Human Pulse Detection by a Soft Tactile Actuator
by Zixin Huang, Xinpeng Li, Jiarun Wang, Yi Zhang and Jingfu Mei
Sensors 2022, 22(13), 5047; https://doi.org/10.3390/s22135047 - 5 Jul 2022
Cited by 6 | Viewed by 2522
Abstract
Soft sensing technologies offer promising prospects in the fields of soft robots, wearable devices, and biomedical instruments. However, the structural design, fabrication process, and sensing algorithm design of soft devices confront great difficulties. In this paper, a soft tactile actuator (STA) with both an actuation function and a sensing function is presented. The tactile physiotherapy finger of the STA was fabricated from a fluid silica gel material. Before pulse detection, the tactile physiotherapy finger was actuated to the detection position by injecting compressed air into its chamber. The pulse detecting algorithm, which realizes the pulse detection function of the STA, is presented. Finally, in actual pulse detection experiments, the pulse values of the volunteers detected using the STA and those measured with a professional pulse meter were close, which illustrates the effectiveness of the pulse detecting algorithm of the STA.
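
A hedged sketch of a pulse-counting scheme in the spirit of the algorithm above: threshold the stress signal into a binary sequence, clean it with a sliding-window majority filter, and count rising edges. The threshold, window length, and sampling rate are assumptions, not the STA's actual parameters.

    import numpy as np

    def pulse_rate(stress, fs=100.0, win=15):
        """stress: 1-D stress signal; fs: sampling rate [Hz]; returns beats per minute."""
        binary = (stress > stress.mean()).astype(int)             # binary sequence
        smooth = (np.convolve(binary, np.ones(win), mode="same") > win / 2).astype(int)
        beats = int(np.sum(np.diff(smooth) == 1))                 # rising edges = pulses
        return beats * 60.0 * fs / len(stress)

    t = np.arange(0, 10, 1 / 100.0)
    print(pulse_rate(np.sin(2 * np.pi * 1.2 * t)))                # roughly 72 for a 1.2 Hz test signal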

Figures: finished STA; injection molding of the upper and base molds; unsealed finger and finished tactile physiotherapy finger; experimental platform; pulse detection setup; stress data of volunteer A; sliding-window filtering and comparison of binary and filtered binary sequences.
13 pages, 5163 KiB  
Article
Estimation of Tibiofemoral Joint Contact Forces Using Foot Loads during Continuous Passive Motions
by Yunlong Yang, Huixuan Huang, Junlong Guo, Fei Yu and Yufeng Yao
Sensors 2022, 22(13), 4947; https://doi.org/10.3390/s22134947 - 30 Jun 2022
Cited by 1 | Viewed by 2039
Abstract
Continuous passive motion (CPM) machines are commonly used after various knee surgeries, but information on tibiofemoral forces (TFFs) during CPM cycles is limited. This study aimed to explore the changing trend of TFFs during CPM cycles under various ranges of motion (ROM) and [...] Read more.
Continuous passive motion (CPM) machines are commonly used after various knee surgeries, but information on tibiofemoral forces (TFFs) during CPM cycles is limited. This study aimed to explore the changing trend of TFFs during CPM cycles under various ranges of motion (ROM) and body weights (BW) by establishing a two-dimensional mathematical model. TFFs were estimated using joint angles, foot load, and leg–foot weight. Eleven healthy male participants were tested with ROM ranging from 0° to 120°. The peak TFFs during knee flexion were higher than those during knee extension, varying nonlinearly with ROM. BW had a significant main effect on the peak TFFs and tibiofemoral shear forces, while ROM had a limited effect on the peak TFFs. No significant interaction effects between BW and ROM were observed for any peak TFF, whereas a strong linear correlation existed between the peak tibiofemoral compressive forces (TFCFs) and the peak resultant TFFs (R2 = 0.971, p < 0.01). The proposed method shows promise as an input for optimizing rehabilitation devices.
Figures
Graphical abstract
Figure 1: Acquisition system used in the experiments. (a) Sketch of the experimental setup; (b) graphical user interface and receiving device.
Figure 2: Curve of pressure soles during calibration. (a) Curve of the sensors summed for participant A4; (b) example curve of foot pressure.
Figure 3: CoP distribution of the foot pressure.
Figure 4: Free-body diagram of the femur and tibia during CPM cycles. (a) F0, the force generated by plantar pressure; F1, the resultant force of F0; Fn, the normal force imparted by the tibia brace; G, the weight of the leg–foot system; Mk, the passive torque of the knee generated by the tissue; β, the angle of tibia long-axis incline; α, the angle between Fs and G; dn, the distance from the center of mass of the leg–foot system to the knee axis [36]; dG, the moment arm of G around the knee axis; ψk, the knee flexion angle; Fc, the tibiofemoral compressive force; Fs, the tibiofemoral shear force; ψh, the hip flexion angle. (b) Forces and distances for the free-body diagram of the femur and tibia during knee extension.
Figure 5: Mean (and standard error) of the estimated resultant TFFs from three subjects during typical CPM cycles. (a–d) represent 60°–120°, respectively; 1 = first major peak during knee flexion; 2 = second major peak during knee extension. The shaded area represents ±1 standard deviation.
Figure 6: Mean of the major peak resultant TFFs for four ROM across CPM cycles. All significant results were compared with the ROM of 80° (* p < 0.05).
Figure 7: Mean of the estimated TFCFs (solid lines) and TFSFs (dashed lines) from participants during three repeated typical CPM cycles at different ROMs; 1 = first major peak during knee flexion; 2 = second major peak during knee extension. A1 CF represents the compressive forces of A1, and A1 SF represents the shear forces of A1. (a–d) represent 60°–120°, respectively.
Figure 8: Measured plantar pressure curves.
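The legend of Figure 4 lists the quantities that enter the two-dimensional model: the plantar force, the leg–foot weight, the brace normal force, the knee passive torque, and the knee and tibia angles, from which the tibiofemoral compressive force Fc and shear force Fs are resolved. The authors' full equations are not reproduced in this listing; the snippet below is only a toy decomposition of a measured foot load and segment weight into components along and across the tibia long axis, to show the kind of calculation involved.

# Illustrative only: resolving a measured foot load and the leg-foot weight into
# components along (compressive) and across (shear) the tibia long axis. The
# paper's full model also includes the brace normal force and the knee passive
# torque, which are omitted from this toy decomposition.
import numpy as np

def tibiofemoral_components(foot_load_N, leg_foot_weight_N, beta_deg):
    """beta_deg: inclination of the tibia long axis from the horizontal."""
    beta = np.radians(beta_deg)
    axis = np.array([np.cos(beta), np.sin(beta)])            # unit vector along the tibia
    total = np.array([foot_load_N, 0.0]) + np.array([0.0, -leg_foot_weight_N])
    f_c = float(np.dot(total, axis))                          # compressive component
    f_s = float(np.linalg.norm(total - f_c * axis))           # shear component
    return f_c, f_s

print(tibiofemoral_components(foot_load_N=120.0, leg_foot_weight_N=45.0, beta_deg=60.0))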
19 pages, 6883 KiB  
Article
A Machine Learning Model for Predicting Sit-to-Stand Trajectories of People with and without Stroke: Towards Adaptive Robotic Assistance
by Thomas Bennett, Praveen Kumar and Virginia Ruiz Garate
Sensors 2022, 22(13), 4789; https://doi.org/10.3390/s22134789 - 24 Jun 2022
Cited by 3 | Viewed by 3267
Abstract
Sit-to-stand and stand-to-sit transfers are fundamental daily motions that enable all other types of ambulation and gait. However, the ability to perform these motions can be severely impaired by factors such as a stroke, limiting the ability to engage in other daily activities. This study presents the recording and analysis of a comprehensive database of full-body biomechanics and force data captured during sit-to-stand-to-sit movements in subjects who have and have not experienced a stroke. These data were then used in conjunction with simple machine learning algorithms to predict vertical motion trajectories that could be further employed for the control of an assistive robot. A total of 30 people (including 6 with stroke) each performed 20 sit-to-stand-to-sit actions at two different seat heights, from which average trajectories were created. Weighted k-nearest neighbours and linear regression models were then applied to two different sets of key participant parameters (height and weight, and BMI and age) to produce a predicted trajectory. The resulting trajectories matched the true ones for non-stroke subjects with an average R2 score of 0.864 ± 0.134 using k = 3 at 100% seat height with the height and weight parameters. Even among the small sample of stroke patients, balance and motion trends were noticeable, along with large within-class variation, showing that larger-scale trials need to be run to obtain significant results. The full dataset of sit-to-stand-to-sit actions for each user is made publicly available for further research.
Figures
Figure 1: Participant wearing XSens suit, seated on the sensor mat attached to a rigid board on an adjustable plinth, with feet placed on the balance board. This setup shows the seat at 115% knee height. Reference frames are shown for guidance on the following methods and results. Yellow and black tape in the middle of the balance board was to help the participants stand near the centre, although they were allowed to move their feet at will to keep themselves balanced and comfortable.
Figure 2: Flow chart depicting data capture procedure for each participant.
Figure 3: Flow chart depicting data processing and trajectory prediction for each participant.
Figure 4: (a) Average weight placed on each side of the balance board for stroke (top) and non-stroke (bottom) users, 10 sit-to-stand movements, at 100% seat height. (b) Weight placed on each side of the balance board for stroke (top) and non-stroke (bottom) users, stand-to-sit movement, at 100% seat height.
Figure 5: Centre of pressure trajectories for stroke and non-stroke participants, with the start position of each line normalised to (0,0). Each colour represents a participant. Single examples highlighted solely for clarity of comparison.
Figure 6: Average weight distribution on seat mat during (a) sit-to-stand action, and (b) stand-to-sit actions. Each sensor reading weight is in kg. Percentage progress through movement is highlighted in white.
Figure 7: Example of full sit-to-stand action, showing points captured from individual markers.
Figure 8: Trajectories predicted by k-NN and linear regression model imposed over stroke participants' (labelled S1–S6) recorded trajectories. Two left columns show sit-to-stand and stand-to-sit for 100% seat height. Two right columns show sit-to-stand and stand-to-sit for 115% seat height. Red lines show participants' average true trajectory with standard deviations. Blue lines are trajectories predicted by the k-NN and linear regression model using height and weight as k-NN coordinates. Green lines are for predicted trajectories using age and BMI as k-NN coordinates. The y axes on each graph represent the Z position of the mid-shoulder point, and the x axes show percentage completion of the STSTS movement.
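The abstract describes predicting a user's average vertical (mid-shoulder Z) trajectory from simple body parameters with weighted k-nearest neighbours (k = 3) over (height, weight) or (BMI, age). As a hedged sketch only, the same idea can be expressed with scikit-learn's KNeighborsRegressor and distance weighting; the training data below are random placeholders rather than the published dataset, and the authors' preprocessing may differ.

# Hedged sketch: weighted k-NN prediction of an average sit-to-stand trajectory
# from (height, weight). Training data here are random placeholders, not the
# published dataset.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
n_subjects, n_samples = 24, 101                  # 101 points = 0..100% of the movement

X = np.column_stack([rng.uniform(1.50, 1.95, n_subjects),    # height (m)
                     rng.uniform(50, 100, n_subjects)])      # weight (kg)
Y = rng.normal(size=(n_subjects, n_samples)).cumsum(axis=1)  # placeholder trajectories

model = KNeighborsRegressor(n_neighbors=3, weights="distance")
model.fit(X, Y)                                  # multi-output: one value per % progress

new_subject = np.array([[1.72, 68.0]])           # height, weight of an unseen user
predicted_trajectory = model.predict(new_subject)[0]          # shape (101,)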
21 pages, 6852 KiB  
Article
Assessment of Pain Onset and Maximum Bearable Pain Thresholds in Physical Contact Situations
by Doyeon Han, Moonyoung Park, Junsuk Choi, Heonseop Shin, Donghwan Kim and Sungsoo Rhim
Sensors 2022, 22(8), 2996; https://doi.org/10.3390/s22082996 - 13 Apr 2022
Cited by 7 | Viewed by 2699
Abstract
With the development of robot technology, robot utilization is expanding in industrial fields and everyday life. To employ robots in various fields wherein humans and robots share the same space, human safety must be guaranteed in the event of a human–robot collision. Therefore, safety criteria and limits need to be defined and clearly specified. In this study, we induced mechanical pain in humans through quasi-static contact with an algometric device at 29 parts of the human body. A manual apparatus was developed to apply and monitor force and pressure. Forty healthy men participated voluntarily in the study. The measured physical quantities were classified by pain onset and maximum bearable pain. The overall results reflect the subjective nature of pain, which led to considerable inter-individual variation in the onset and threshold of pain. Based on the results, a quasi-static contact pain evaluation method was established, and biomechanical safety limits on forces and pressures were formulated. The pain thresholds attributed to quasi-static contact can serve as a safety standard for the robots employed.
Figures
Figure 1: Test apparatus system consists of (a) algometer part, (b) stationary stanchion part, (c) algometer transfer and rotation system.
Figure 2: Elements of algometer part used in this study: (a) clinical trial with algometric system, (b) contact probe for measuring contact pressure, (c) contact probe for measuring contact force.
Figure 3: Flow diagram of clinical trial procedure.
Figure 4: Pain thresholds for pain onset.
Figure 5: Pain threshold for maximum bearable pain.
Figure 6: Standard deviation from the pain onset (a), and maximum bearable pain (b) at different body parts.
Figure 7: Analyzed heat map results for all subjects. From left to right, the results of the skin reaction (a), skin reaction (b), vascular reaction of the cube-shaped collider (c), and vascular reaction (d) of the cylindrical collider.
Figure 8: Representative photographs acquired to analyze the degree of skin and vascular reactions. (a) Skin reaction degree 2 (mild erythema)/vascular reaction degree 1 (petechia); (b) skin reaction degree 3/vascular reaction degree 3 (purpura); (c) skin reaction degree 0/vascular reaction degree 3 (purpura).
Figure 9: Residual pain after the testing of all 29 pain measurement sites (rigid hexahedral contact probe).
Figure 10: Residual pain after the testing of all 29 pain measurement sites (soft cylindrical contact probe).
Figure A1: Measurement points and the numbering of each point.
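The study reports, for each of the 29 body sites, the contact force at pain onset and the maximum bearable force, together with their spread across the 40 subjects (Figures 4–6). The snippet below is only a schematic of that bookkeeping with synthetic numbers: it groups per-trial onset and maximum-bearable forces by body site and prints per-site means and standard deviations; the instrumentation and protocol details are in the paper.

# Schematic aggregation of pain-onset and maximum-bearable force thresholds per
# body site (synthetic numbers; the real values come from the algometer trials).
import statistics
from collections import defaultdict

trials = [                                   # (body_site, onset_force_N, max_bearable_force_N)
    ("forearm", 28.0, 75.0),
    ("forearm", 32.5, 81.0),
    ("back_of_hand", 19.0, 55.5),
    ("back_of_hand", 21.5, 60.0),
]

onset, bearable = defaultdict(list), defaultdict(list)
for site, f_onset, f_max in trials:
    onset[site].append(f_onset)
    bearable[site].append(f_max)

for site in onset:
    print(f"{site}: onset {statistics.mean(onset[site]):.1f} "
          f"± {statistics.stdev(onset[site]):.1f} N, "
          f"max bearable {statistics.mean(bearable[site]):.1f} "
          f"± {statistics.stdev(bearable[site]):.1f} N")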