
A flexible wearable e-skin sensing system for robotic teleoperation

Published online by Cambridge University Press:  16 September 2022

Chuanyu Zhong
Affiliation:
Department of Automation, University of Science and Technology of China, Hefei 230026, China
Shumi Zhao
Affiliation:
Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China
Yang Liu
Affiliation:
Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China
Zhijun Li*
Affiliation:
Department of Automation, University of Science and Technology of China, Hefei 230026, China Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China
Zhen Kan
Affiliation:
Department of Automation, University of Science and Technology of China, Hefei 230026, China
Ying Feng
Affiliation:
College of Automation Science and Engineering, South China University of Technology, Guangzhou 510641, China
*
*Corresponding author. E-mail: zjli@ieee.org

Abstract

Electronic skin (e-skin) plays an increasingly important role in health detection, robotic teleoperation, and human-machine interaction, but most current e-skins lack integrated on-site signal acquisition and transmission modules. In this paper, we develop a novel flexible wearable e-skin sensing system with 11 sensing channels for robotic teleoperation. The sensing system consists of three components: the e-skin sensor, a customized flexible printed circuit (FPC), and a human-machine interface. The e-skin sensor has 10 stretchable resistors distributed over the proximal and metacarpal joints of each finger and 1 stretchable resistor at the purlicue. The sensor attaches to the opisthenar, and thanks to its stretchability it can detect the bending angle of each finger. The customized FPC, equipped with a WiFi module, wirelessly transmits the signals to a terminal device running the human-machine interface, for which we design a graphical user interface based on the Qt framework for real-time signal acquisition, storage, and display. Using this e-skin system and a self-developed multi-fingered robotic hand, we conduct gesture recognition and teleoperation experiments with deep learning techniques and obtain a recognition accuracy of 91.22%. The results demonstrate that the developed e-skin sensing system has great potential in human-machine interaction.

Type
Research Article
Copyright
© The Author(s), 2022. Published by Cambridge University Press

1. Introduction

Traditional wearable sensing devices such as myoelectric sensors [Reference Tam, Boukadoum, Campeau-Lecours and Gosselin1], inertial sensors [Reference Shaeffer2, Reference Jain, Semwal and Kaushik3], and tactile sensing arrays [Reference Shao, Hu and Visell4, Reference Ohka, Takata, Kobayashi, Suzuki, Morisawa and Yussof5] are bulky, uncomfortable, and inconvenient to wear, and suffer from high manufacturing cost. With continuous breakthroughs in sensing technology, electronic skin (e-skin) sensors with desirable characteristics have begun to emerge [Reference Hammock, Chortos, Tee, Tok and Bao6, Reference Yang, Mun, Kwon, Park, Bao and Park7], significantly improving the performance of wearable sensing devices.

Currently, e-skin plays an increasingly important role in the emerging field of wearable sensing devices [Reference Yu, Nassar, Xu, Min, Yang, Dai, Doshi, Huang, Song, Gehlhar, Ames and Gao8]. As a new generation of wearable devices, e-skin features flexibility, light weight, comfortable wearing, and strong adhesion. It can not only be integrated into robotic systems to provide rich and diverse information for robot perception, control, and decision-making [Reference Hua, Sun, Liu, Bao, Yu, Zhai, Pan and Wang9] but can also be attached to the surface of the human body to provide diagnosis and monitoring capabilities [Reference Kim, Kim and Lee10]. Furthermore, e-skin can be closely combined with current artificial intelligence techniques [Reference Lee, Heo, Kim, Eom, Jung, Kim, Kim, Park, Mo, Kim and Park11].

In the field of robotic intelligent perception [Reference Shih, Shah, Li, Thuruthel, Park, Iida, Bao, Kramer-Bottiglio and Tolley12], e-skin has a wide range of application prospects, and there has been notable research progress. In ref. [Reference Gu, Zhang, Xu, Lin, Yu, Chai, Ge, Yang, Shao, Sheng, Zhu and Zhao13], a lightweight and soft capacitive e-skin pressure sensor was designed for a neuromyoelectric prosthetic hand, enhancing the speed and dexterity of the prosthesis. In ref. [Reference Boutry, Negre, Jorda, Vardoulis, Chortos, Khatib and Bao14], a soft e-skin consisting of an array of capacitors, capable of real-time detection of normal and tangential forces, was designed for robotic dexterous manipulation of objects. In ref. [Reference Rahiminejad, Parvizi-Fard, Iskarous, Thakor and Amiri15], a biomimetic circuit was developed for e-skin attached to a prosthetic hand, allowing the prosthesis to sense edge stimuli in different directions. In ref. [Reference Liu, Yiu, Song, Huang, Yao, Wong, Zhou, Zhao, Huang, Nejad, Wu, Li, He, Guo, Yu, Feng, Xie and Yu16], an e-skin integrated with solar cells was developed for proximity sensing and touch recognition in a robotic hand. In ref. [Reference Lee, Son, Lee, Kim, Kim, Nguyen, Lee and Cho17], a multimodal e-skin was developed for robotic prostheses, with sensors capable of simultaneously recognizing materials and textures. However, some current research focuses more on the sensor itself [Reference Liu, Yiu, Song, Huang, Yao, Wong, Zhou, Zhao, Huang, Nejad, Wu, Li, He, Guo, Yu, Feng, Xie and Yu16, Reference Lee, Son, Lee, Kim, Kim, Nguyen, Lee and Cho17], ignoring the importance of signal acquisition and analysis. In addition, such e-skins often sense only a single dimension [Reference Chen, Khamis, Birznieks, Lepora and Redmond18] and acquire little information. Therefore, it is necessary to consider not only the performance of the e-skin but also the efficiency of signal acquisition.

Besides intelligent perception for robots, another potential application field of e-skin is human health detection and human-machine interaction (HMI) [Reference Jiang, Li, Xu, Xu, Gu and Shull19–Reference Liu, Yiu, Song, Huang, Yao, Wong, Zhou, Zhao, Huang, Nejad, Wu, Li, He, Guo, Yu, Feng and Xie24]. In ref. [Reference Jiang, Li, Xu, Xu, Gu and Shull19], a stretchable e-skin patch was designed for gesture recognition, achieving high recognition accuracy. In ref. [Reference Tang, Shang and Jiang20], a highly stretchable multilayer electronic tattoo was designed for applications such as temperature regulation, motion monitoring, and robot remote control. In ref. [Reference Sundaram, Kellnhofer, Li, Zhu, Torralba and Matusik21], a flexible and stretchable tactile glove combined with deep convolutional neural networks was used to identify grasped objects and estimate object weight. In ref. [Reference Gao, Emaminejad, Nyein, Challa, Chen, Peck, Fahad, Ota, Shiraki, Kiriya, Lien, Brooks, Davis and Javey22], a flexible and highly integrated e-skin sensor array was designed for sweat detection, integrating signal detection, wireless transmission, and processing. In ref. [Reference Li, Liu, Xu, Wang, Shu, Sun, Tang and Wang23], a stretch-sensing device with a grating-structured triboelectric nanogenerator was designed to detect bending or stretching of the spine, which is helpful for joint health monitoring. In ref. [Reference Liu, Yiu, Song, Huang, Yao, Wong, Zhou, Zhao, Huang, Nejad, Wu, Li, He, Guo, Yu, Feng and Xie24], an e-skin with both sensing and feedback functions was developed for applications such as robotic teleoperation, robotic virtual reality, and robotic healthcare. In some existing applications [Reference Jiang, Li, Xu, Xu, Gu and Shull19, Reference Li, Liu, Xu, Wang, Shu, Sun, Tang and Wang23], the focus is more on the analysis of sensor signals, and real-time applications are relatively few. Meanwhile, some simple applications of e-skin [Reference Liu, Yiu, Song, Huang, Yao, Wong, Zhou, Zhao, Huang, Nejad, Wu, Li, He, Guo, Yu, Feng and Xie24] lack an in-depth analysis and understanding of the sensor signals.

To address the problems mentioned above, a wearable e-skin sensing system characterized by lightness, portability, scalability, and adhesion is developed for robotic hand teleoperation in this paper. Unlike the e-skins mentioned earlier, our system integrates efficient sensing, transmission, and processing functions. First, the fabrication method of the e-skin sensor is introduced in detail. Then, a flexible printed circuit (FPC) with small size and low power consumption is custom-designed for effective signal acquisition. The FPC wirelessly transmits the data to a remote computer through its WiFi module, and a supporting data display and storage interface based on the Qt framework is developed. Next, based on the developed wearable e-skin device, we conduct recognition experiments on 9 types of gestures using deep learning techniques. Finally, based on a self-developed 2-DOF robotic hand platform, a teleoperation experiment is carried out to demonstrate the novelty and potential of the e-skin.

2. E-skin sensor design

The structure of human fingers is delicate and complex: each finger is a multi-joint system composed of multiple bones. According to physiological anatomy [Reference Miyata, Kouchi and Kurihara25], the index, middle, ring, and little fingers are similar in structure, each consisting of three phalanges and one metacarpal bone, while the thumb consists of two phalanges and one metacarpal bone. The bones are connected through joints, and joint movement carries rich information, so effective monitoring of joint activity aids the decoding of hand postures. In this section, we present the design of a multi-channel e-skin sensor for extracting hand feature information, covering the design principle, the manufacturing process, and a preliminary test of the e-skin.

2.1. E-skin design principle

To fully exploit the potential of e-skin in extracting motion information, a stretchable sensor that fits completely on the skin surface is designed. This sensor consists of 11-channel stretchable resistors attached to the major joints of the fingers and the skin surface, as shown in Fig. 1(a). The resistors 1–10 are distributed at the proximal and metacarpal joints of each finger, respectively, and resistor 11 is distributed at the purlicue. Carbon nanotubes are used as the fabrication material for stretchable resistors.

Figure 1. (a) Schematic diagram of e-skin sensor. (b) Wearing diagram of e-skin sensor. (c) Overall manufacturing process of stretchable e-skin sensor. (d, e) Physical drawing of DB100 microelectronic printer.

2.2. E-skin fabrication scheme

For the e-skin sensor fabrication, a microelectronic printer DB100 from Shanghai Mifang Electronic Technology Co., Ltd is used to print the structure.

The fabrication can be divided into the following steps:

  • Polydimethylsiloxane (PDMS) is coated on a polyethylene terephthalate (PET) membrane and dried in a drying box to form the PDMS substrate.

  • The fabricated PDMS substrate is placed flat on the operation platform of the DB100 microelectronic printer, as shown in Fig. 1(d), (e).

  • The printing parameters are properly set using the DB100 supporting software to print and draw silver wires on the PDMS substrate.

  • Finally, a 5% dispersion of carbon nanotubes in stretchable silica gel is uniformly spread over the PDMS substrate by template printing, forming the 11-channel stretchable e-skin.

The overall fabrication process is shown in Fig. 1(c), and the final fabricated e-skin sensor is shown in Fig. 1(b).

2.3. Performance test of e-skin

After manufacturing, it is necessary to test the stretching and electrical properties of the e-skin sensor. Here, we test the stretch response of a single channel (the other channels behave similarly). The stretching property is tested with an Instron 5565 tester (Instron 5565, Instron (Shanghai) LTD., USA), with the stretch ratio set to 150%. The electrical response is measured with a Keithley digital multimeter (DMM7510, Tektronix). The cyclic stretching result is shown in Fig. 2: the sensing unit remains relatively stable over a large number of repeated stretching cycles.

Figure 2. Graph of the electrical properties of the e-skin under cyclic stretching conditions.

3. Wireless transmission interface design

For e-skin sensor monitoring, we customize a data acquisition system to transmit the on-site signals. Wireless communication plays an important role in the acquisition system, and we choose WiFi as the communication channel. Since a traditional printed circuit board is uncomfortable to wear due to its rigidity, an FPC is employed in this work.

3.1. Chip solution selection

The FPC mainly involves the microcontroller unit (MCU), the wireless data transmission module, the power management module, the analog-to-digital conversion (ADC) module, etc., of which the MCU is the core. Considering cost and power consumption, an STM32G431CBT6 chip in an LQFP-48 package is chosen as the MCU. This chip provides a 170 MHz ARM Cortex-M4 core with DSP instructions and an FPU, plus 128 KB of flash memory. Its built-in ADC modules are used to measure the resistances. It also offers abundant peripherals such as UART, SPI, and timers, which fully meet the system requirements.

In order to ensure the reliability and convenience of data transmission, W600-A800, a low-power WiFi chip, is chosen as the data transmission module of our circuit to realize the communication between e-skin sensor and data receiving terminal.

After the selection of the MCU and WiFi module, an appropriate power management chip is needed to power the STM32 and WiFi chips. A widely used rechargeable 3.7 V lithium battery is chosen as the power source. Since the STM32 and WiFi chips require a stable 3.3 V supply, an RT9013-33 regulator is selected to generate 3.3 V from the battery. For convenience of calculation, a TL431 chip outputs a stable 3 V reference voltage to the ADC modules.

3.2. Circuit schematic design

The 11 resistor channels of the e-skin sensor are measured by the two built-in ADCs of the STM32. The overall circuit connection and measurement principle are shown in Fig. 3(a). The resistors are connected in a voltage-divider manner: for each channel, a reference resistor is placed in series with the resistor to be measured. Since the built-in ADCs are 12-bit, a full-scale reading of 4096 corresponds to the 3 V reference voltage. Thus, the resistance of each channel can be obtained:

(1) \begin{equation}{R_c} ={{{R_f}} \Big/{\left ({\frac{{4096}}{{{V_{\text{adc}}}}} - 1} \right )}} \end{equation}

where $ R_c$ represents the resistance of each stretchable resistor, $ R_f$ represents the reference resistance welded on the FPC, and $V_{\textrm{adc}}$ represents the reading value of the built-in ADC.
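Eq. (1) can be sketched as a small Python helper; the function and parameter names are illustrative, not from the firmware.

```python
def channel_resistance(v_adc: int, r_ref: float) -> float:
    """Recover a stretchable resistor's value from a 12-bit ADC code.

    Implements Eq. (1): R_c = R_f / (4096 / V_adc - 1), where the code
    V_adc is read against the 3 V reference in a voltage divider and
    r_ref is the series reference resistor R_f soldered on the FPC.
    """
    if not 0 < v_adc < 4096:
        raise ValueError("ADC code must lie between 1 and 4095")
    return r_ref / (4096 / v_adc - 1)

# A mid-scale reading (2048) means the divider is balanced: R_c == R_f.
print(channel_resistance(2048, 10_000.0))  # → 10000.0
```

A sanity check like this also makes the divider's limits visible: codes near 0 or 4095 map to extreme resistances, so the reference resistor should be chosen near the middle of the sensor's resistance range.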

Figure 3. (a) Circuit connection and signal transmission diagram. (b) Overview of the designed FPC.

The circuit schematic of the e-skin monitoring unit is then designed based on the above analysis. In addition to the basic chip interfaces, an SWD interface for program debugging and a sensor interface for connecting the e-skin are reserved. The physical circuit after soldering and testing is shown in Fig. 3(b).

3.3. Acquisition circuit programming

To realize real-time measurement of the e-skin sensor by the designed FPC, the STM32CubeMX software is used to develop the firmware configuration. The configuration mainly includes the following steps: first, the corresponding MCU is selected and its clock tree is configured; all peripherals use the default 170 MHz clock. The initialization parameters of the peripheral modules, such as the UART, ADCs, and timer, are then set: the UART baud rate is configured as 115200 for the WiFi chip, and the two ADCs are configured in multi-channel scan query mode.

To achieve efficient transmission and obtain enough observation data, the timer period is configured as 10 ms; that is, the ADC collects a group of data every 10 ms (an acquisition frequency of 100 Hz). In addition, to limit network load, 10 groups of data are packaged and sent to the receiving terminal every 100 ms. This batching avoids the packet loss and retransmission caused by overly frequent transmissions, making the transmission process reliable and stable. Finally, the initialization project code is generated for the Keil integrated development environment. The overall processing flow is shown in Fig. 4.

Figure 4. Program flow chart of STM32.
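The batching schedule described above can be mimicked in a short Python sketch; `batch_samples` and the constants are illustrative stand-ins, not the STM32 code.

```python
from typing import Iterable, Iterator, List

SAMPLE_PERIOD_MS = 10    # ADC scans all channels every 10 ms (100 Hz)
GROUPS_PER_PACKET = 10   # one WiFi packet every 100 ms

def batch_samples(samples: Iterable[List[int]],
                  groups_per_packet: int = GROUPS_PER_PACKET
                  ) -> Iterator[List[List[int]]]:
    """Accumulate per-10-ms channel scans and emit one packet per
    10 groups, mirroring the firmware's send schedule."""
    packet: List[List[int]] = []
    for scan in samples:
        packet.append(scan)
        if len(packet) == groups_per_packet:
            yield packet
            packet = []

# 25 scans of an 11-channel sensor yield two full packets; the last
# 5 scans stay buffered until further timer ticks fill the packet.
scans = [[0] * 11 for _ in range(25)]
packets = list(batch_samples(scans))
print(len(packets), len(packets[0]))  # → 2 10
```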

4. Data visualization interface design

In this section, a multi-functional graphical user interface (GUI) based on the Qt framework is developed for real-time data interaction, with functions including data visualization, multi-process communication, and button operation. The stability of this GUI in processing sensor signals is verified by grasping experiments with objects of different shapes and sizes.

4.1. Qt-based GUI design

This GUI consists of three parts, as shown in Fig. 5: the numerical display in the top green box shows the received resistance values in real time, the scrolling curves in the middle red box plot the data as they arrive, and the buttons at the bottom right control data saving and WiFi communication.

Figure 5. GUI with data display and save function developed based on Qt framework.

The system requires some network configuration, such as the IP address and port number. Since the FPC sends data every 100 ms and each packet contains 10 groups of measurements, we display only the first group and save all 10 groups locally. In this way, real-time display is guaranteed without losing any data.
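The display-one/save-all policy can be sketched in Python; `handle_packet`, `display`, and the CSV log are hypothetical stand-ins for the Qt slot and local file, not the paper's code.

```python
import csv
import io
from typing import Callable, List, Sequence

def handle_packet(packet: Sequence[Sequence[float]],
                  display: Callable[[Sequence[float]], None],
                  writer) -> None:
    """Display only the first group of each 100 ms packet, save all 10.

    `display` stands in for the Qt slot that refreshes the numerical
    readout; `writer` is a csv.writer over the local log file.
    """
    display(packet[0])        # GUI refreshes at the 10 Hz packet rate
    writer.writerows(packet)  # but the full 100 Hz stream is persisted

shown: List[Sequence[float]] = []
buf = io.StringIO()
packet = [[float(i)] * 11 for i in range(10)]  # 10 groups x 11 channels
handle_packet(packet, shown.append, csv.writer(buf))
print(len(shown), buf.getvalue().count("\r\n"))  # → 1 10
```

This keeps the GUI responsive at 10 Hz while the saved log still contains every 10 ms sample for offline analysis.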

4.2. Grasp test of different objects

Different gestures of the subject lead to distinct signals, so the performance of the e-skin system can be verified by how well signals from different gestures are discriminated. For verification, we conduct a test in which the subject grasps objects of various sizes and shapes: a hamburger, a carton, an apple, a banana, and a water cup.

The grasping test results are shown in Fig. 6. For objects of different shapes, such as the apple, banana, and water cup, the e-skin responses differ clearly across multiple channels. For objects of similar shape, such as the apple and hamburger, the grasping response curves are also distinct due to their different sizes. This not only verifies the good performance of the e-skin sensor itself but also demonstrates the stability and reliability of the signal acquisition and transmission process.

Figure 6. Resistance response ratio of each channel of the e-skin when grasping different objects.

5. Gesture recognition experiments

In this section, we test and verify the designed e-skin sensing system. Since the e-skin conforms to the hand, we design a gesture recognition experiment around its characteristics. The collected signals are time sequences, with an acquisition frequency of 100 Hz and a transmission frequency of 10 Hz, so deep learning techniques such as the long short-term memory (LSTM) neural network are used to identify them. Before the experiment, the acquired raw signals are preprocessed.

5.1. Signals preprocessing

A sliding window method is used to segment the original signals into training and testing samples suitable for the LSTM network. A continuous period of e-skin data is intercepted from $T_1$ ms to $T_2$ ms; the amount of data obtained on each channel is then $N$, satisfying:

(2) \begin{equation} N = \left ({{T_2} -{T_1}} \right ) \cdot F/1000 \end{equation}

where $F$ represents the sampling frequency (in Hz) at which the FPC samples the e-skin. Assuming the number of channels is $C$, the obtained original data satisfy $x_{\textrm{sample}} \in{R^{N \times C}}$.

Figure 7. Schematic diagram of sliding window.

After the original data are obtained, a sliding window segments the data into multiple sub-samples; the specific segmentation is shown in Fig. 7. Let the window length be $ W$, the overlap between adjacent windows be $ \lambda$ (so the sliding step is $ W - \lambda$), and the number of sub-samples after segmentation be $ L$; then:

(3) \begin{equation} LW - \left ({L - 1} \right )\!\lambda \le N \end{equation}

from which the number of sub-samples of e-skin data is $ L = \left [{\frac{{N - \lambda }}{{W - \lambda }}} \right ]$, where $ \left [ \cdot \right ]$ denotes the largest integer not greater than its argument. The resulting sub-samples satisfy ${x_{\text{sub-sample}}} \in{R^{W \times C}}$.
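A sketch of this segmentation, reading $\lambda$ as the overlap between adjacent windows (which Eq. (3) and the expression for $L$ imply, giving a stride of $W - \lambda$); the function name and sizes are illustrative.

```python
import numpy as np

def sliding_windows(x: np.ndarray, W: int, lam: int) -> np.ndarray:
    """Segment an (N, C) e-skin recording into overlapping sub-samples.

    With window length W and overlap lam between adjacent windows,
    the stride is W - lam and the number of windows is
    L = floor((N - lam) / (W - lam)), matching Eq. (3).
    """
    N = x.shape[0]
    stride = W - lam
    L = (N - lam) // stride
    return np.stack([x[i * stride : i * stride + W] for i in range(L)])

x = np.arange(100 * 11).reshape(100, 11)   # N=100 samples, C=11 channels
subs = sliding_windows(x, W=20, lam=10)    # 50% overlap between windows
print(subs.shape)  # → (9, 20, 11)
```

Each sub-sample has shape $(W, C)$, ready to feed an LSTM whose time step equals $W$.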

5.2. LSTM neural network algorithm

The LSTM network is a special type of recurrent neural network (RNN). An ordinary RNN cannot handle the long-term dependencies of time sequence signals in practice, while the LSTM can, owing to its gated structure. LSTM networks are widely used in speech recognition, machine translation, text generation, and other fields, so this algorithm is chosen to recognize our e-skin signals.

The basic principle of LSTM neural network is shown in Fig. 8. Its internal mechanism is to regulate the information flow through forget gate, input gate, and output gate. Each LSTM cell has its own unit state ${c_t}$ , and LSTM uses forget gate and input gate to control the content of the unit state ${c_t}$ at the current moment.

Figure 8. Schematic diagram of LSTM network.

The forget gate determines how much of the unit state ${c_{t-1}}$ at the previous moment is retained to the current moment ${c_t}$ . The output of the forget gate is:

(4) \begin{equation}{f_t} = \sigma \!\left ({{W_f} \cdot \left [{{h_{t - 1}},{x_t}} \right ] +{b_f}} \right ) \end{equation}

where ${\sigma }$ represents sigmoid activation function, ${W_f}$ is the weight matrix of the forget gate, ${h_{t-1}}$ represents the output of the previous moment, ${x_t}$ represents the input of the current moment, and ${b_f}$ is the bias term of the forget gate.

The input gate determines how much of the input of the network at the current time is retained to the current state ${c_t}$ , and the output of the input gate is:

(5) \begin{equation}{i_t} = \sigma \!\left ({{W_i} \cdot \left [{{h_{t - 1}},{x_t}} \right ] +{b_i}} \right ) \end{equation}

where ${W_i}$ is the weight matrix of the input gate and ${b_i}$ is the bias term of the input gate. In addition, the candidate state $\tilde c_t$ summarizes the current input:

(6) \begin{equation}{\tilde c_t} = \tanh \!\left ({{W_c} \cdot \left [{{h_{t - 1}},{x_t}} \right ] +{b_c}} \right ) \end{equation}

where ${W_c}$ is the weight matrix of the current input, and ${b_c}$ is the bias term of the current input.

The forget gate, the input gate and the state of the current input ${\tilde c_t}$ jointly determine the unit state at the current moment ${c_t}$ ,

(7) \begin{equation}{c_t} ={f_t} \circ{c_{t - 1}} +{i_t} \circ{\tilde c_t} \end{equation}

where $ \circ$ represents the element-wise (Hadamard) product. Through this combination, the LSTM adds the current memory to the long-term memory, forming the new memory state ${c_t}$.

Then, the output gate controls the updated memory state, and the output of output gate is:

(8) \begin{equation}{o_t} = \sigma \!\left ({{W_o} \cdot \left [{{h_{t - 1}},{x_t}} \right ] +{b_o}} \right ) \end{equation}

where $ W_o$ is the weight matrix of the output gate and $ b_o$ is the bias term of the output gate. The output of the LSTM unit $ h_t$ is finally determined by the output gate $ o_t$ and the new memory $ c_t$ :

(9) \begin{equation}{h_t} ={o_t} \circ \tanh \!\left ({{c_t}} \right ) \end{equation}
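Eqs. (4)–(9) can be checked with a direct NumPy implementation of one LSTM cell update; the 11-input/32-hidden sizes and parameter names are illustrative, not the trained network's values.

```python
import numpy as np

def sigmoid(z: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, params):
    """One LSTM cell update implementing Eqs. (4)-(9).

    `params` maps gate names to (W, b); each W acts on the
    concatenation [h_{t-1}, x_t], and the text's '∘' is the
    element-wise product.
    """
    z = np.concatenate([h_prev, x_t])
    f = sigmoid(params["f"][0] @ z + params["f"][1])        # forget gate, Eq. (4)
    i = sigmoid(params["i"][0] @ z + params["i"][1])        # input gate,  Eq. (5)
    c_tilde = np.tanh(params["c"][0] @ z + params["c"][1])  # candidate,   Eq. (6)
    c = f * c_prev + i * c_tilde                            # new state,   Eq. (7)
    o = sigmoid(params["o"][0] @ z + params["o"][1])        # output gate, Eq. (8)
    h = o * np.tanh(c)                                      # output,      Eq. (9)
    return h, c

# 11 input channels (one per e-skin resistor), 32 hidden units.
rng = np.random.default_rng(0)
d, H = 11, 32
params = {g: (rng.normal(scale=0.1, size=(H, H + d)), np.zeros(H))
          for g in ("f", "i", "c", "o")}
h, c = lstm_step(rng.normal(size=d), np.zeros(H), np.zeros(H), params)
print(h.shape, c.shape)  # → (32,) (32,)
```

Because $o_t \in (0,1)$ and $\tanh(c_t) \in (-1,1)$, every component of $h_t$ stays strictly inside $(-1, 1)$.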

5.3. Experimental scheme

To verify the potential of the developed e-skin, we design multiple gesture recognition scenes. The gesture dataset is divided into static gestures (SG) and dynamic gestures (DG). The SG comprise four categories: the flexion and extension of the index finger and thumb, and the flexion and extension of the remaining three fingers, as shown in Fig. 9: SG1, SG2, SG3, and SG4. The DG comprise five categories, defined as gestures generated by continuous hand movement; the gesture-state transitions are shown in Fig. 9: DG1, DG2, DG3, DG4, and DG5.

Figure 9. LSTM network structure for five dynamic gestures and four static gestures recognition.

To recognize SG and DG simultaneously, we design an LSTM network adapted to the e-skin signal to decode the different gestures. The structure of the network is shown in Fig. 9. The time step $ t$ is set to 20; that is, the input dimension of the network is $ 20 \times 11$, corresponding to 200 ms of data at the 100 Hz acquisition rate. A 2-layer LSTM network is designed, with a fully connected layer and a softmax layer after the last time step. The network is built on the open-source Keras deep learning framework.

A participant whose opisthenar matches the size of the designed e-skin takes part in the experiment. The participant wears the e-skin and repeats the different movements. Finally, the designed LSTM network is trained and tested on the constructed samples, achieving a recognition accuracy of 91.22%. The confusion matrix over all movements is shown in Fig. 10.

Figure 10. Confusion matrix diagram for dynamic gesture recognition.

6. Robotic hand teleoperation

To further verify the practical application potential of the e-skin, we conduct an interaction experiment between the robotic hand and the human hand based on this e-skin, using a finite state machine (FSM). Specifically, we associate the previously recognized SG with postures of the robotic hand. According to the recognition result, the robotic hand completes the corresponding action and switches accurately between states, allowing the participant to teleoperate the robotic hand effectively.

6.1. Robotic hand platform

We have independently developed a multi-fingered hand platform; its internal structure is shown in Fig. 11(a). The robotic hand has 2 degrees of freedom: the index finger and thumb flex and extend together, as do the remaining three fingers, driven by 5 W and 2 W motors, respectively. The force-bearing parts of the hand are made of aluminum, and the shell is 3D-printed resin, a design that provides strong mechanical properties while reducing weight.

Figure 11. (a) Internal structure of the robotic hand. (b) Implementation diagram of robotic hand teleoperation.

6.2. FSM based robotic teleoperation

To match the gesture recognition results, we associate the four postures POS1, POS2, POS3, and POS4 of the robotic hand with the static gestures SG1, SG2, SG3, and SG4. The DG serve as priors for static-gesture transitions, preventing misoperation caused by misrecognition. The state transition diagram mapping gestures to robotic hand postures is shown in Fig. 12. The robotic hand adopts PD-based position control.

Figure 12. State transition diagram of gestures and robotic hand.
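Since the exact edges of Fig. 12 are not reproduced in the text, the transition table below is a hypothetical sketch of the FSM logic only: dynamic gestures trigger posture switches, while any other input (including a misrecognized static gesture) leaves the state unchanged.

```python
# Illustrative transition table; the real edges follow Fig. 12.
TRANSITIONS = {
    ("POS1", "DG1"): "POS2",
    ("POS2", "DG2"): "POS4",
    ("POS4", "DG3"): "POS3",
    ("POS3", "DG4"): "POS1",
    ("POS1", "DG5"): "POS4",
}

def step(state: str, gesture: str) -> str:
    """Advance the FSM only on a recognized dynamic gesture; anything
    else leaves the robotic hand's posture unchanged."""
    return TRANSITIONS.get((state, gesture), state)

state = "POS1"
for g in ("DG1", "DG2", "SG3", "DG3"):  # SG3 alone cannot switch states
    state = step(state, g)
print(state)  # → POS3
```

Gating posture changes on dynamic gestures is what makes a single misrecognized static gesture harmless: it cannot move the FSM on its own.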

The specific implementation of the robotic teleoperation is shown in Fig. 11(b), which is mainly divided into the following steps:

  • The WiFi communication between the e-skin sensor terminal and Qt-based HMI software terminal is established to ensure that HMI software terminal can collect signals in real time.

  • The socket communication between the HMI software and the signal processing terminal in Python is established. The pre-trained LSTM classification model is loaded in advance, and online recognition and analysis of the gesture signals are performed.

  • Finally, the socket communication between the HMI software and the MFC-based control terminal is established, and the recognition results are transmitted to the robot to realize the state switching and follow-up of the robot.
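The final socket hop might look like the following minimal Python sketch; the loopback server, port, and one-byte message format are assumptions for illustration, not the paper's protocol.

```python
import socket
import threading

def send_result(label: int, host: str, port: int) -> None:
    """Forward one recognition result to the robot control terminal
    as a single byte over TCP."""
    with socket.create_connection((host, port), timeout=1.0) as sock:
        sock.sendall(bytes([label]))

# Loopback demo standing in for the MFC-based control terminal.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)

received = []

def control_terminal():
    conn, _ = server.accept()
    received.append(conn.recv(1)[0])  # decode the gesture label
    conn.close()

t = threading.Thread(target=control_terminal)
t.start()
send_result(2, "127.0.0.1", server.getsockname()[1])
t.join()
server.close()
print("robot received gesture", received[0])  # → robot received gesture 2
```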

We select SG1 as the initial gesture state and then test the state-following performance of the e-skin wearer in the order SG1-SG2-SG4-SG3-SG1-SG4-SG1. Across several tests, the robotic hand effectively follows the gestures and switches states with an average delay of less than 1 s. In addition, the misrecognition rate is low, and false actions of the robotic hand are rarely observed.

When the robotic fingers are in the open or closed state, the relative positions of the driving motors are defined as 0 and 1, respectively. The measured position displacements of the driving motors during state switching are shown in Fig. 13, which shows that the developed robotic hand achieves good position control and moves to the commanded position rapidly.

Figure 13. The position displacements of the driving motors in different states.

7. Discussion and conclusions

This paper has developed a stretchable and portable e-skin sensing system comprising an e-skin sensor, an FPC with WiFi communication, and a human-machine interface. The e-skin sensor attaches to the opisthenar to detect the fingers' flexion and extension. The FPC acquires the sensor signals and wirelessly transmits them to the terminal device, where our human-machine interface processes the data.

An LSTM neural network is used to classify the collected e-skin signals and achieves 91.22% recognition accuracy over 9 kinds of gestures. Unlike conventional vision sensors, the e-skin is unaffected by lighting and environment, and is superior in cost and recognition stability. Based on the pre-trained LSTM model, we conduct robotic hand teleoperation experiments using the FSM, validating the application potential of the developed e-skin for efficient control of robotic hand poses.

In the future, we will analyze the mechanical and electrical properties of the developed e-skin sensor in detail, optimize its performance, and explore the relationship between the deformation of the e-skin and the flexion of the fingers.

Acknowledgements

This work was supported in part by the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant XDA16021200, in part by the National Natural Science Foundation of China under Grant 62133013 and Grant U1913601, and in part by the National Key Research and Development Program of China under Grant 2018AAA0102900 and Grant 2021YFF0501600.

Authors’ contributions

All the authors have made great contributions to this paper.

Conflicts of interest

The authors declare none.

Figure 1. (a) Schematic diagram of e-skin sensor. (b) Wearing diagram of e-skin sensor. (c) Overall manufacturing process of stretchable e-skin sensor. (d, e) Physical drawing of DB100 microelectronic printer.

Figure 2. Graph of the electrical properties of the e-skin under cyclic stretching conditions.

Figure 3. (a) Circuit connection and signal transmission diagram. (b) Overview of the designed FPC.

Figure 4. Program flow chart of STM32.

Figure 5. GUI with data display and save function developed based on Qt framework.

Figure 6. Resistance response ratio of each channel of the e-skin when grasping different objects.

Figure 7. Schematic diagram of sliding window.

Figure 8. Schematic diagram of LSTM network.

Figure 9. LSTM network structure for five dynamic gestures and four static gestures recognition.

Figure 10. Confusion matrix diagram for dynamic gesture recognition.

Figure 11. (a) Internal structure of the robotic hand. (b) Implementation diagram of robotic hand teleoperation.

Figure 12. State transition diagram of gestures and robotic hand.