Review

Deep-Learning-Based Analysis of Electronic Skin Sensing Data

Collaborative Innovation Center of Advanced Microstructures, School of Electronic Science and Engineering, Nanjing University, Nanjing 210093, China
* Authors to whom correspondence should be addressed.
Sensors 2025, 25(5), 1615; https://doi.org/10.3390/s25051615
Submission received: 28 January 2025 / Revised: 26 February 2025 / Accepted: 3 March 2025 / Published: 6 March 2025
(This article belongs to the Special Issue Analyzation of Sensor Data with the Aid of Deep Learning)
Figure 3. Temperature sensors based on bionic design: (a) jellyfish-inspired sensor in which machine learning decouples temperature and pressure from capacitance and resistance signals [64]; (b) preparation of the DMSTS based on a centipede's foot and schematic of its bionic sensing layer [65].
Figure 4. Design of the nepenthes-inspired hydrogel (NIH) hybrid system for ECG recording, including the microstructured hydrogel/skin interface and the system electronics [74].
Figure 5. Human activity recognition and user identification using a 1D-CNN on insole TENG sensor data, with 99% accuracy for both activity and user prediction [34].
Figure 6. Facial EMG monitoring by PLPG and machine learning for emotion analysis [166]: YOLOv3 backbone network, training loss, and categorization results for four perspiration categories.
Figure 7. Signal decoupling and simultaneous recognition model: 1D-CNN-based architecture, sixteen standard objects from four materials and four textures, decoupled features, and confusion matrices for material, texture, and merged recognition [45].
Figure 8. Hand gesture recognition by a deep-learning-based algorithm: DCNN workflow, accuracy and loss curves, and confusion matrices for DCNNs, support vector machines, and K-nearest neighbors [10].
Figure 9. Facial EMG monitoring by PLPG and machine learning for emotion analysis: fEMG acquisition, representative signals for positive and negative emotions, classification workflow, and LSTM identification results [11].
Figure 10. ML-enabled automatic recognition system for grasped objects: 1D-CNN framework, TENG spectra for spherical, oval, and elongated objects, t-SNE visualization, and confusion maps [173].

Abstract

E-skin is an integrated electronic system that can mimic the perceptual ability of human skin. Traditional analysis methods struggle to handle complex e-skin data, which include time series and multimodal patterns, especially when intricate signals and real-time responses are involved. Recently, deep learning techniques, such as convolutional neural networks, recurrent neural networks, and transformers, have provided effective solutions that can automatically extract data features and recognize patterns, significantly improving the analysis of e-skin data. Deep learning is not only capable of handling multimodal data but can also provide real-time responses and personalized predictions in dynamic environments. Nevertheless, problems such as insufficient data annotation and high demand for computational resources still limit the application of e-skin. Optimizing deep learning algorithms, improving computational efficiency, and exploring hardware–algorithm co-design will be key to future development. This review aims to present the deep learning techniques applied in e-skin and provide inspiration for subsequent researchers. We first summarize the sources and characteristics of e-skin data and review the deep learning models applicable to e-skin data and their applications in data analysis. We then discuss the use of deep learning in e-skin, particularly in health monitoring and human–machine interaction, and explore the current challenges and future development directions.

1. Introduction

Electronic skin is an integrated flexible electronic system that mimics the sensory functions of human skin. Compared with conventional rigid devices, electronic skin has better breathability and flexibility and can form a seamless fit with the skin, reducing wearer discomfort [1]. In addition, like human skin, it can be made self-healing and biocompatible. Highly integrated e-skin can detect multimodal signals and, through wireless communication, enable the stable and continuous measurement of multiple signals over long periods. This has led to its wide use in health monitoring [2,3,4,5], human–computer interaction [6,7,8,9], and other areas. By monitoring physiological parameters such as heart rate, blood oxygen, and muscle activity, e-skin can detect and intervene in potential health problems at an early stage or provide health coaches with data for designing scientific training programs. E-skin can also accurately sense temperature, tactile signals, and other aspects of the external environment, which is key for developing robots with diverse capabilities. This enables complex real-world applications such as human–machine emotional interaction and gesture recognition [10,11].
The realization of the practical functions of e-skin depends heavily on the efficient processing and interpretation of complex sensing data. The sensing data of electronic skin are typically high-dimensional and temporal in nature, and they are often plagued by noise and artifacts [12,13]. Commonly used data analysis methods, such as those based on statistical and frequency-domain analysis, can efficiently process single-modal or linear signals [14], but their classification accuracies are generally below 80% for multimodal fusion and dynamic environments (e.g., the traditional machine learning methods KNN and SVM are only 79% and 70% accurate, respectively, in complex gesture recognition [10]). Moreover, traditional methods rely on manual feature engineering, which makes it difficult to adapt to dynamic conditions such as changes in wearing position [15].
With the rapid development of artificial intelligence technology, deep-learning-based algorithms provide an effective solution for complex data analysis. Deep learning breaks through the bottleneck of traditional methods through hierarchical feature abstraction, the core of which lies in the ability of the multilayer neural network structure to automatically learn hierarchical feature representations in data. Convolutional neural networks (CNNs) can automatically extract spatially distributed features of sensor arrays by exploiting the properties of local connectivity and weight sharing [16], and long short-term memory networks (LSTMs) can effectively model the temporal dependence of bioelectrical signals through their unique gating mechanism [17]. These deep learning models benefit from backpropagation algorithms and gradient descent optimization methods to extract features and recognize patterns from complex data without human intervention. In addition, the popularity of deep learning frameworks such as TensorFlow and PyTorch (version 2.5) has made model training and deployment easier. This flexibility allows deep learning to effectively process multimodal data from sensors such as pressure, temperature, and bioelectricity, providing more accurate analysis and prediction for a variety of application scenarios (Figure 1) [18,19,20]. In addition, deep-learning-based methods also assist electronic skin in adapting to dynamic environments more efficiently, supporting real-time responses and personalized prediction. For example, Yang et al. achieved an average accuracy of 91% in laryngeal motion speech recognition using the AlexNet model, representing a 15% improvement over traditional methods [21]. In addition, a graph neural network (GNN) was successfully applied to model the topology of flexible sensor networks, significantly improving the robustness of human pose reconstruction [22].
Deep learning has significantly enhanced the capabilities of e-skin in multimodal data processing and pattern recognition, but its practical application still faces several challenges. First, the lack of high-quality labeled data restricts model generalization. Existing datasets are mostly confined to laboratory environments with small sample sizes, far from the tens of thousands of human subjects required for clinical-grade applications [23]. Researchers are working on the construction of multimodal, large-scale datasets and hope to enhance model robustness by simulating diverse physiological data from real-life scenarios. In addition, techniques such as transfer learning and reinforcement learning can be used to exploit the limited labeled data effectively and improve the adaptability of models across tasks [24,25]. Second, the tension between model computational intensity and device power consumption also restricts deployment, especially on embedded platforms. The inference latency of conventional deep learning models such as ResNet-50 can reach hundreds of milliseconds, making it difficult to meet real-time interaction needs. To address this challenge, two optimization schemes have been proposed. At the algorithmic level, knowledge distillation and model compression (e.g., quantization and pruning) can reduce model parameters while maintaining high accuracy [26]. At the hardware level, in-memory computing chips reduce energy consumption and improve the energy efficiency ratio [27]. Furthermore, the lack of personalized adaptation mechanisms leads to unstable model performance across individuals [28]. To this end, federated learning addresses multicenter data collaboration through distributed training and privacy-preserving techniques, while personalized modeling optimization algorithms (e.g., FedProx) improve model robustness across individuals [29]. These issues limit the widespread application of e-skin in fields such as health monitoring and robotic interaction. In light of the above problems, this review comprehensively introduces deep learning analysis techniques for electronic skin sensing data, aiming to provide a reference for the integration of artificial intelligence and electronic skin.
In this paper, we first introduce the data sources of e-skin sensors and analyze the unique characteristics of these data. Next, several commonly used deep learning models and methods are introduced, and the key steps of data preprocessing and feature extraction are analyzed in depth. Then, we summarize the research progress of e-skin in the fields of health monitoring and human–machine interaction in recent years and discuss its innovations and achievements in practical applications. Finally, the opportunities and challenges faced by deep-learning-based e-skin and possible solutions are discussed.
Figure 1. Overview diagram of the sensors, process flows, and applications of deep-learning-based e-skin. Image representing pressure was reproduced with permission from ref [30]. Copyright 2023, Springer Nature. Image representing temperature was reproduced with permission from ref [31]. Copyright 2021, Springer Nature. Image representing electrophysiological signals was reproduced with permission from ref [32]. Copyright 2023, John Wiley and Sons. Image representing electrophysiological signals was reproduced with permission from ref [33]. Copyright 2021, John Wiley and Sons. Left image representing health monitoring was reproduced with permission from ref [34]. Copyright 2023, Elsevier. Right image representing health monitoring was reproduced with permission from ref [35]. Copyright 2022, John Wiley and Sons. Left image representing human–machine interaction was reproduced with permission from ref [10]. Copyright 2024, John Wiley and Sons. Right image representing human–machine interaction was reproduced with permission from ref [36]. Copyright 2022, Elsevier.

2. Data Sources of Electronic Skin

The primary function of e-skin lies in the effective sensing and collection of various types of data, enabled by a variety of sensor types. By capturing physical, biological, and chemical signals, these sensors provide rich data for deep learning models, thus supporting the widespread application of e-skin in fields such as healthcare [37,38], robotics [39,40], and virtual reality [41,42]. In the following, we take a detailed look at common flexible pressure, temperature, electrophysiological, and optical sensors and explore their characteristics in data acquisition as well as their relationship with deep learning analysis.

2.1. Pressure Sensors: Decoding Tactile Patterns

Pressure sensing is one of the core functions of e-skin, and pressure sensors are mainly classified into four categories: resistive, capacitive, piezoelectric, and triboelectric [43,44,45,46] (Figure 2). Both resistive and capacitive sensors transduce the deformation caused by external mechanical forces into changes in resistance or capacitance. Resistive sensors are usually based on intrinsically conductive elastomeric materials or on composites of conductive materials with elastomers and other stretchable materials [47]. Capacitive sensors usually consist of a pair of electrodes and a dielectric layer sandwiched between them. Conductive materials such as metal films, metal nanowires, graphene, carbon black, and carbon nanotubes have been widely used to prepare electrodes for capacitive electronic skin, and the dielectric layers usually consist of elastic dielectric materials with micro/nanostructures. Such microstructures can greatly enhance the sensitivity of both types of sensors. Zhang et al. developed ultrathin, ultralight, and gas-permeable versatile electrospun micropyramid arrays through a self-assembly technology based on wet heterostructured electrified jets [48]. The capacitive sensor based on this microstructure has a high sensitivity (19 kPa⁻¹), an ultralow detection limit (0.05 Pa), and an ultrafast response (≤0.8 ms).
Piezoelectric and triboelectric sensors are relatively newer sensing approaches. Compared with traditional resistive and capacitive sensors, these two sensing methods offer faster response times and higher output power. In piezoelectric sensors, an externally applied mechanical force changes the dipole deflection or dipole moment of the material, producing a change in the piezoelectric potential on the material surface. Triboelectric sensors, on the other hand, are based on the coupled effects of contact electrification and electrostatic induction and use an external mechanical force to create an electrode potential difference for sensing. Piezoelectric sensors must be fabricated from piezoelectric ceramics (e.g., barium titanate, lead zirconate titanate, and sodium and potassium niobates [49,50,51]) or polymers (e.g., polyvinylidene fluoride (PVDF) and its copolymers poly(vinylidene fluoride-trifluoroethylene) (P(VDF-TrFE)) and poly(vinylidene fluoride-hexafluoropropylene) (P(VDF-HFP)) [52,53,54]), whereas triboelectric sensors have no such material restriction: solids, liquids, and gases can all undergo triboelectrification when brought into contact with each other and can therefore serve as triboelectric sensing media.
Of course, a single sensor or sensing mode cannot meet the demands of large-scale sensing tasks and practical applications. Arrays of pressure sensors in e-skin can perceive tactile information such as contact strength, distribution, and dynamic changes, and the data from these arrays are presented as high-resolution two-dimensional matrices. Sundaram et al. developed a scalable tactile glove [55]. A large-scale tactile dataset containing 135,000 frames was obtained by collecting the tactile maps generated by 548 pressure sensors on the glove during grasping actions. A convolutional neural network linked the temporal and spatial relationships among the tactile signals and successfully recognized the grasping patterns of 26 objects. This joint physical–virtual optimization framework significantly improves the spatio-temporal resolution of haptic feedback. More complex pressure sensor designs introduce multilayer structures or optimized flexible materials. Using a new composite of elastic polymers and metal particles, Guo et al. proposed self-healing, artificially innervated foam piezoresistive tactile sensors [56], which can measure vertical pressure and shear direction simultaneously. These high-dimensional data provide rich inputs for deep learning models, facilitating human–machine interaction in augmented reality and robotic skin applications. In addition, this type of sensing data can be combined with other modalities (such as temperature or chemical signals) to support more complex object recognition or dynamic interactions.
Figure 2. Schematic diagram of common pressure sensing mechanisms [57]. (a) Resistive, (b) capacitive, (c) piezoelectric, (d) triboelectric. Copyright 2024, OAE Publishing Inc.

2.2. Temperature Sensors: Evaluating Thermal Dynamics and Environmental Characterization

Temperature sensing is another core function of e-skin, mimicking the ability of human skin to sense temperature changes. Temperature sensors in e-skin are usually based on the thermoelectric effect or on changes in the electrical properties of temperature-sensitive materials. Common temperature sensing principles include the thermoelectric effect, resistance changes, thermistors, and changes in the properties of semiconductor materials [58,59]. These sensors can detect changes in ambient temperature, body temperature, the presence of heat sources, and differences in temperature distribution. Resistive sensing is the predominant operating mechanism: the heat transfer/dissipation behavior or geometry of a thermistor can change significantly with temperature [57]. Common thermistor materials include metal oxides and silicon nitride [60]. In addition, organic hydrogels with high ionic concentration and ionic mobility exhibit increased conductivity as temperature rises, so ionic hydrogels are also widely used in temperature sensors [61]. The signals output from these sensors are mostly continuous time series and show strong correlation when combined with pressure sensors. For example, highly integrated temperature sensors on robotic hands can more accurately determine the temperature of objects, the direction of fluid flow, and the temperature distribution on curved surfaces by simultaneously analyzing the temperature and pressure distribution at contact points [62,63].
Biomimetic design has also been widely applied in the fabrication of temperature sensors. Inspired by the 'mesoglea' and 'ectoderm' structures of jellyfish, researchers combined hydrogels with flexible frameworks to design a highly sensitive temperature and pressure sensor with a structure similar to a jellyfish umbrella [64]. In this design, the capacitance and resistance responses to pressure and temperature differ significantly in both sensitivity and trend, and the authors successfully used a neural network based on the Keras Sequential framework to decouple the data and achieve accurate simultaneous measurement of temperature and pressure. This design significantly improves pressure sensitivity and effectively reduces signal drift, achieving a linear temperature sensitivity of −1.64 °C⁻¹ (Figure 3a). In addition, researchers implemented a novel flexible dual-mode strain/temperature sensor (DMSTS) using graphite powder/polyaniline/silicone rubber composites that mimic the bionic microstructure of a centipede's foot [65]. The DMSTS possesses an excellent strain range of 177% and a low detection limit of 0.5% strain, with a high sensitivity of 10.3 below 90 °C. Owing to the photothermal properties of graphite powder and polyaniline, the DMSTS has broad application prospects in human motion detection, infrared imaging, and photothermal applications. When integrated into smart sensing systems, it enables dynamic non-contact temperature measurement, detection of human micro-expressions, and monitoring of joint movements (Figure 3b). The data generated by these optimized temperature sensors have higher accuracy, providing strong support for subsequent deep learning models and making temperature sensing more accurate and reliable in a variety of application scenarios.
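To make the decoupling step concrete, the following is a minimal sketch of how a small Keras Sequential regression network could map the two coupled readouts (capacitance and resistance) to decoupled pressure and temperature estimates. The layer sizes, synthetic calibration data, and hyperparameters are illustrative assumptions, not the configuration used in the original work [64].

```python
# Minimal sketch: decoupling pressure and temperature from two coupled
# channels (capacitance, resistance) with a small Keras Sequential model.
# Layer sizes, data, and hyperparameters are illustrative only.
import numpy as np
import tensorflow as tf

# Synthetic calibration data: inputs are (capacitance, resistance) readings,
# targets are the ground-truth (pressure, temperature) applied during calibration.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2)).astype("float32")   # [delta_C/C0, delta_R/R0]
y = rng.normal(size=(2000, 2)).astype("float32")   # [pressure, temperature]

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(2),                      # regress pressure and temperature jointly
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, batch_size=64, verbose=0)

# At inference time, one coupled reading is mapped to decoupled estimates.
pressure, temperature = model.predict(X[:1], verbose=0)[0]
```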

2.3. Electrophysiological Sensors: Interpreting the Dynamic Patterns of Physiological Signals

Electrophysiological signals are electrical signals generated by biological activities within the body, and the long-term monitoring of bioelectric signals on the skin plays a crucial role in human–machine collaboration, disease prevention, and precise diagnosis [66]. Because close contact with the skin is required, good adhesion and low contact impedance between the sensor's electrodes and the skin are essential. Conventional Ag/AgCl gel electrodes offer good signal quality, but they are relatively rigid, and the gel dehydrates with prolonged use, degrading signal quality and irritating the skin. In contrast, biocompatible materials such as gold [67], conductive polymers [68], graphene [69,70], or nanowires [71] are often chosen to form ultrathin conductors. By attaching electrodes to specific locations on the human body, electrophysiological signals such as electrocardiograms (ECG), electroencephalograms (EEG), electromyograms (EMG), and surface electromyograms (sEMG) can be measured [72,73,74]. These data usually appear as time series with complex dynamic properties such as non-stationarity, strong noise, and nonlinear relationships between signals.
Moin et al. proposed a Hyperdimensional Computing (HDC) architecture in which raw data or preprocessed features of finger sEMG are projected onto 1000-dimensional bipolar ({−1, +1}^1000, for simplicity in describing mathematical operations) or binary ({0, 1}^1000, for implementation with digital logic) hypervectors, with information fully distributed across all bits, analogous to the way the human brain uses vast circuits of neurons and synapses for learning and recall [75]. The framework supports in-sensor training, inference in multiple new contexts, incremental updates to adapt to new contexts, and improved accuracy, and it can perform real-time inference for gesture classification with a latency of 50 ms.
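As an illustration of the encoding idea, the following NumPy sketch shows one common HDC scheme: each sEMG channel is assigned a fixed random bipolar hypervector, channel and quantized-value vectors are bound element-wise, the bound vectors are bundled into a single hypervector, and classification is performed by similarity to per-gesture prototypes. The dimensionality, channel count, and quantization scheme are illustrative assumptions rather than the exact pipeline of [75].

```python
# Sketch of bipolar hyperdimensional encoding and nearest-prototype
# classification for sEMG feature vectors. All sizes are illustrative.
import numpy as np

D, CHANNELS, LEVELS = 1000, 64, 16
rng = np.random.default_rng(0)
item = rng.choice([-1, 1], size=(CHANNELS, D))    # one hypervector per channel
level = rng.choice([-1, 1], size=(LEVELS, D))     # one hypervector per quantization level

def encode(sample):
    """sample: array of CHANNELS feature values scaled to [0, 1]."""
    idx = np.clip((sample * LEVELS).astype(int), 0, LEVELS - 1)
    bound = item * level[idx]                     # bind each channel with its value (element-wise product)
    return np.sign(bound.sum(axis=0))             # bundle and re-binarize

def train_prototypes(samples, labels, n_classes):
    protos = np.zeros((n_classes, D))
    for s, l in zip(samples, labels):
        protos[l] += encode(s)                    # accumulate per-class hypervectors
    return np.sign(protos)

def classify(sample, protos):
    return int(np.argmax(protos @ encode(sample)))  # highest similarity wins

# Toy usage with random "training" samples for 5 gesture classes.
samples = rng.random((100, CHANNELS))
labels = rng.integers(0, 5, size=100)
protos = train_prototypes(samples, labels, n_classes=5)
print(classify(samples[0], protos))
```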
In recent years, the design of electrophysiological sensors has also trended towards higher sensitivity and interference resistance. Researchers have designed nepenthes-inspired microstructures on the surface of dual-network hydrogel electrodes, which significantly improve the signal-to-noise ratio and reduce the effect of motion artifacts, thereby yielding high-quality ECG waveforms and heart rate recordings (Figure 4) [74]. Such improved, high-quality time series signals are commonly used for periodic pattern recognition and anomaly detection with deep learning models, such as the real-time monitoring of arrhythmias or the analysis of neural signals [46].

2.4. Optical Sensors: From Vision to Biometric Information

Optical signals carry characteristic information (e.g., refractive index, absorbance, and changes in light intensity) generated by the interaction between biological tissue and light. Conventional silicon- or glass-based optical sensors cannot meet the requirement of flexibility, so researchers have developed optical sensing interfaces based on flexible optical waveguide materials (e.g., PDMS [76] and hydrogels [77]), quantum dots, and nanoparticles [78]. Atomic-scale materials are quantum-confined, which gives rise to unique optoelectronic properties (e.g., strong optical coupling, multi-exciton generation, and tunnelling). In addition, hybrid perovskite materials have attracted attention for their excellent optoelectronic properties [79]. Owing to the high degree of freedom in material processing, perovskites are available in a variety of low-dimensional structures, which is promising for flexible electronics. Applications of optical sensors in e-skin include UV monitoring [80], pulse oximetry [81], and photoplethysmography (PPG) [82]. Data from these sensors take the form of time series and frequency-domain signals that reflect the optical properties of biological tissues or the environment. By analyzing PPG signals and triaxial accelerometry with deep learning, changes in blood flow velocity or heart rate can be accurately calculated, enabling the early detection of atrial fibrillation [81].
More importantly, optical sensors can be integrated with other types of sensors, such as pressure and temperature sensors, to form multimodal sensing networks. This data fusion technique opens up new ideas for comprehensively evaluating complex object properties as well as human physiological states. For example, by integrating optical and mechanical sensing information, researchers have developed an optical artificial skin that is highly sensitive to external mechanical compression and strain. It is more robust and transparent, with a larger sensing area and superior mechanical properties [83].

3. Characterization of Electronic Skin Sensing Data

3.1. Higher Dimensionality

Data from e-skins are typically stored in high-dimensional arrays. For example, arrays of pressure sensors can provide information on distribution in two dimensions. When combined with multiple sensors, the data dimensions can be extended to three or more dimensions. This multidimensional nature imposes extreme computational demands for data processing. Unlike static image data, the data generated by e-skin not only reflect spatial patterns but also contain dynamic changes, creating unique time-domain features. In high-resolution tactile pressure-detection arrays, data at a single time point can reach hundreds of thousands of dimensions. The data size grows exponentially when data acquisition is performed at kilohertz sampling rates [28].
Traditional linear dimensionality reduction methods, such as principal component analysis (PCA) or linear discriminant analysis (LDA), can reduce the dimensionality of the data but are ineffective at capturing the nonlinear features present in high-dimensional data [84]. Deep learning models, particularly convolutional neural networks and models based on attention mechanisms, show significant advantages in extracting complex spatial pattern features [10,85]. However, the number of parameters in attention-based transformer models often exceeds 10^6; optimization strategies such as model pruning or binarized networks can be used for embedded deployment [86]. Moreover, high-dimensional data open up the possibility of multimodal feature extraction. Ye et al. proposed a bimodally coupled multifunctional haptic sensor for non-contact gesture recognition and material identification to address the challenges posed by signal interference and high power consumption in multimodal collaboration [87]. The sensor symmetrically integrates capacitive and triboelectric sensors, using an energy-complementary approach to reduce energy consumption and effectively prevent signal interference. Using deep learning techniques, the sensor can infer material properties (such as hardness, softness, and deformation) from the detected pressure, reliably recognizing a wide range of different materials.
However, while high-dimensional data offer possibilities for multimodal feature extraction, they are also constrained by a number of factors. From the perspective of the physical design of the e-skin device, the size and compactness of the sensors limit the spatial resolution of data acquisition. Limited transmission bandwidth and on-device storage (typically <1 GB) restrict data transfer and retention, low power budgets (typically <10 mW) constrain the deployment of complex algorithms, and the thermal effects of the flexible substrate further constrain the performance of the computational units [88].
In terms of application scenarios, different applications have different requirements for real-time data processing and accuracy. In time-sensitive scenarios such as medical health monitoring, e-skin must process physiological data in real time to provide timely health warnings, and the transmission delay of high-dimensional data may affect the timeliness of anomaly detection; for example, cardiac ultrasound imaging requires processing speeds of 30 fps or more to meet clinical diagnostic needs. Moreover, in complex real-world environments, e-skins may be subject to multiple interferences, such as electromagnetic interference and ambient temperature variations. To address these issues, recent research has proposed a layered energy management strategy. An event-driven sampling mechanism is used in the signal preprocessing stage, as in event-based sensor arrays where only pixels that undergo changes in light intensity generate data [89]; this design reduces the 16,000 data points collected every 50 microseconds to about 200, a roughly 98% reduction in data volume. Introducing a spiking neural network in the feature extraction stage reduces the computation of convolutional operations by 75% through spatio-temporal sparse coding [90]. In addition, a distributed architecture based on federated learning allows 90% of the model training to be carried out in the cloud, requiring only lightweight inference (<100 kB memory footprint) locally, significantly reducing end-side power consumption [91].
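The event-driven idea can be illustrated with a simple delta-threshold sampler: instead of streaming every reading of a sensor array, only elements whose value has changed by more than a threshold since the last transmitted value are emitted as (index, value) events. The array size, threshold, and synthetic data below are illustrative assumptions, not the scheme used in [89].

```python
# Hedged sketch of delta-based event-driven sampling for a sensor array.
import numpy as np

def event_sample(frame, last_sent, threshold=0.05):
    """Return sparse (index, value) events and update per-element reference values."""
    changed = np.abs(frame - last_sent) > threshold
    events = list(zip(np.flatnonzero(changed), frame[changed]))
    last_sent[changed] = frame[changed]
    return events, last_sent

# Example: a 16,000-element array where only a small patch changes per frame.
rng = np.random.default_rng(0)
ref = rng.random(16_000)
frame = ref.copy()
frame[100:300] += 0.2                      # localized change
events, ref = event_sample(frame, ref)
print(len(events), "events instead of", frame.size, "raw values")
```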

3.2. Temporal Characterization and Dynamic Dependencies

The temporal nature of e-skin data is primarily reflected in the monitoring of bioelectrical signals (e.g., ECG and EEG), motion sensing data, and temperature variations. The dynamic changes in these signals contain information related to health status, tactile patterns, and environmental changes. For instance, certain pathological features in ECG signals (e.g., arrhythmia) may only appear during specific time periods or periodic trends, requiring the analysis of long time series for accurate detection [92,93]. Additionally, activity patterns in EEG signals (e.g., alpha or gamma waves) are closely linked to sleep stages and neural states, and extracting their features requires modeling temporal relationships [94].
Compared to traditional static signals, e-skin time series data exhibit pronounced non-stationarity. This non-stationarity arises from various factors, such as sudden temperature changes in the environment, artifacts caused by poor sensor contact, and signal distortion during movement, all of which lead to fluctuations in the time series data. This complexity challenges traditional linear analysis methods (e.g., Fourier transforms), which often struggle to capture the nonlinear dynamic features present in such signals [94,95].
To tackle the challenges of analyzing complex non-stationary time series data, data analysis methods must be robust enough to ensure stable and reliable results. Deep learning techniques, particularly recurrent neural networks (RNNs), long short-term memory networks (LSTMs), and their variants, have shown significant advantages in modeling the dynamic relationships within time series data. These methods have been effectively applied to e-skin data, aiding in the interpretation and understanding of complex waveforms and signal patterns [96,97].

3.3. Noise and Artifacts

Electronic skin signals are often influenced by various factors during acquisition, leading to significant noise and artifacts in the data. These interferences can arise from external sources, such as electromagnetic noise, temperature fluctuations, and humidity changes, as well as internal factors like poor contact between the electrodes and the skin. Furthermore, motion-induced changes in sensor positioning can distort the signals. For example, EMG signals frequently contain artifacts caused by electrode slippage or environmental noise, which lower the signal-to-noise ratio and complicate signal processing and feature extraction [72].
To mitigate noise and artifacts in e-skin signals, researchers employ various noise reduction techniques. Traditional methods, such as low-pass filtering and wavelet transforms, can effectively remove high-frequency noise. Wavelet transforms, in particular, are advantageous for preserving essential signal features when working with non-stationary signals; however, their adaptability in complex noise environments is limited. In recent years, deep learning techniques have introduced innovative methods for noise reduction. Self-supervised learning has also been applied to pre-training tasks to construct latent feature distributions of unlabeled data, enhancing signals and suppressing artifacts [98,99]. Variational Autoencoders (VAEs) have also been employed to reconstruct clean signals by learning latent feature distributions [100].
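As a simplified illustration of learning-based denoising, the following PyTorch sketch trains a small one-dimensional convolutional denoising autoencoder to map noise-corrupted windows back to clean windows. This is a plain autoencoder rather than the VAE of [100], and the window length, channel counts, and noise model are arbitrary assumptions.

```python
# Illustrative 1D convolutional denoising autoencoder for e-skin time series.
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, kernel_size=7, stride=2, padding=3, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=7, stride=2, padding=3, output_padding=1),
        )

    def forward(self, x):                          # x: (batch, 1, length)
        return self.decoder(self.encoder(x))

model = DenoisingAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.randn(8, 1, 256)                     # placeholder clean windows
noisy = clean + 0.1 * torch.randn_like(clean)      # synthetic corruption
loss = nn.functional.mse_loss(model(noisy), clean) # train to reconstruct the clean signal
loss.backward()
opt.step()
```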

3.4. Multimodal Characteristics and Fusion Challenges

A significant advantage of e-skin is its multimodal sensing capability, which enables the simultaneous collection of various types of information, such as pressure, temperature, bioelectrical signals, and chemical data. This multimodal sensing not only provides individual physical data but also reveals more complex patterns through their interactions. For instance, in medical applications, the combined analysis of bio-signals such as pressure, temperature, and body fluids enables more accurate assessments of a patient’s inflammatory state or circulatory abnormalities [101]. Additionally, in smart prosthetics, the integration of bioelectrical signals with haptic feedback creates a more natural and intuitive human–machine interaction experience [102]. However, differences in the dynamic range, sampling frequency, and noise characteristics of different signals pose challenges for efficient multimodal data fusion.
The fusion of multimodal data involves several key challenges. First, data from different sensors often have different timestamps and sampling rates and frequently suffer from poor time synchronization, leading to timing alignment difficulties. For example, pressure sensors may be sampled hundreds of times per second, while temperature sensors may have much lower sampling frequencies. To achieve accurate data alignment, complex interpolation and synchronization algorithms (e.g., Dynamic Time Warping (DTW)) may be introduced [103]. These algorithms require additional computational resources and computation cycles, increasing energy consumption; this not only affects the accuracy of the data but also increases the difficulty of data processing. Second, differences in the data representations across modalities increase the complexity of feature fusion. Pressure signals are typically stored as two-dimensional matrices, while bioelectrical and chemical signals are represented as one-dimensional time series. This disparity complicates the uniform modeling of features across modalities, potentially impacting the quality of fusion. Moreover, noise in multimodal signals presents significant challenges: noise in one modality may propagate through the fusion process and distort other signals, reducing the reliability of the final analysis. Most importantly, the interactive nature of multimodal data complicates signal decoupling. In complex sensor circuits or multimodal bioelectronic skins that integrate strain, temperature, and pressure detection, effective decoupling strategies are essential; these strategies must distinguish the independent contributions of each modality while preserving the intermodal interaction characteristics [101,104].
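To make the alignment step concrete, the following NumPy sketch implements classic dynamic time warping by cumulative-cost dynamic programming, comparing a fast channel with a much more slowly sampled one. It is purely illustrative; practical systems would typically use an optimized DTW library and a warping-window constraint.

```python
# Minimal NumPy implementation of classic dynamic time warping (DTW).
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

pressure = np.sin(np.linspace(0, 6, 300))     # fast channel (hundreds of samples per window)
temperature = np.sin(np.linspace(0, 6, 30))   # much lower sampling rate
print(dtw_distance(pressure, temperature))
```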

4. Deep Learning Methods in Data Analytics

4.1. Basic Concepts of Modeling and Different Application Scenarios

E-skin sensing data are characterized by high-dimensionality, temporality, and multimodality. Deep learning techniques have become essential tools for analyzing and interpreting these complex data. Below are several commonly used deep learning technologies and their specific applications in e-skin data analysis. Table 1 provides an overview of how different deep learning technologies can be applied to e-skin data, highlighting their advantages, disadvantages, and applications in wearable e-skins.

4.1.1. Convolutional Neural Networks

Convolutional neural networks (CNNs) are feed-forward neural networks designed to extract features from data with a convolutional structure, such as images and matrices. They perform well in image processing and spatial pattern recognition. Unlike traditional feature extraction methods [114], CNNs are inspired by the process of visual perception [115], and their architecture allows for automatic feature extraction from input data. This eliminates the need for complex manual feature engineering. CNNs consist of the following key components:
  • Convolutional Layer: This is the core component of a CNN, which is responsible for extracting local features from the input data using multiple filters. Each convolutional kernel slides across the input data to generate a feature map, capturing important spatial information such as edges, corners, and textures [116,117,118]. Due to the local connectivity property, CNNs can effectively reduce the number of parameters and computational complexity.
  • Activation Function: After the convolutional layer, a nonlinear activation function (e.g., ReLU, Sigmoid, or Tanh) is typically applied to introduce nonlinearity. This enables the network to learn more complex feature representations, capture higher-order patterns in the input data, and improve the model’s expressive power [119].
  • Pooling Layer: The pooling layer reduces the size and computational complexity of the feature map by downsampling while enhancing the invariance of the features. This helps prevent overfitting and improves the model’s ability to tolerate input deformations such as rotations or displacements.
  • Fully Connected Layers: After multiple convolutional and pooling layers, the features are flattened and passed through one or more fully connected layers. These layers map the extracted features to the final output, such as classification labels or regression values, allowing the model to make the final decision.
In electronic skin applications, CNNs effectively extract sensor data features (such as pressure and temperature) through hierarchical feature learning, enabling tasks such as surface identification, feature extraction, and health detection [120,121,122]. This makes CNNs highly accurate in analyzing pressure sensor arrays. However, while CNNs excel at handling local features, they may struggle to capture long-range dependencies. Additionally, CNNs require large datasets, and their performance can be affected by the quality and quantity of the input data. As a result, CNNs may not perform well in environments with small datasets.
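As a concrete illustration of the components listed above, the following PyTorch sketch stacks convolution, activation, and pooling stages followed by fully connected layers to classify frames from a pressure-sensor array. The array size (16 × 16), channel counts, and number of classes are arbitrary assumptions for the sketch.

```python
# Illustrative PyTorch CNN for classifying pressure-sensor array frames.
import torch
import torch.nn as nn

class TactileCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),  # convolution + activation
            nn.MaxPool2d(2),                                        # pooling: 16x16 -> 8x8
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                        # 8x8 -> 4x4
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, n_classes),                               # fully connected output layer
        )

    def forward(self, x):                    # x: (batch, 1, 16, 16) pressure maps
        return self.classifier(self.features(x))

logits = TactileCNN()(torch.randn(4, 1, 16, 16))
print(logits.shape)                          # torch.Size([4, 10])
```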

4.1.2. Recurrent Neural Networks and Their Variants

Recurrent neural networks (RNNs) are a class of neural networks that take sequence data as inputs and recursively process the data in the sequence’s temporal direction, with all nodes connected in a chained fashion. The core feature of RNNs is their ability to dynamically maintain the contextual information of the input sequence through temporal recursive connections. This characteristic enables RNNs to effectively capture temporal dependencies, making them suitable for tasks such as speech recognition and time series prediction [17,123,124]. The basic components of an RNN include:
  • Input Layer: Receives time series data as inputs, typically shaped as the number of samples, time steps, and number of features.
  • Hidden Layer: Recursively memorizes and updates information from previous time steps. The hidden state at each time step depends not only on the current input but also on the hidden state from the previous time step. This structure allows the RNN to capture temporal relationships in sequential data.
  • Output Layer: Generates the prediction result for the current time step based on the hidden state, ensuring continuous information flow.
In gradient computation within RNNs, issues such as vanishing or exploding gradients may arise, especially when the number of time steps is large. To address these challenges and improve RNN performance on long sequential data, long short-term memory (LSTM) networks and gated recurrent units (GRUs) were introduced:
  • Long Short-Term Memory: LSTM networks introduce memory cells that share the same shape as the hidden states and are used to store additional information [108]. LSTM controls the flow of information via forget gates, input gates, and output gates: the forget gate decides which information should be discarded, the input gate selects the new information to be added, and the output gate controls the output of the hidden state. This architecture allows LSTMs to efficiently capture long-term dependencies in time series data, which is particularly important for physiological signal monitoring and anomaly detection.
  • Gated Recurrent Unit: A GRU is a simplified version of an LSTM, merging input and forget gates to reduce the model’s complexity [109]. Due to their reduced number of parameters, GRUs are widely used in scenarios requiring faster computational speeds, such as real-time motion pattern recognition.
In e-skin applications, both LSTM networks and GRUs are capable of monitoring physiological signal changes and analyzing health status in real time. The introduction of these models resolves the challenges faced by traditional RNNs in analyzing long sequences, improving both the accuracy and efficiency of the models.
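For a sense of how such a recurrent model is wired up, the following PyTorch sketch classifies fixed-length windows of multichannel e-skin time series with a two-layer LSTM; swapping nn.LSTM for nn.GRU gives the lighter GRU variant. Channel count, window length, and class count are illustrative assumptions.

```python
# Sketch of an LSTM classifier for windows of multichannel e-skin time series.
import torch
import torch.nn as nn

class SignalLSTM(nn.Module):
    def __init__(self, n_channels=4, hidden=64, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, time_steps, n_channels)
        _, (h_n, _) = self.lstm(x)         # h_n: (num_layers, batch, hidden)
        return self.head(h_n[-1])          # classify from the last layer's final hidden state

logits = SignalLSTM()(torch.randn(8, 200, 4))   # 8 windows of 200 time steps
print(logits.shape)                              # torch.Size([8, 5])
```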

4.1.3. Transformer

A transformer is a neural network architecture based on a self-attention mechanism, which has achieved success in fields such as natural language processing by addressing the long sequence dependency problem through parallel computation and multilayer feature extraction [110]. The key components of the transformer include:
  • Multi-head Self-attention Mechanism: This mechanism allows the model to compute the relevance of each position in the input sequence, adaptively focusing on information from different positions. This helps the model capture long-range dependencies and context, making it especially well-suited for processing long sequences of data.
  • Feed-forward Neural Network: After each self-attention layer, there is a feed-forward neural network responsible for enhancing and transforming the representation of each position. This network typically consists of two linear transformation layers and an activation function, allowing the extracted features to have rich expressive power.
  • Layer Normalization and Residual Connections: Between the self-attention layers and the feed-forward networks, layer normalization is applied to improve training stability and convergence speed, while residual connections help reduce the difficulty of training deep networks.
The encoder–decoder structure of the transformer can handle various types of inputs and outputs, making it suitable for tasks like health monitoring, material analysis, and facial expression recognition [111,125]. In the context of e-skin applications, transformers can effectively integrate data from different sensors, helping to infer complex patterns and improve the handling of multimodal data. However, the complex architecture of transformers requires significant computational resources and memory, and they may not perform as well on smaller datasets compared to other models.
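The following PyTorch sketch shows one minimal way to apply a transformer encoder to fused multimodal sensor sequences: readings at each time step are projected to a model dimension, combined with a learned positional embedding, passed through stacked self-attention blocks, and mean-pooled for classification. All sizes are illustrative assumptions, and the learned positional embedding is a simplification of the original architecture's positional encoding [110].

```python
# Minimal transformer-encoder sketch for multimodal sensor sequence classification.
import torch
import torch.nn as nn

class SensorTransformer(nn.Module):
    def __init__(self, n_features=8, d_model=64, n_classes=6, max_len=256):
        super().__init__()
        self.proj = nn.Linear(n_features, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                          # x: (batch, time, n_features)
        t = torch.arange(x.size(1), device=x.device)
        h = self.proj(x) + self.pos(t)             # add positional information
        h = self.encoder(h)                        # multi-head self-attention blocks
        return self.head(h.mean(dim=1))            # mean-pool over time for classification

print(SensorTransformer()(torch.randn(2, 100, 8)).shape)   # torch.Size([2, 6])
```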

4.1.4. Self-Supervised Learning and Transfer Learning

Self-supervised learning and transfer learning are important strategies for applying deep learning in e-skin technologies. Self-supervised learning allows models to be effectively trained without large amounts of labeled data by designing pre-training tasks. For example, the model can learn by reconstructing or predicting data in a time series, which is particularly useful when data are scarce [112].
Transfer learning, on the other hand, leverages deep learning models that have been trained on large datasets, using the knowledge gained to significantly accelerate the training process for small-sample learning tasks. By providing the model with pre-trained parameters, it can converge more quickly on new e-skin datasets, enabling e-skin technologies to rapidly adapt to different environments and tasks, resulting in improved learning efficiency and accuracy [113].
However, the effectiveness of these methods typically depends on the similarity between the source and target tasks. If the features from the source task cannot be directly transferred to the new task, it may cause a decline in model performance.
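A typical small-sample transfer learning workflow is sketched below: a feature extractor pretrained on a large source dataset is frozen, and only a new task-specific head is trained on the small e-skin dataset. The backbone here is a stand-in Sequential module, and the checkpoint path is hypothetical.

```python
# Hedged sketch of transfer learning: freeze a pretrained feature extractor
# and fine-tune only a new output head on a small e-skin dataset.
import torch
import torch.nn as nn

backbone = nn.Sequential(            # stand-in for a pretrained feature extractor
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
)
# backbone.load_state_dict(torch.load("pretrained_backbone.pt"))  # hypothetical checkpoint

for p in backbone.parameters():      # freeze the transferred weights
    p.requires_grad = False

head = nn.Linear(64, 3)              # new task-specific head (e.g., 3 gestures)
model = nn.Sequential(backbone, head)

# Only the head's parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
x, y = torch.randn(16, 128), torch.randint(0, 3, (16,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
```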

4.2. Data Preprocessing and Feature Extraction

In deep learning applications for e-skin, data preprocessing and feature extraction are crucial steps to ensure optimal model performance and accuracy. As mentioned earlier, the data generated by e-skin often suffer from issues such as noise, instability, and high dimensionality. Therefore, effective preprocessing must be carried out before feeding the data into the deep learning model to enhance the quality and reliability of the analysis.

4.2.1. Data Cleaning and Noise Reduction

Electronic skin signals are often affected by noise and artifacts, and data cleaning and noise reduction are essential solutions. The sources of noise include electromagnetic interference, poor sensor contact, and environmental factors. To improve signal quality and ensure analytical accuracy, effective noise reduction methods must be applied. Traditional noise reduction methods utilize signal processing techniques to remove noise while preserving the relevant components.
Common noise reduction techniques include low-pass filtering and wavelet transforms. Low-pass filtering effectively removes high-frequency noise while retaining low-frequency components, improving the signal-to-noise ratio. For data from multimodal electronic skin sensors, low-pass filtering helps reduce high-frequency interference caused by poor contact or sensor noise, resulting in more stable signals [126]. Wavelet transforms decompose signals and remove noise through time–frequency analysis, making them suitable for processing non-stationary signals and those with abrupt changes or localized features. These techniques are commonly used to remove artifacts caused by environmental factors or motion and are particularly effective for sensor data with large fluctuations, such as strain and skin temperature signals [127].
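The two traditional approaches can be sketched in a few lines with SciPy and PyWavelets: a zero-phase Butterworth low-pass filter and soft-threshold wavelet denoising. The sampling rate, cutoff frequency, wavelet choice, and threshold below are illustrative and would need to be tuned to the actual signal.

```python
# Illustrative low-pass filtering and wavelet denoising of a noisy 1D signal.
import numpy as np
import pywt
from scipy.signal import butter, filtfilt

fs = 500.0                                         # assumed sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)

# 1) Low-pass filtering: zero-phase Butterworth filter removing content above 20 Hz.
b, a = butter(4, 20, btype="low", fs=fs)
lowpassed = filtfilt(b, a, x)

# 2) Wavelet denoising: decompose, soft-threshold detail coefficients, reconstruct.
coeffs = pywt.wavedec(x, "db4", level=4)
coeffs[1:] = [pywt.threshold(c, 0.2, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: x.size]
```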
While traditional noise reduction methods are effective, deep-learning-based techniques offer greater potential with the advancement of deep learning technologies. Self-supervised learning, convolutional neural networks, and generative adversarial networks can automatically learn the spatial and temporal characteristics of noise and effectively remove it, especially excelling in high-dimensional and multimodal data processing [128,129,130]. However, despite the higher accuracy of deep learning methods, traditional approaches remain effective and widely used for noise reduction, particularly when real-time processing and computational resources are limited.

4.2.2. Feature Extraction

Feature extraction is a crucial part of data preprocessing, as it transforms raw signals into meaningful features that can be utilized by deep learning models. Effective feature extraction helps the model extract key information, enhancing learning capability and improving prediction accuracy.
Electronic skin signals are typically temporal and multimodal, requiring feature extraction methods that reflect their multidimensional characteristics. In time-domain analysis, commonly used statistical features such as the mean, standard deviation, skewness, and kurtosis provide preliminary input data for deep learning models [28,131]. These features reveal the fluctuation patterns of the signal and assist in monitoring changes in various physiological states. Additionally, if many candidate time-domain features of uncertain usefulness are extracted before initial training, principal component analysis (PCA) is often effective for selecting informative ones [132].
For periodic signals, frequency-domain feature extraction becomes essential. For high-frequency signals such as sound and texture-induced vibrations, time-domain features alone may not fully describe the signal. By applying the Fourier transform, time-domain signals can be converted into the frequency domain, revealing the signal’s frequency components [133,134]. Frequency-domain analysis allows for differentiation between low- and high-frequency components, providing more detailed feature inputs for deep learning models. Furthermore, many electronic skin signals exhibit variations in both the time and frequency domains, so traditional time-domain or frequency-domain methods may fail to capture the full dynamic characteristics of the signals. To address this, time–frequency analysis methods such as the short-time Fourier transform (STFT), wavelet transform (WT), and Wigner–Ville distribution (WVD) can be used to extract features from both the time and frequency dimensions [135,136,137].
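The following sketch illustrates both views on a synthetic signal: an FFT-based spectrum for frequency-domain features and an STFT spectrogram for time-frequency features. The sampling rate and STFT parameters are illustrative rather than values from refs [133,134,135,136,137].

```python
# Frequency-domain and time-frequency feature extraction on a synthetic signal.
import numpy as np
from numpy.fft import rfft, rfftfreq
from scipy.signal import stft

fs = 2000.0                                # assumed sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)

# Frequency-domain features: spectrum magnitude and dominant frequency
spectrum = np.abs(rfft(signal))
freqs = rfftfreq(signal.size, 1 / fs)
dominant_freq = freqs[np.argmax(spectrum)]

# Time-frequency features: STFT magnitude, suited to non-stationary signals
f, frame_times, Z = stft(signal, fs=fs, nperseg=256, noverlap=128)
spectrogram = np.abs(Z)                    # (frequency bins, time frames)
```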

5. Key Applications of Deep-Learning-Powered E-Skin

As an innovative technology that mimics and even surpasses the perceptual capabilities of human skin, e-skin can collect multidimensional physiological and environmental data in real time. Thanks to the pattern recognition and feature learning capabilities of deep learning, e-skin plays a critical role in processing these complex data and enabling intelligent decision-making. Table 2 shows representative e-skins applying deep learning techniques in recent years.

5.1. Cardiovascular Disease Monitoring

Active and continuous monitoring of blood pressure (BP) is one of the most fundamental practices in modern medicine, playing a crucial role in preventing deaths related to cardiovascular diseases [158]. Currently, physicians rely on traditional cuff sphygmomanometers to measure static values of systolic blood pressure (SBP), diastolic blood pressure (DBP), and mean arterial pressure (MAP) [159]. However, these devices do not allow for the continuous and uninterrupted monitoring of a patient’s hemodynamics in daily, ambulatory, and nocturnal settings [160].
Existing cuffless blood pressure monitoring methods, which use acoustic, pressure, or optical modalities, struggle to capture accurate blood pressure signals while maintaining good skin compatibility [161]. To address these issues, researchers have designed a self-adhesive, low-impedance graphene electronic tattoo (GET) that adheres firmly to the skin, ensuring stable positioning over time [162]. The device senses hemodynamic parameters directly from the arteries by measuring bioimpedance (Bio-Z), a signal that remains relatively stable over extended wear. Using a deep learning approach, the acquired Bio-Z data are modeled to build a reliable BP estimation model. The system operates with high fidelity, causes no disturbance to the patient, and provides accuracy comparable to that of a Class A wearable blood pressure measurement device, offering a promising solution for continuous wearable blood pressure monitoring.
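To make the modeling step concrete, the hedged sketch below shows one possible way to regress blood pressure values from windows of Bio-Z samples with a small one-dimensional convolutional network. It illustrates the general idea only and is not the model used in ref [162]; the window length, architecture, and output convention (SBP, DBP, MAP) are assumptions.

```python
# Illustrative Bio-Z -> blood pressure regression sketch (not the GET pipeline).
import torch
import torch.nn as nn

class BioZToBP(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, 3)       # assumed outputs: SBP, DBP, MAP (mmHg)

    def forward(self, x):                  # x: (batch, 1, window_length)
        return self.head(self.features(x).squeeze(-1))

model = BioZToBP()
bio_z = torch.randn(4, 1, 512)             # placeholder Bio-Z windows
pred_bp = model(bio_z)                      # (4, 3) predicted pressure values
```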

5.2. Elderly Care

E-skin can also monitor the activity level, posture, and sleep quality of older adults to support safe care. Park et al. developed a double-layer nanofibrous triboelectric nanogenerator (TENG) using an electrospinning process, placed it in an insole, and analyzed its triboelectric sensing data with deep learning techniques (Figure 5) [34]. The device recognizes the user’s activity status from the compression of the insole and can respond promptly if an elderly person falls or exhibits abnormal activity. In addition, Xu et al. developed an e-skin based on metallic fabrics and natural triboelectric effects that detects the electrical signals generated by friction between the shoe sole and the ground, or between the skin and fabric, when the body moves or changes its physical state [163]. People often make movements such as rolling over, bending their legs, or raising their hands while sleeping, and these movements typically occur not individually but as combinations of several actions. When they occur, friction between the hands and clothing or between the legs and the bed generates electrical signals that change in sequence over time. These data are analyzed by a one-dimensional convolutional neural network, which can effectively track sleep states. The system can remotely recognize the body’s state and alert caregivers to changes in physiological signals caused by unexpected events.
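The sketch below illustrates the kind of one-dimensional convolutional classifier described above for triboelectric activity or sleep-state recognition; the channel count, window length, and number of classes are illustrative assumptions rather than details from refs [34,163].

```python
# Illustrative 1-D CNN classifier for triboelectric activity/sleep-state windows.
import torch
import torch.nn as nn

class Activity1DCNN(nn.Module):
    def __init__(self, in_channels=1, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):                  # x: (batch, channels, samples)
        return self.net(x)

model = Activity1DCNN()
signals = torch.randn(8, 1, 1000)           # placeholder triboelectric windows
logits = model(signals)                      # (8, n_classes) activity scores
```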

5.3. Sweat Monitoring

Sweating is the body’s physiological response to the movement of fluids from sweat glands to the skin’s surface. Individual sweat levels can serve as an important physiological marker of the body’s health, comfort, emotional state, and exertion. The current wearable sweat monitoring process can be divided into two parts: sweat collection and analysis. Sweat collection primarily relies on microfluidic devices. Microfluidic sensors use multilayered structures to collect sweat, which implies a complex manufacturing process, high production costs, and the use of rigid materials [164].
Colorimetric analysis is an optical method in which the target analyte reacts with a reagent under defined conditions to form a colored compound. Light is then passed through the colored solution and the sample to be measured, and the shade of the colored solution is compared with that of a standard solution. However, the safety of this method requires further consideration [165].
To address these challenges, Chen et al. developed a colorimetric electronic skin for smart sweat monitoring [166]. This e-skin consists of a polyurethane nanoweb and an object detection algorithm, YOLOv3. The polyurethane nanoweb has 44% porosity and exhibits capillarity, making it a low-cost colorimetric indicator. When sweat is absorbed, the nanoweb’s volume expands by 362.37%, and its light transmission changes by 277.78%. The researchers used a capillary-action-based finite-element model to explain the change in light transmission, created a database of 735 images, and analyzed the sweat data using the YOLOv3 algorithm. In testing, the system recognized sweat volume with 97% accuracy (Figure 6). This research offers a new direction for healthcare applications.

5.4. Texture Recognition

The haptic capability of electronic skin enables intelligent robotic systems and prosthetics to perform precise motion trajectory planning and to interact naturally with humans and the environment. Traditional unimodal tactile sensors can only detect mechanical stimuli and lack the ability to sense material properties, while multimodal sensors face challenges in decoupling their signals, which may interfere with each other [167,168]. Song et al. developed a novel tactile sensor based on triboelectric nanogenerators that achieves multimodal sensing with a single mechanism [45]. Thanks to its inherent contact electrification properties and its grating structure, the macroscopic features of the output signal (such as amplitude, trend, and envelope) can be utilized for material identification, while the microscopic features (such as frequency, change points, and variance) can be used for surface texture recognition. The integrated haptic system based on this sensor enables real-time, synchronized recognition of materials and textures. Touch is converted into electrical signals, which are then decoupled into macro-features and micro-features through signal processing. These features are fed into a convolutional neural network for feature extraction and pattern decoding. The model predicts materials and textures with 99.07% and 99.32% accuracy, respectively (Figure 7). Additionally, the system can recognize the raised dots of Braille, identify Braille information, and distinguish the material on which the Braille is printed. This capability can significantly help blind and visually impaired individuals better understand Braille on various infrastructures.
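As an illustration of the decoupled macro-/micro-feature idea, the sketch below feeds the two feature groups to separate branches with a material head and a texture head. The feature dimensions, class counts, and the use of fully connected branches are assumptions made for clarity and do not reproduce the convolutional model of ref [45].

```python
# Illustrative two-head network: macro features -> material, micro features -> texture.
import torch
import torch.nn as nn

class MaterialTextureNet(nn.Module):
    def __init__(self, macro_dim=32, micro_dim=64, n_materials=6, n_textures=8):
        super().__init__()
        self.macro_branch = nn.Sequential(nn.Linear(macro_dim, 64), nn.ReLU())
        self.micro_branch = nn.Sequential(nn.Linear(micro_dim, 64), nn.ReLU())
        self.material_head = nn.Linear(64, n_materials)   # from macro features
        self.texture_head = nn.Linear(64, n_textures)     # from micro features

    def forward(self, macro, micro):
        return (self.material_head(self.macro_branch(macro)),
                self.texture_head(self.micro_branch(micro)))

model = MaterialTextureNet()
macro = torch.randn(4, 32)       # e.g., amplitude/trend/envelope descriptors
micro = torch.randn(4, 64)       # e.g., frequency/change-point/variance descriptors
material_logits, texture_logits = model(macro, micro)
```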

5.5. Gesture Recognition

Gesture interaction, as an important natural interface, allows communication with machines through instinctive and perceptual behaviors. It is one of the areas of human–machine interaction that requires further breakthroughs [169]. Various techniques, including machine vision, infrared, radar, ultrasound, and electrical signals, have been widely studied and explored for gesture recognition. By introducing a microscale incompatible phase into a polymer network, Sun et al. developed a highly elastic and durable ionogel sensor [10]. This sensor is characterized by high sensitivity, a broad response range, and high durability, making it ideal for accurately monitoring human activity. Using this strain sensor, gestures corresponding to the numbers 0 to 9 were precisely captured from five participants with different hand characteristics, creating a reliable dataset. The gesture sensor signals are compensated using a deep convolutional neural network to form a system capable of dynamic gesture recognition (Figure 8). A system based on this model can analyze both the temporal and spatial characteristics of the sensor data, helping to better understand the dynamic process of gestures.

5.6. Emotion Recognition

Emotions are important indicators of people’s internal states, and understanding emotions can help predict human behavior and provide appropriate feedback. Currently, speech signal analysis, image-based facial action coding systems, and electrophysiological signal analysis are commonly used for emotion recognition [170,171,172]. Among these methods, electrophysiological signals offer high accuracy and sensitivity. Du et al. reported a 40 nm thick film based on a photolithographic double-network conductive polymer mediated by a graphene layer [11]. When applied over the zygomatic bone, this film can monitor subtle facial movements induced by visual stimuli and capture the corresponding facial electromyography (fEMG) signals. After preprocessing the fEMG signals with a moving average filter, they were classified into three emotional categories: positive, negative, and neutral. A bidirectional long short-term memory network was used to build the classification model, achieving an accuracy of 93%. Through accurate data acquisition and machine learning, the model effectively identified and analyzed the valence of the emotional experience (positive or negative) and the intensity of the emotion (Figure 9).
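A minimal sketch of this processing chain is shown below, assuming a simple moving-average filter and a bidirectional LSTM classifier over fEMG windows; the window length, hidden size, and other hyperparameters are illustrative and not those of ref [11].

```python
# Illustrative fEMG emotion classifier: moving-average smoothing + BiLSTM.
import torch
import torch.nn as nn

def moving_average(x, k=5):
    """Moving-average smoothing along time (x: batch, time, features)."""
    return nn.functional.avg_pool1d(x.transpose(1, 2), k, stride=1,
                                    padding=k // 2).transpose(1, 2)

class EmotionBiLSTM(nn.Module):
    def __init__(self, n_features=1, hidden=64, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)   # positive/negative/neutral

    def forward(self, x):                  # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # classify from the last time step

model = EmotionBiLSTM()
femg = torch.randn(4, 200, 1)               # placeholder fEMG windows
logits = model(moving_average(femg))         # (4, 3) class scores
```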

5.7. Virtual Shopping

With the rapid development of the Internet and logistics, online shopping has become an indispensable part of our daily lives, offering great convenience and enabling us to obtain desired products without leaving home. To better enhance the customer experience and provide an immersive shopping experience in a virtual store using VR technology, Gong et al. reported an intelligent soft robotic arm supported by tactile and length triboelectric nanogenerator (T-TENG and L-TENG) sensors and polyvinylidene difluoride (PVDF) pyroelectric temperature sensors [173]. The pyroelectric sensors allow the robotic arm to capture the temperature distribution of an item. Additionally, the robotic arm, equipped with T-TENG and L-TENG sensors, has 15 channels for data collection. When an item is grasped, the stimulus applied to the inner surface of each finger generates three-dimensional sensory information, including the contact position and area of the three contact surfaces. The deformation of each finger also provides information about the size and shape of the object, especially for asymmetrical objects. Through deep learning, hidden information such as the contact position, contact area, and bending angle is computed from the sensing signals, enabling the automatic recognition of 28 different object shapes with an accuracy of 97.143% (Figure 10). By utilizing IoT and AI analytics, a virtual store based on digital twins was successfully implemented, providing users with real-time feedback on product details.

6. Challenges and Prospects

In recent years, e-skin technology has made rapid progress in health monitoring, intelligent robotics, and human–computer interaction, yet its further development still faces multidimensional technical challenges. This section systematically analyzes the current core bottlenecks and proposes domain-specific solutions and development routes (Table 3).
One of the core issues facing e-skin technology is the challenge of data standardization and model generalizability. Different devices and sensors differ significantly in hardware architecture, sampling frequency (0.1–10 kHz), signal resolution (8–16 bit), and data format (CSV/JSON/binary) [174]. It has been shown that when models are trained across platforms whose sensors use different data protocols (e.g., IEEE 802.15.4 [175] vs. OSI-based stacks), the accuracy of a ResNet-50 model on a tactile recognition task decreases by 20% [176]. This fragmentation severely limits cross-platform data sharing and interoperability and hinders the creation of large-scale multimodal datasets. In addition, the lack of standardization directly affects the efficiency of deep learning models. E-skin data are usually high-dimensional, multimodal, and non-stationary, which makes it difficult for existing models to generalize effectively. In a pressure sensor data standardization experiment, training a convolutional neural network on datasets from different vendors produced cross-validation accuracies that varied by as much as 23% [177]. This discrepancy stems from the lack of uniform data preprocessing specifications, such as baseline correction methods and noise filtering thresholds. The problem is more pronounced in open environments, where sensor data may be disturbed by environmental noise or changes in user behavior, further reducing the adaptability of existing models. To solve such problems, standardized protocols covering the whole chain of sensor calibration, feature extraction, and data transmission need to be established. A dynamic adaptation framework based on meta-learning can effectively reduce the generalization error of classification models on uncalibrated devices by embedding domain-adaptive modules in the model architecture [176]. Meanwhile, the development of a smart sensor interface protocol compliant with the IEEE 21451-2 standard [178] will also effectively improve the efficiency of multi-source data fusion.
Another major bottleneck limiting AI-powered e-skins is the lack of high-quality labeled datasets. Multimodal sensors enable e-skin systems to collect large amounts of physiological and environmental data, but traditional supervised learning faces exponentially increasing labeling costs, which greatly limit the size of available datasets. As a result, lab-built datasets tend to contain only small samples. For surface EMG signals, for example, a single action recognition sample can require up to 5–7 min of expert annotation time, and existing open datasets such as Ninapro DB5 contain only 53 action classes from 10 subjects [179]. The lack of data leads to insufficient model training, negatively impacting performance and practical large-scale applications [176]. An emerging solution combines unsupervised representation learning with weakly supervised fine-tuning: contrastive learning is first used to extract generic features from on the order of 10⁶ unlabeled signals, and task-specific transfer is then achieved with a small amount of labeled data (<10%). This approach achieved 87.2% accuracy in a continuous gesture recognition task, a 19% improvement over purely supervised learning [180].
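The sketch below outlines this two-stage recipe under simple assumptions: a toy one-dimensional encoder is pretrained with an NT-Xent contrastive loss on two augmented views of unlabeled windows, and a small classification head is then fine-tuned on a limited labeled subset. The encoder, augmentation, and temperature are illustrative, not the setup of ref [180].

```python
# Two-stage sketch: contrastive pretraining, then fine-tuning with few labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Conv1d(1, 16, 7, padding=3), nn.ReLU(),
                        nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 32))

def augment(x):
    """Cheap augmentation for unlabeled windows: additive Gaussian jitter."""
    return x + 0.05 * torch.randn_like(x)

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss between two augmented views of the same batch."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)           # (2N, d)
    sim = z @ z.t() / temperature
    n = z1.size(0)
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Stage 1: self-supervised pretraining on unlabeled windows
unlabeled = torch.randn(32, 1, 256)
pretrain_loss = nt_xent(encoder(augment(unlabeled)), encoder(augment(unlabeled)))
pretrain_loss.backward()

# Stage 2: fine-tune a small head with the few labeled samples (<10% of the data)
head = nn.Linear(32, 10)
labeled, labels = torch.randn(8, 1, 256), torch.randint(0, 10, (8,))
finetune_loss = F.cross_entropy(head(encoder(labeled)), labels)
```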
Tasks involving larger models also require higher computational performance. In addition, real-time processing and privacy are unavoidable concerns. Emerging edge AI and cloud AI technologies can be further developed and integrated into e-skin systems to address these problems. Edge AI refers to deploying deep learning models and algorithms directly on the e-skin device. This approach allows real-time processing and decision-making on the device and effectively reduces both the required data transmission bandwidth and the latency incurred by communicating with the cloud. Moreover, because the data never leave the device, user privacy is protected to the greatest extent. Cloud AI, on the other hand, exploits the more powerful computational and storage capabilities of remote servers to analyze data from e-skins. By incorporating federated learning, a distributed learning framework, together with privacy-preserving techniques, it is possible to leverage larger shared datasets while ensuring privacy [181].
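As a minimal illustration of the federated learning idea mentioned above, the sketch below performs one round of federated averaging (FedAvg): each simulated client trains on local data that never leaves the device, and only the model weights are averaged on the server. The toy model and client count are assumptions.

```python
# One round of FedAvg: local training on-device, weight averaging on the server.
import copy
import torch
import torch.nn as nn

def local_update(model, data, labels, lr=1e-2, epochs=1):
    """One client's local training pass on data that never leaves the device."""
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.cross_entropy(model(data), labels).backward()
        opt.step()
    return model.state_dict()

def fed_avg(states):
    """Server-side aggregation: element-wise average of client weights."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(dim=0)
    return avg

global_model = nn.Linear(16, 4)              # toy classifier shared by all devices
clients = [(torch.randn(32, 16), torch.randint(0, 4, (32,))) for _ in range(3)]
client_states = [local_update(global_model, x, y) for x, y in clients]
global_model.load_state_dict(fed_avg(client_states))
```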
The interpretability of AI models has also emerged as a pressing issue. While many deep learning models in medical and health monitoring applications demonstrate impressive predictive accuracy, their “black box” nature offers no intuitive explanation of the decision-making process. This lack of transparency makes it difficult for users, particularly doctors and patients, to fully trust these technologies [182]. To address this, future research should emphasize the development of more transparent AI methods, such as integrating attention mechanisms, interpretable neural networks, or causal inference models, to make the decision-making process comprehensible [183,184]. Studies have shown that combining physically inspired attention mechanisms with deep learning can significantly improve interpretability [185]. For example, in a diabetic foot ulcer prediction model, introducing a spatio-temporal attention module based on the propagation law of bioimpedance enabled key pathological features to be localized with millimeter-level accuracy while providing decisions consistent with clinical experience [186].
Lastly, a relatively underexplored area is the enhancement of AI’s behavioral prediction capabilities. Currently, most AI applications focus on classifying or analyzing collected data and lack the ability to predict users’ future behavior [187]. For example, if a prosthetic device equipped with e-skin could predict the user’s next movement from historical behavioral patterns, it could preemptively adjust and prepare the prosthesis for the action. This capability has broad potential in applications such as posture prediction, motion assistance, and rehabilitation training, further elevating the intelligence of e-skin systems in human–computer interaction. Introducing neuromorphic computing into e-skin systems can break through this bottleneck: by using a spiking neural network to process millisecond-scale tactile timing signals, an ultra-low-latency response of 93 ms was achieved in a prosthetic grip prediction task, improving real-time performance by a factor of 5 compared with a traditional LSTM model [188].
Table 3. A summary of the challenges and prospects of e-skin.

Category | Challenges | Solutions and Prospects | Reference(s)
Data Standardization and Model Generalizability | Differences in architecture, sampling frequency, and data formats; decreased model accuracy in cross-platform training; high-dimensional, multimodal, non-stationary data | Establish standardized protocols for sensor calibration, feature extraction, and data transmission; implement dynamic adaptation frameworks based on meta-learning; promote the IEEE smart sensor interface protocol; develop dynamic optimization algorithms for multi-source data fusion | [174,176,177]
Lack of High-Quality Labeled Datasets | High labeling costs for large datasets; small sample sizes limit model training | Combine unsupervised learning with weakly supervised fine-tuning; combine contrastive learning with small labeled datasets for better performance; improve methods for sharing datasets | [176,179,180]
Computational Performance | Larger model parameters require high computational resources; need for real-time processing and privacy | Implement edge AI for real-time processing and decision-making on the e-skin; leverage cloud AI for large-dataset analysis with federated learning and privacy-preserving technologies; further develop edge AI for real-time performance; protect privacy through distributed learning and keeping data on the device | [181]
AI Model Interpretability | Lack of an intuitive explanation of the decision-making process | Develop transparent AI models with attention mechanisms, interpretable neural networks, or causal inference models; focus on AI transparency to build trust in medical and health applications; integrate biologically inspired attention mechanisms | [182,183,184,185,186]
Behavioral Prediction | Lack of ability to predict future user behavior, limiting AI capabilities in dynamic scenarios | Introduce neuromorphic computing to predict user actions from historical data, using spiking neural networks for low-latency responses; enhance prediction capabilities for prosthetics, posture prediction, and rehabilitation | [187,188]

Author Contributions

Writing—original draft preparation, Y.G.; writing—review and editing, X.S. and L.L.; supervision, L.P., Y.S. and W.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Key Research and Development Program of China under grant 2021YFA1401100 and the National Natural Science Foundation of China under grants 61825403 and 61921005.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hammock, M.L.; Chortos, A.; Tee, B.C.-K.; Tok, J.B.-H.; Bao, Z. 25th Anniversary Article: The Evolution of Electronic Skin (E-Skin): A Brief History, Design Considerations, and Recent Progress. Adv. Mater. 2013, 25, 5997–6038. [Google Scholar] [CrossRef] [PubMed]
  2. Ma, H.; Hou, J.; Xiao, X.; Wan, R.; Ge, G.; Zheng, W.; Chen, C.; Cao, J.; Wang, J.; Liu, C.; et al. Self-healing electrical bioadhesive interface for electrophysiology recording. J. Colloid Interface Sci. 2024, 654, 639–648. [Google Scholar] [CrossRef]
  3. Li, X.; Qi, B.; Wan, X.; Zhang, J.; Yang, W.; Xiao, Y.; Mao, F.; Cai, K.; Huang, L.; Zhou, J. Electret-based flexible pressure sensor for respiratory diseases auxiliary diagnosis system using machine learning technique. Nano Energy 2023, 114, 108652. [Google Scholar] [CrossRef]
  4. Yang, M.; Wang, Z.; Jia, Q.; Xiong, J.; Wang, H. Bio-Skin-Inspired Flexible Pressure Sensor Based on Carbonized Cotton Fabric for Human Activity Monitoring. Sensors 2024, 24, 4321. [Google Scholar] [CrossRef] [PubMed]
  5. Alex, M.; Khan, K.R.B.; Al-Othman, A.; Al-Sayah, M.H.; Al Nashash, H. MXene-Based Flexible Electrodes for Electrophysiological Monitoring. Sensors 2024, 24, 3260. [Google Scholar] [CrossRef]
  6. Xiao, X.; Xiao, X.; Zhou, Y.; Zhao, X.; Chen, G.; Liu, Z.; Wang, Z.; Lu, C.; Hu, M.; Nashalian, A.; et al. An ultrathin rechargeable solid-state zinc ion fiber battery for electronic textiles. Sci. Adv. 2021, 7, eabl3742. [Google Scholar] [CrossRef]
  7. Yu, J.; Ai, M.; Liu, C.; Bi, H.; Wu, X.; Ying, W.B.; Yu, Z. Cilia-Inspired Bionic Tactile E-Skin: Structure, Fabrication and Applications. Sensors 2025, 25, 76. [Google Scholar] [CrossRef] [PubMed]
  8. Ock, I.W.; Zhao, X.; Wan, X.; Zhou, Y.; Chen, G.; Chen, J. Boost the voltage of a magnetoelastic generator via tuning the magnetic induction layer resistance. Nano Energy 2023, 109, 108298. [Google Scholar] [CrossRef]
  9. Zhao, X.; Zhou, Y.; Xu, J.; Chen, G.; Fang, Y.; Tat, T.; Xiao, X.; Song, Y.; Li, S.; Chen, J. Soft fibers with magnetoelasticity for wearable electronics. Nat. Commun. 2021, 12, 6755. [Google Scholar] [CrossRef]
  10. Sun, Y.; Huang, J.; Cheng, Y.; Zhang, J.; Shi, Y.; Pan, L. High-accuracy dynamic gesture recognition: A universal and self-adaptive deep-learning-assisted system leveraging high-performance ionogels-based strain sensors. SmartMat 2024, 5, e1269. [Google Scholar] [CrossRef]
  11. Du, X.; Wang, H.; Wang, Y.; Cao, Z.; Yang, L.; Shi, X.; Zhang, X.; He, C.; Gu, X.; Liu, N. An Ultra-Conductive and Patternable 40 nm-Thick Polymer Film for Reliable Emotion Recognition. Adv. Mater. 2024, 36, 2403411. [Google Scholar] [CrossRef] [PubMed]
  12. Yang, H.; Xiao, X.; Manshaii, F.; Ren, D.; Li, X.; Yin, J.; Li, Q.; Zhang, X.; Xiong, S.; Xi, Y.; et al. A dual-symmetry triboelectric acoustic sensor with ultrahigh sensitivity and working bandwidth. Nano Energy 2024, 126, 109638. [Google Scholar] [CrossRef]
  13. Menghini, L.; Gianfranchi, E.; Cellini, N.; Patron, E.; Tagliabue, M.; Sarlo, M. Stressing the accuracy: Wrist-worn wearable sensor validation over different conditions. Psychophysiology 2019, 56, e13441. [Google Scholar] [CrossRef]
  14. Singhal, C.M.; Kaushik, V.; Awasthi, A.; Zalke, J.B.; Palekar, S.; Rewatkar, P.; Srivastava, S.K.; Kulkarni, M.B.; Bhaiyya, M.L. Deep Learning-Enhanced Portable Chemiluminescence Biosensor: 3D-Printed, Smartphone-Integrated Platform for Glucose Detection. Bioengineering 2025, 12, 119. [Google Scholar] [CrossRef]
  15. Quer, G.; Radin, J.M.; Gadaleta, M.; Baca-Motes, K.; Ariniello, L.; Ramos, E.; Kheterpal, V.; Topol, E.J.; Steinhubl, S.R. Wearable sensor data and self-reported symptoms for COVID-19 detection. Nat. Med. 2021, 27, 73–77. [Google Scholar] [CrossRef] [PubMed]
  16. Massari, L.; Fransvea, G.; D’Abbraccio, J.; Filosa, M.; Terruso, G.; Aliperta, A.; D’Alesio, G.; Zaltieri, M.; Schena, E.; Palermo, E.; et al. Functional mimicry of Ruffini receptors with fibre Bragg gratings and deep neural networks enables a bio-inspired large-area tactile-sensitive skin. Nat. Mach. Intell. 2022, 4, 425–435. [Google Scholar] [CrossRef]
  17. Cao, L.; Ye, C.; Zhang, H.; Yang, S.; Shan, Y.; Lv, Z.; Ren, J.; Ling, S. An Artificial Motion and Tactile Receptor Constructed by Hyperelastic Double Physically Cross-Linked Silk Fibroin Ionoelastomer. Adv. Funct. Mater. 2023, 33, 2301404. [Google Scholar] [CrossRef]
  18. Li, G.; Zhu, R. A Multisensory Tactile System for Robotic Hands to Recognize Objects. Adv. Mater. Technol. 2019, 4, 1900602. [Google Scholar] [CrossRef]
  19. Hughes, J.; Spielberg, A.; Chounlakone, M.; Chang, G.; Matusik, W.; Rus, D. A Simple, Inexpensive, Wearable Glove with Hybrid Resistive-Pressure Sensors for Computational Sensing, Proprioception, and Task Identification. Adv. Intell. Syst. 2020, 2, 2000002. [Google Scholar] [CrossRef]
  20. Qiu, Y.; Sun, S.; Wang, X.; Shi, K.; Wang, Z.; Ma, X.; Zhang, W.; Bao, G.; Tian, Y.; Zhang, Z.; et al. Nondestructive identification of softness via bioinspired multisensory electronic skins integrated on a robotic hand. NPJ Flex. Electron. 2022, 6, 45. [Google Scholar] [CrossRef]
  21. Yang, Q.; Jin, W.; Zhang, Q.; Wei, Y.; Guo, Z.; Li, X.; Yang, Y.; Luo, Q.; Tian, H.; Ren, T.-L. Mixed-modality speech recognition and interaction using a wearable artificial throat. Nat. Mach. Intell. 2023, 5, 169–180. [Google Scholar] [CrossRef]
  22. Luo, Y.; Li, Y.; Sharma, P.; Shou, W.; Wu, K.; Foshey, M.; Li, B.; Palacios, T.; Torralba, A.; Matusik, W. Learning human–environment interactions using conformal tactile textiles. Nat. Electron. 2021, 4, 193–201. [Google Scholar] [CrossRef]
  23. Xu, C.; Solomon, S.A.; Gao, W. Artificial intelligence-powered electronic skin. Nat. Mach. Intell. 2023, 5, 1344–1355. [Google Scholar] [CrossRef]
  24. Zhuang, F.; Qi, Z.; Duan, K.; Xi, D.; Zhu, Y.; Zhu, H.; Xiong, H.; He, Q. A Comprehensive Survey on Transfer Learning. Proc. IEEE 2021, 109, 43–76. [Google Scholar] [CrossRef]
  25. Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A.A.; Veness, J.; Bellemare, M.G.; Graves, A.; Riedmiller, M.; Fidjeland, A.K.; Ostrovski, G.; et al. Human-level control through deep reinforcement learning. Nature 2015, 518, 529–533. [Google Scholar] [CrossRef]
  26. Liu, Y.; Cao, J.; Li, B.; Hu, W.; Ding, J.; Li, L.; Maybank, S. Cross-Architecture Knowledge Distillation. Int. J. Comput. Vis. 2024, 132, 2798–2824. [Google Scholar] [CrossRef]
  27. Shulaker, M.M.; Hills, G.; Park, R.S.; Howe, R.T.; Saraswat, K.; Wong, H.S.P.; Mitra, S. Three-dimensional integration of nanotechnologies for computing and data storage on a single chip. Nature 2017, 547, 74–78. [Google Scholar] [CrossRef]
  28. Fu, X.; Cheng, W.; Wan, G.; Yang, Z.; Tee, B.C.K. Toward an AI Era: Advances in Electronic Skins. Chem. Rev. 2024, 124, 9899–9948. [Google Scholar] [CrossRef]
  29. McMahan, H.B.; Moore, E.; Ramage, D.; Hampson, S.; Arcas, B.A.Y. Communication-Efficient Learning of Deep Networks from Decentralized Data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS), Fort Lauderdale, FL, USA, 20–22 April 2017; pp. 1273–1282. [Google Scholar]
  30. Zhang, Y.; Lu, Q.; He, J.; Huo, Z.; Zhou, R.; Han, X.; Jia, M.; Pan, C.; Wang, Z.L.; Zhai, J. Localizing strain via micro-cage structure for stretchable pressure sensor arrays with ultralow spatial crosstalk. Nat. Commun. 2023, 14, 1252. [Google Scholar] [CrossRef]
  31. Liu, Z.; Tian, B.; Zhang, B.; Liu, J.; Zhang, Z.; Wang, S.; Luo, Y.; Zhao, L.; Shi, P.; Lin, Q.; et al. A thin-film temperature sensor based on a flexible electrode and substrate. Microsyst. Nanoeng. 2021, 7, 42. [Google Scholar] [CrossRef]
  32. Song, Y.; Ren, W.; Zhang, Y.; Liu, Q.; Peng, Z.; Wu, X.; Wang, Z. Synergetic Monitoring of both Physiological Pressure and Epidermal Biopotential Based on a Simplified on-Skin-Printed Sensor Modality. Small 2023, 19, 2303301. [Google Scholar] [CrossRef]
  33. Jiang, C.; Zhang, Z.; Pan, J.; Wang, Y.; Zhang, L.; Tong, L. Finger-Skin-Inspired Flexible Optical Sensor for Force Sensing and Slip Detection in Robotic Grasping. Adv. Mater. Technol. 2021, 6, 2100285. [Google Scholar] [CrossRef]
  34. Shrestha, K.; Pradhan, G.B.; Bhatta, T.; Sharma, S.; Lee, S.; Song, H.; Jeong, S.; Park, J.Y. Intermediate nanofibrous charge trapping layer-based wearable triboelectric self-powered sensor for human activity recognition and user identification. Nano Energy 2023, 108, 108180. [Google Scholar] [CrossRef]
  35. Xin, M.; Yu, T.; Jiang, Y.; Tao, R.; Li, J.; Ran, F.; Zhu, T.; Huang, J.; Zhang, J.; Zhang, J.-H.; et al. Multi-vital on-skin optoelectronic biosensor for assessing regional tissue hemodynamics. SmartMat 2023, 4, e1157. [Google Scholar] [CrossRef]
  36. Liu, J.; Chen, J.; Dai, F.; Zhao, J.; Li, S.; Shi, Y.; Li, W.; Geng, L.; Ye, M.; Chen, X.; et al. Wearable five-finger keyboardless input system based on silk fibroin electronic skin. Nano Energy 2022, 103, 107764. [Google Scholar] [CrossRef]
  37. Zhi, C.; Shi, S.; Zhang, S.; Si, Y.; Yang, J.; Meng, S.; Fei, B.; Hu, J. Bioinspired All-Fibrous Directional Moisture-Wicking Electronic Skins for Biomechanical Energy Harvesting and All-Range Health Sensing. Nano-Micro Lett. 2023, 15, 60. [Google Scholar] [CrossRef]
  38. Yang, X.; Chen, W.; Fan, Q.; Chen, J.; Chen, Y.; Lai, F.; Liu, H. Electronic Skin for Health Monitoring Systems: Properties, Functions, and Applications. Adv. Mater. 2024, 36, 2402542. [Google Scholar] [CrossRef] [PubMed]
  39. Shu, S.; Wang, Z.; Chen, P.; Zhong, J.; Tang, W.; Wang, Z.L. Machine-Learning Assisted Electronic Skins Capable of Proprioception and Exteroception in Soft Robotics. Adv. Mater. 2023, 35, 2211385. [Google Scholar] [CrossRef]
  40. Liu, F.; Deswal, S.; Christou, A.; Shojaei Baghini, M.; Chirila, R.; Shakthivel, D.; Chakraborty, M.; Dahiya, R. Printed synaptic transistor–based electronic skin for robots to feel and learn. Sci. Robot. 2022, 7, eabl7286. [Google Scholar] [CrossRef]
  41. Zhang, Z.; Xu, Z.; Emu, L.; Wei, P.; Chen, S.; Zhai, Z.; Kong, L.; Wang, Y.; Jiang, H. Active mechanical haptics with high-fidelity perceptions for immersive virtual reality. Nat. Mach. Intell. 2023, 5, 643–655. [Google Scholar] [CrossRef]
  42. Yao, K.; Zhou, J.; Huang, Q.; Wu, M.; Yiu, C.K.; Li, J.; Huang, X.; Li, D.; Su, J.; Hou, S.; et al. Encoding of tactile information in hand via skin-integrated wireless haptic interface. Nat. Mach. Intell. 2022, 4, 893–903. [Google Scholar] [CrossRef]
  43. Yang, Y.; Yang, Y.; Cao, Y.; Wang, X.; Chen, Y.; Liu, H.; Gao, Y.; Wang, J.; Liu, C.; Wang, W.; et al. Anti-freezing, resilient and tough hydrogels for sensitive and large-range strain and pressure sensors. Chem. Eng. J. 2021, 403, 126431. [Google Scholar] [CrossRef]
  44. Ma, Z.; Xiang, X.; Shao, L.; Zhang, Y.; Gu, J. Multifunctional Wearable Silver Nanowire Decorated Leather Nanocomposites for Joule Heating, Electromagnetic Interference Shielding and Piezoresistive Sensing. Angew. Chem. Int. Ed. 2022, 61, e202200705. [Google Scholar] [CrossRef] [PubMed]
  45. Song, Z.; Yin, J.; Wang, Z.; Lu, C.; Yang, Z.; Zhao, Z.; Lin, Z.; Wang, J.; Wu, C.; Cheng, J.; et al. A flexible triboelectric tactile sensor for simultaneous material and texture recognition. Nano Energy 2022, 93, 106798. [Google Scholar] [CrossRef]
  46. Luo, Y.; Abidian, M.R.; Ahn, J.-H.; Akinwande, D.; Andrews, A.M.; Antonietti, M.; Bao, Z.; Berggren, M.; Berkey, C.A.; Bettinger, C.J.; et al. Technology Roadmap for Flexible Sensors. ACS Nano 2023, 17, 5211–5295. [Google Scholar] [CrossRef]
  47. Sun, X.; Guo, X.; Gao, J.; Wu, J.; Huang, F.; Zhang, J.-H.; Huang, F.; Lu, X.; Shi, Y.; Pan, L. E-Skin and Its Advanced Applications in Ubiquitous Health Monitoring. Biomedicines 2024, 12, 2307. [Google Scholar] [CrossRef] [PubMed]
  48. Zhang, J.-H.; Li, Z.; Xu, J.; Li, J.; Yan, K.; Cheng, W.; Xin, M.; Zhu, T.; Du, J.; Chen, S.; et al. Versatile self-assembled electrospun micropyramid arrays for high-performance on-skin devices with minimal sensory interference. Nat. Commun. 2022, 13, 5839. [Google Scholar] [CrossRef]
  49. Qian, W.; Guo, C.; Dan, H.; Zhao, H.; Wang, J.; Bowen, C.R.; Yang, Y. Temperature-Enhanced Flexo-Photovoltaic Coupled Nanogenerator for Harvesting Vibration and Light Energies. ACS Energy Lett. 2024, 9, 1907–1914. [Google Scholar] [CrossRef]
  50. Minhas, J.Z.; Hasan, M.A.M.; Yang, Y. Ferroelectric Materials Based Coupled Nanogenerators. Nanoenergy Adv. 2021, 1, 131–180. [Google Scholar] [CrossRef]
  51. Qian, W.; Wu, H.; Yang, Y. Ferroelectric BaTiO3 Based Multi-Effects Coupled Materials and Devices. Adv. Electron. Mater. 2022, 8, 2200190. [Google Scholar] [CrossRef]
  52. Li, T.; Qu, M.; Carlos, C.; Gu, L.; Jin, F.; Yuan, T.; Wu, X.; Xiao, J.; Wang, T.; Dong, W.; et al. High-Performance Poly(vinylidene difluoride)/Dopamine Core/Shell Piezoelectric Nanofiber and Its Application for Biomedical Sensors. Adv. Mater. 2021, 33, 2006093. [Google Scholar] [CrossRef] [PubMed]
  53. Costa, C.M.; Cardoso, V.F.; Martins, P.; Correia, D.M.; Gonçalves, R.; Costa, P.; Correia, V.; Ribeiro, C.; Fernandes, M.M.; Martins, P.M.; et al. Smart and Multifunctional Materials Based on Electroactive Poly(vinylidene fluoride): Recent Advances and Opportunities in Sensors, Actuators, Energy, Environmental, and Biomedical Applications. Chem. Rev. 2023, 123, 11392–11487. [Google Scholar] [CrossRef] [PubMed]
  54. Bai, Y.; Zheng, X.; Zhong, X.; Cui, Q.; Zhang, S.; Wen, X.; Heng, B.C.; He, S.; Shen, Y.; Zhang, J.; et al. Manipulation of Heterogeneous Surface Electric Potential Promotes Osteogenesis by Strengthening RGD Peptide Binding and Cellular Mechanosensing. Adv. Mater. 2023, 35, 2209769. [Google Scholar] [CrossRef]
  55. Sundaram, S.; Kellnhofer, P.; Li, Y.; Zhu, J.-Y.; Torralba, A.; Matusik, W. Learning the signatures of the human grasp using a scalable tactile glove. Nature 2019, 569, 698–702. [Google Scholar] [CrossRef]
  56. Guo, H.; Tan, Y.J.; Chen, G.; Wang, Z.; Susanto, G.J.; See, H.H.; Yang, Z.; Lim, Z.W.; Yang, L.; Tee, B.C.K. Artificially innervated self-healing foams as synthetic piezo-impedance sensor skins. Nat. Commun. 2020, 11, 5747. [Google Scholar] [CrossRef]
  57. Zhu, P.; Li, Z.; Pang, J.; He, P.; Zhang, S. Latest developments and trends in electronic skin devices. Soft Sci. 2024, 4, 17. [Google Scholar] [CrossRef]
  58. Kumaresan, Y.; Ozioko, O.; Dahiya, R. Multifunctional Electronic Skin With a Stack of Temperature and Pressure Sensor Arrays. IEEE Sens. J. 2021, 21, 26243–26251. [Google Scholar] [CrossRef]
  59. Zhao, X.; Long, Y.; Yang, T.; Li, J.; Zhu, H. Simultaneous High Sensitivity Sensing of Temperature and Humidity with Graphene Woven Fabrics. ACS Appl. Mater. Interfaces 2017, 9, 30171–30176. [Google Scholar] [CrossRef] [PubMed]
  60. Roy, S.; Deo, K.A.; Lee, H.P.; Soukar, J.; Namkoong, M.; Tian, L.; Jaiswal, A.; Gaharwar, A.K. 3D Printed Electronic Skin for Strain, Pressure and Temperature Sensing. Adv. Funct. Mater. 2024, 34, 2313575. [Google Scholar] [CrossRef]
  61. You, I.; Mackanic, D.G.; Matsuhisa, N.; Kang, J.; Kwon, J.; Beker, L.; Mun, J.; Suh, W.; Kim, T.Y.; Tok, J.B.-H.; et al. Artificial multimodal receptors based on ion relaxation dynamics. Science 2020, 370, 961–965. [Google Scholar] [CrossRef]
  62. Shin, J.; Jeong, B.; Kim, J.; Nam, V.B.; Yoon, Y.; Jung, J.; Hong, S.; Lee, H.; Eom, H.; Yeo, J.; et al. Sensitive Wearable Temperature Sensor with Seamless Monolithic Integration. Adv. Mater. 2020, 32, 1905527. [Google Scholar] [CrossRef] [PubMed]
  63. Xin, Y.Y.; Zhou, J.; Lubineau, G. A highly stretchable strain-insensitive temperature sensor exploits the Seebeck effect in nanoparticle-based printed circuits. J. Mater. Chem. A 2019, 7, 24493–24501. [Google Scholar] [CrossRef]
  64. Ren, H.; Li, W.; Li, H.; Ding, Y.; Li, J.; Feng, Y.; Su, Z.; Zhang, X.; Jiang, L.; Liu, H.; et al. Jellyfish-Inspired High-Sensitivity Pressure-Temperature Sensor. Adv. Funct. Mater. 2024, 2417715. [Google Scholar] [CrossRef]
  65. Guo, X.; Niu, Y.; Yin, Z.; Wang, D.; Liu, L.; Tang, Y.; Li, X.; Zhang, Y.; Li, Y.; Zhang, T.; et al. Bionic Microstructure-Inspired Dual-Mode Flexible Sensor with Photothermal Effect for Ultrasensitive Temperature and Strain Monitoring. Adv. Mater. Technol. 2024, 9, 2400701. [Google Scholar] [CrossRef]
  66. Wang, J.; Wang, C.; Cai, P.; Luo, Y.; Cui, Z.; Loh, X.J.; Chen, X. Artificial Sense Technology: Emulating and Extending Biological Senses. ACS Nano 2021, 15, 18671–18678. [Google Scholar] [CrossRef]
  67. Kim, D.-H.; Lu, N.; Ma, R.; Kim, Y.-S.; Kim, R.-H.; Wang, S.; Wu, J.; Won, S.M.; Tao, H.; Islam, A.; et al. Epidermal Electronics. Science 2011, 333, 838–843. [Google Scholar] [CrossRef]
  68. Zhang, L.; Kumar, K.S.; He, H.; Cai, C.J.; He, X.; Gao, H.; Yue, S.; Li, C.; Seet, R.C.-S.; Ren, H.; et al. Fully organic compliant dry electrodes self-adhesive to skin for long-term motion-robust epidermal biopotential monitoring. Nat. Commun. 2020, 11, 4683. [Google Scholar] [CrossRef]
  69. Li, Y.; Matsumura, G.; Xuan, Y.; Honda, S.; Takei, K. Stretchable Electronic Skin using Laser-Induced Graphene and Liquid Metal with an Action Recognition System Powered by Machine Learning. Adv. Funct. Mater. 2024, 34, 2313824. [Google Scholar] [CrossRef]
  70. Tang, X.; Yang, J.; Luo, J.; Cheng, G.; Sun, B.; Zhou, Z.; Zhang, P.; Wei, D. A graphene flexible pressure sensor based on a fabric-like groove structure for high-resolution tactile imaging. Chem. Eng. J. 2024, 495, 153281. [Google Scholar] [CrossRef]
  71. Zhao, C.; Fang, Y.; Chen, H.; Zhang, S.; Wan, Y.; Riaz, M.S.; Zhang, Z.; Dong, W.; Diao, L.; Ren, D.; et al. Ultrathin Mo2S3 Nanowire Network for High-Sensitivity Breathable Piezoresistive Electronic Skins. ACS Nano 2023, 17, 4862–4870. [Google Scholar] [CrossRef]
  72. Tian, Q.; Zhao, H.; Wang, X.; Jiang, Y.; Zhu, M.; Yelemulati, H.; Xie, R.; Li, Q.; Su, R.; Cao, Z.; et al. Hairy-Skin-Adaptive Viscoelastic Dry Electrodes for Long-Term Electrophysiological Monitoring. Adv. Mater. 2023, 35, 2211236. [Google Scholar] [CrossRef]
  73. Shin, J.H.; Choi, J.Y.; June, K.; Choi, H.; Kim, T.-i. Polymeric Conductive Adhesive-Based Ultrathin Epidermal Electrodes for Long-Term Monitoring of Electrophysiological Signals. Adv. Mater. 2024, 36, 2313157. [Google Scholar] [CrossRef] [PubMed]
  74. Yang, G.; Lan, Z.; Gong, H.; Wen, J.; Pang, B.; Qiu, Y.; Zhang, Y.; Guo, W.; Bu, T.; Xie, B.; et al. A Nepenthes-Inspired Hydrogel Hybrid System for Sweat-Wicking Electrophysiological Signal Recording during Exercises. Adv. Funct. Mater. 2024, 2417841. [Google Scholar] [CrossRef]
  75. Moin, A.; Zhou, A.; Rahimi, A.; Menon, A.; Benatti, S.; Alexandrov, G.; Tamakloe, S.; Ting, J.; Yamamoto, N.; Khan, Y.; et al. A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition. Nat. Electron. 2021, 4, 54–63. [Google Scholar] [CrossRef]
  76. Resende, S.; Fernandes, J.; Sousa, P.C.; Calaza, C.; Frasco, M.F.; Freitas, P.P.; Goreti, F.; Sales, M. Fabrication and sensing properties of a molecularly imprinted polymer on a photonic PDMS substrate for the optical detection of C-reactive protein. Chem. Eng. J. 2024, 485, 149924. [Google Scholar] [CrossRef]
  77. Geng, S.; Guo, P.; Wang, J.; Zhang, Y.; Shi, Y.; Li, X.; Cao, M.; Song, Y.; Zhang, H.; Zhang, Z.; et al. Ultrasensitive Optical Detection and Elimination of Residual Microtumors with a Postoperative Implantable Hydrogel Sensor for Preventing Cancer Recurrence. Adv. Mater. 2024, 36, 2307923. [Google Scholar] [CrossRef]
  78. Naithani, S.; Heena; Sharma, P.; Layek, S.; Thetiot, F.; Goswami, T.; Kumar, S. Nanoparticles and quantum dots as emerging optical sensing platforms for Ni(II) detection: Recent approaches and perspectives. Coord. Chem. Rev. 2025, 524, 216331. [Google Scholar] [CrossRef]
  79. Lim, K.G.; Han, T.H.; Lee, T.W. Engineering electrodes and metal halide perovskite materials for flexible/stretchable perovskite solar cells and light-emitting diodes. Energy Environ. Sci. 2021, 14, 2009–2035. [Google Scholar] [CrossRef]
  80. Yeon, H.; Lee, H.; Kim, Y.; Lee, D.; Lee, Y.; Lee, J.-S.; Shin, J.; Choi, C.; Kang, J.-H.; Suh, J.M.; et al. Long-term reliable physical health monitoring by sweat pore–inspired perforated electronic skins. Sci. Adv. 2021, 7, eabg8459. [Google Scholar] [CrossRef]
  81. He, L.; Hu, G.; Jiang, J.; Wei, W.; Xue, X.; Fan, K.; Huang, H.; Shen, L. Highly Sensitive Tin-Lead Perovskite Photodetectors with Over 450 Days Stability Enabled by Synergistic Engineering for Pulse Oximetry System. Adv. Mater. 2023, 35, 2210016. [Google Scholar] [CrossRef]
  82. Shashikumar, S.P.; Shah, A.J.; Li, Q.; Clifford, G.D.; Nemati, S. A Deep Learning Approach to Monitoring and Detecting Atrial Fibrillation using Wearable Technology. In Proceedings of the 4th IEEE EMBS International Conference on Biomedical and Health Informatics (BHI), Orlando, FL, USA, 16–19 February 2017; pp. 141–144. [Google Scholar]
  83. Fliegans, L.; Troughton, J.; Divay, V.; Blayac, S.; Ramuz, M. Design, Fabrication and Characterisation of Multi-Parameter Optical Sensors Dedicated to E-Skin Applications. Sensors 2022, 23, 114. [Google Scholar] [CrossRef]
  84. Li, Y.; Zhao, M.; Yan, Y.; He, L.; Wang, Y.; Xiong, Z.; Wang, S.; Bai, Y.; Sun, F.; Lu, Q.; et al. Multifunctional biomimetic tactile system via a stick-slip sensing strategy for human–machine interactions. NPJ Flex. Electron. 2022, 6, 46. [Google Scholar] [CrossRef]
  85. Zhang, X.; Wang, Y.; Zhang, L.; Zhang, X.; Guo, Y.; Hao, B.; Qin, Y.; Li, Q.; Fan, L.; Dong, H.; et al. Facile preparation of porous MXene/cellulose nanofiber composite for highly-sensitive flexible piezoresistive sensors in e-skin. Chem. Eng. J. 2025, 505, 159369. [Google Scholar] [CrossRef]
  86. Cheng, H.; Zhang, M.; Shi, J.Q. A Survey on Deep Neural Network Pruning: Taxonomy, Comparison, Analysis, and Recommendations. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 10558–10578. [Google Scholar] [CrossRef] [PubMed]
  87. Ye, G.; Wu, Q.; Chen, Y.; Wang, X.; Xiang, Z.; Duan, J.; Wan, Y.; Yang, P. Bimodal Coupling Haptic Perceptron for Accurate Contactless Gesture Perception and Material Identification. Adv. Fiber Mater. 2024, 6, 1874–1886. [Google Scholar] [CrossRef]
  88. Nittala, A.S.; Steimle, J. Next Steps in Epidermal Computing: Opportunities and Challenges for Soft On-Skin Devices. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 30 April–5 May 2022; p. 389. [Google Scholar]
  89. Zhou, Y.; Fu, J.; Chen, Z.; Zhuge, F.; Wang, Y.; Yan, J.; Ma, S.; Xu, L.; Yuan, H.; Chan, M.; et al. Computational event-driven vision sensors for in-sensor spiking neural networks. Nat. Electron. 2023, 6, 870–878. [Google Scholar] [CrossRef]
  90. Su, X.; Zhang, B.; Liang, C.; Tian, M.; Zhang, T.; Bian, Z.; Miao, J.; Yang, Q.; Xu, Y.; Yu, B.; et al. Integrating Image Perception and Time-to-First-Spike Coding in MoS2 Phototransistors for Spiking Neural Network. Adv. Funct. Mater. 2024, 34, 2315323. [Google Scholar] [CrossRef]
  91. Banabilah, S.; Aloqaily, M.; Alsayed, E.; Malik, N.; Jararweh, Y. Federated learning review: Fundamentals, enabling technologies, and future applications. Inf. Process. Manag. 2022, 59, 103061. [Google Scholar] [CrossRef]
  92. Cui, T.; Qiao, Y.; Li, D.; Huang, X.; Yang, L.; Yan, A.; Chen, Z.; Xu, J.; Tan, X.; Jian, J.; et al. Multifunctional, breathable MXene-PU mesh electronic skin for wearable intelligent 12-lead ECG monitoring system. Chem. Eng. J. 2023, 455, 140690. [Google Scholar] [CrossRef]
  93. Hao, Y.; Yan, Q.; Liu, H.; He, X.; Zhang, P.; Qin, X.; Wang, R.; Sun, J.; Wang, L.; Cheng, Y. A Stretchable, Breathable, And Self-Adhesive Electronic Skin with Multimodal Sensing Capabilities for Human-Centered Healthcare. Adv. Funct. Mater. 2023, 33, 2303881. [Google Scholar] [CrossRef]
  94. Li, X.; Zhu, P.; Zhang, S.; Wang, X.; Luo, X.; Leng, Z.; Zhou, H.; Pan, Z.; Mao, Y. A Self-Supporting, Conductor-Exposing, Stretchable, Ultrathin, and Recyclable Kirigami-Structured Liquid Metal Paper for Multifunctional E-Skin. ACS Nano 2022, 16, 5909–5919. [Google Scholar] [CrossRef] [PubMed]
  95. de Lima e Silva, P.C.; Severiano, C.A.; Alves, M.A.; Silva, R.; Weiss Cohen, M.; Guimarães, F.G. Forecasting in non-stationary environments with fuzzy time series. Appl. Soft Comput. 2020, 97, 106825. [Google Scholar] [CrossRef]
  96. Ngiam, K.Y.; Khor, I.W. Big data and machine learning algorithms for health-care delivery. Lancet Oncol. 2019, 20, e262–e273. [Google Scholar] [CrossRef]
  97. Elmarakeby, H.A.; Hwang, J.; Arafeh, R.; Crowdis, J.; Gang, S.; Liu, D.; AlDubayan, S.H.; Salari, K.; Kregel, S.; Richter, C.; et al. Biologically informed deep neural network for prostate cancer discovery. Nature 2021, 598, 348–352. [Google Scholar] [CrossRef]
  98. Yuan, H.; Chan, S.; Creagh, A.P.; Tong, C.; Acquah, A.; Clifton, D.A.; Doherty, A. Self-supervised learning for human activity recognition using 700,000 person-days of wearable data. NPJ Digit. Med. 2024, 7, 91. [Google Scholar] [CrossRef]
  99. Khowaja, S.A.; Khuwaja, P.; Dharejo, F.A.; Raza, S.; Lee, I.H.; Naqvi, R.A.; Dev, K. ReFuSeAct: Representation fusion using self-supervised learning for activity recognition in next generation networks. Inf. Fusion 2024, 102, 102044. [Google Scholar] [CrossRef]
  100. Zhu, Z.; Su, P.; Zhong, S.; Huang, J.; Ottikkutti, S.; Tahmasebi, K.N.; Zou, Z.; Zheng, L.; Chen, D. Using a VAE-SOM architecture for anomaly detection of flexible sensors in limb prosthesis. J. Ind. Inf. Integr. 2023, 35, 100490. [Google Scholar] [CrossRef]
  101. Gao, W.; Emaminejad, S.; Nyein, H.Y.Y.; Challa, S.; Chen, K.; Peck, A.; Fahad, H.M.; Ota, H.; Shiraki, H.; Kiriya, D.; et al. Fully integrated wearable sensor arrays for multiplexed in situ perspiration analysis. Nature 2016, 529, 509–514. [Google Scholar] [CrossRef]
  102. Li, S.Y.; Yang, M.Y.; Wu, Y.Z.; Asghar, W.; Lu, X.J.; Zhang, H.F.; Cui, E.H.; Fang, Z.J.; Shang, J.; Liu, Y.W.; et al. A flexible dual-mode sensor with decoupled strain and temperature sensing for smart robots. Mater. Horiz. 2024, 11, 6361–6370. [Google Scholar] [CrossRef]
  103. Jeong, Y.-K.; Baek, K.-R. Asymmetric Gait Analysis Using a DTW Algorithm with Combined Gyroscope and Pressure Sensor. Sensors 2021, 21, 3750. [Google Scholar] [CrossRef]
  104. Zhang, H.; Chen, H.; Lee, J.-H.; Kim, E.; Chan, K.-Y.; Venkatesan, H.; Adegun, M.H.; Agbabiaka, O.G.; Shen, X.; Zheng, Q.; et al. Bioinspired Chromotropic Ionic Skin with In-Plane Strain/Temperature/Pressure Multimodal Sensing and Ultrahigh Stimuli Discriminability. Adv. Funct. Mater. 2022, 32, 2208362. [Google Scholar] [CrossRef]
  105. Liu, Y.; Pu, H.; Sun, D.-W. Efficient extraction of deep image features using convolutional neural network (CNN) for applications in detecting and analysing complex food matrices. Trends Food Sci. Technol. 2021, 113, 193–204. [Google Scholar] [CrossRef]
  106. Derry, A.; Krzywinski, M.; Altman, N. Convolutional neural networks. Nat. Methods 2023, 20, 1269–1270. [Google Scholar] [CrossRef]
  107. Ibrahim, R.; Shafiq, M.O. Explainable Convolutional Neural Networks: A Taxonomy, Review, and Future Directions. ACM Comput. Surv. 2023, 55, 206. [Google Scholar] [CrossRef]
  108. Van Houdt, G.; Mosquera, C.; Nápoles, G. A review on the long short-term memory model. Artif. Intell. Rev. 2020, 53, 5929–5955. [Google Scholar] [CrossRef]
  109. Cho, K.; Merrienboer, B.v.; Gülçehre, Ç.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, Doha, Qatar, 25–29 October 2014. [Google Scholar]
  110. Vaswani, A.; Shazeer, N.M.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is All you Need. In Proceedings of the Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  111. Ke, X.; Duan, Y.; Duan, Y.; Zhao, Z.; You, C.; Sun, T.; Gao, X.; Zhang, Z.; Xue, W.; Liu, X.; et al. Deep-learning-enhanced metal-organic framework e-skin for health monitoring. Device 2025, 100650. [Google Scholar] [CrossRef]
  112. Balestriero, R.; Ibrahim, M.; Sobal, V.; Morcos, A.S.; Shekhar, S.; Goldstein, T.; Bordes, F.; Bardes, A.; Mialon, G.; Tian, Y.; et al. A Cookbook of Self-Supervised Learning. arXiv 2023, arXiv:2304.12210. [Google Scholar]
  113. Weiss, K.; Khoshgoftaar, T.M.; Wang, D. A survey of transfer learning. J. Big Data 2016, 3, 9. [Google Scholar] [CrossRef]
  114. Lindeberg, T. Scale Invariant Feature Transform. Scholarpedia 2012, 7, 10491. [Google Scholar] [CrossRef]
  115. Hubel, D.H.; Wiesel, T.N. Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. J. Physiol. 1962, 160, 106–154. [Google Scholar] [CrossRef]
  116. Innamorati, C.; Ritschel, T.; Weyrich, T.; Mitra, N.J. Learning on the Edge: Investigating Boundary Filters in CNNs. Int. J. Comput. Vis. 2020, 128, 773–782. [Google Scholar] [CrossRef]
  117. Liu, C.; Tupin, F.; Gousseau, Y. Training CNNs on speckled optical dataset for edge detection in SAR images. ISPRS J. Photogramm. Remote Sens. 2020, 170, 88–102. [Google Scholar] [CrossRef]
  118. Choi, Y.R.; Kil, R.M. Face Video Retrieval Based on the Deep CNN With RBF Loss. IEEE Trans. Image Process. 2021, 30, 1015–1029. [Google Scholar] [CrossRef]
  119. Zhao, X.; Wang, L.; Zhang, Y.; Han, X.; Deveci, M.; Parmar, M. A review of convolutional neural networks in computer vision. Artif. Intell. Rev. 2024, 57, 99. [Google Scholar] [CrossRef]
  120. Wang, H.S.; Hong, S.K.; Han, J.H.; Jung, Y.H.; Jeong, H.K.; Im, T.H.; Jeong, C.K.; Lee, B.-Y.; Kim, G.; Yoo, C.D.; et al. Biomimetic and flexible piezoelectric mobile acoustic sensors with multiresonant ultrathin structures for machine learning biometrics. Sci. Adv. 2021, 7, eabe5683. [Google Scholar] [CrossRef]
  121. Zheng, X.T.; Yang, Z.; Sutarlie, L.; Thangaveloo, M.; Yu, Y.; Salleh, N.A.B.M.; Chin, J.S.; Xiong, Z.; Becker, D.L.; Loh, X.J.; et al. Battery-free and AI-enabled multiplexed sensor patches for wound monitoring. Sci. Adv. 2023, 9, eadg6670. [Google Scholar] [CrossRef]
  122. Suo, J.; Liu, Y.; Wu, C.; Chen, M.; Huang, Q.; Liu, Y.; Yao, K.; Chen, Y.; Pan, Q.; Chang, X.; et al. Wide-Bandwidth Nanocomposite-Sensor Integrated Smart Mask for Tracking Multiphase Respiratory Activities. Adv. Sci. 2022, 9, 2203565. [Google Scholar] [CrossRef]
  123. Lu, Y.; Tian, H.; Cheng, J.; Zhu, F.; Liu, B.; Wei, S.; Ji, L.; Wang, Z.L. Decoding lip language using triboelectric sensors with deep learning. Nat. Commun. 2022, 13, 1401. [Google Scholar] [CrossRef] [PubMed]
  124. Yuan, R.; Tiw, P.J.; Cai, L.; Yang, Z.; Liu, C.; Zhang, T.; Ge, C.; Huang, R.; Yang, Y. A neuromorphic physiological signal processing system based on VO2 memristor for next-generation human-machine interface. Nat. Commun. 2023, 14, 3695. [Google Scholar] [CrossRef]
  125. Yang, C.; Liu, H.; Ma, J.; Xu, M. Multimodal Flexible Sensor for the Detection of Pressing–Bending–Twisting Mechanical Deformations. ACS Appl. Mater. Interfaces 2025, 17, 2413–2424. [Google Scholar] [CrossRef]
  126. Pandey, R.K.; Chao, P.C.P. A Dual-Channel PPG Readout System With Motion-Tolerant Adaptability for OLED-OPD Sensors. IEEE Trans. Biomed. Circuits Syst. 2022, 16, 36–51. [Google Scholar] [CrossRef] [PubMed]
  127. Yang, C.; Zhang, D.; Wang, W.; Zhang, H.; Zhou, L. Multi-functional MXene/helical multi-walled carbon nanotubes flexible sensor for tire pressure detection and speech recognition enabled by machine learning. Chem. Eng. J. 2025, 505, 159157. [Google Scholar] [CrossRef]
  128. Zavala-Mondragón, L.A.; With, P.H.N.d.; Sommen, F.v.d. Image Noise Reduction Based on a Fixed Wavelet Frame and CNNs Applied to CT. IEEE Trans. Image Process. 2021, 30, 9386–9401. [Google Scholar] [CrossRef]
  129. Wu, S.; Zhang, Y.; He, C.; Luo, Z.; Chen, Z.; Ye, J. Self-Supervised Learning for Generic Raman Spectrum Denoising. Anal. Chem. 2024, 96, 17476–17485. [Google Scholar] [CrossRef] [PubMed]
  130. Zhou, Y.; Yu, K.; Wang, M.; Ma, Y.; Peng, Y.; Chen, Z.; Zhu, W.; Shi, F.; Chen, X. Speckle Noise Reduction for OCT Images Based on Image Style Transfer and Conditional GAN. IEEE J. Biomed. Health Inform. 2022, 26, 139–150. [Google Scholar] [CrossRef]
  131. Qu, X.; Liu, Z.; Tan, P.; Wang, C.; Liu, Y.; Feng, H.; Luo, D.; Li, Z.; Wang, Z.L. Artificial tactile perception smart finger for material identification based on triboelectric sensing. Sci. Adv. 2022, 8, eabq2521. [Google Scholar] [CrossRef] [PubMed]
  132. Wu, X.; Luo, X.; Song, Z.; Bai, Y.; Zhang, B.; Zhang, G. Ultra-Robust and Sensitive Flexible Strain Sensor for Real-Time and Wearable Sign Language Translation. Adv. Funct. Mater. 2023, 33, 2303504. [Google Scholar] [CrossRef]
  133. Chun, S.; Kim, J.-S.; Yoo, Y.; Choi, Y.; Jung, S.J.; Jang, D.; Lee, G.; Song, K.-I.; Nam, K.S.; Youn, I.; et al. An artificial neural tactile sensing system. Nat. Electron. 2021, 4, 429–438. [Google Scholar] [CrossRef]
  134. Lin, S.; Hu, S.; Song, W.; Gu, M.; Liu, J.; Song, J.; Liu, Z.; Li, Z.; Huang, K.; Wu, Y.; et al. An ultralight, flexible, and biocompatible all-fiber motion sensor for artificial intelligence wearable electronics. NPJ Flex. Electron. 2022, 6, 27. [Google Scholar] [CrossRef]
  135. Sun, Z.; Zhou, L.; Wang, W. Learning Time-Frequency Analysis in Wireless Sensor Networks. IEEE Internet Things J. 2018, 5, 3388–3396. [Google Scholar] [CrossRef]
  136. Li, L.; Cai, H.; Jiang, Q.; Ji, H. An empirical signal separation algorithm for multicomponent signals based on linear time-frequency analysis. Mech. Syst. Signal Process. 2019, 121, 791–809. [Google Scholar] [CrossRef]
  137. Faisal, K.N.; Mir, H.S.; Sharma, R.R. Human Activity Recognition From FMCW Radar Signals Utilizing Cross-Terms Free WVD. IEEE Internet Things J. 2024, 11, 14383–14394. [Google Scholar] [CrossRef]
  138. Yang, C.; Wang, H.; Yang, J.; Yao, H.; He, T.; Bai, J.; Guang, T.; Cheng, H.; Yan, J.; Qu, L. A Machine-Learning-Enhanced Simultaneous and Multimodal Sensor Based on Moist-Electric Powered Graphene Oxide. Adv. Mater. 2022, 34, 2205249. [Google Scholar] [CrossRef] [PubMed]
  139. Zheng, Y.; Yin, L.; Jayan, H.; Jiang, S.; El-Seedi, H.R.; Zou, X.; Guo, Z. In situ self-cleaning PAN/Cu2O@Ag/Au@Ag flexible SERS sensor coupled with chemometrics for quantitative detection of thiram residues on apples. Food Chem. 2025, 473, 143032. [Google Scholar] [CrossRef] [PubMed]
  140. Li, W.; Zou, K.; Guo, J.; Zhang, C.; Feng, J.; You, J.; Cheng, G.; Zhou, Q.; Kong, M.; Li, G.; et al. Integrated Fibrous Iontronic Pressure Sensors with High Sensitivity and Reliability for Human Plantar Pressure and Gait Analysis. ACS Nano 2024, 18, 14672–14684. [Google Scholar] [CrossRef]
  141. Wang, M.; Yan, Z.; Wang, T.; Cai, P.; Gao, S.; Zeng, Y.; Wan, C.; Wang, H.; Pan, L.; Yu, J.; et al. Gesture recognition using a bioinspired learning architecture that integrates visual data with somatosensory data from stretchable sensors. Nat. Electron. 2020, 3, 563–570. [Google Scholar] [CrossRef]
  142. Kim, K.K.; Kim, M.; Pyun, K.; Kim, J.; Min, J.; Koh, S.; Root, S.E.; Kim, J.; Nguyen, B.-N.T.; Nishio, Y.; et al. A substrate-less nanomesh receptor with meta-learning for rapid hand task recognition. Nat. Electron. 2023, 6, 64–75. [Google Scholar] [CrossRef]
  143. Chen, M.; Ouyang, J.; Jian, A.; Liu, J.; Li, P.; Hao, Y.; Gong, Y.; Hu, J.; Zhou, J.; Wang, R.; et al. Imperceptible, designable, and scalable braided electronic cord. Nat. Commun. 2022, 13, 7097. [Google Scholar] [CrossRef]
  144. Gong, S.; Zhang, X.; Nguyen, X.A.; Shi, Q.; Lin, F.; Chauhan, S.; Ge, Z.; Cheng, W. Hierarchically resistive skins as specific and multimetric on-throat wearable biosensors. Nat. Nanotechnol. 2023, 18, 889–897. [Google Scholar] [CrossRef]
  145. Liu, M.; Zhang, Y.; Wang, J.; Qin, N.; Yang, H.; Sun, K.; Hao, J.; Shu, L.; Liu, J.; Chen, Q.; et al. A star-nose-like tactile-olfactory bionic sensing array for robust object recognition in non-visual environments. Nat. Commun. 2022, 13, 79. [Google Scholar] [CrossRef]
  146. Zhang, X.-Y.; Liu, H.; Ma, X.-Y.; Wang, Z.-C.; Li, G.-P.; Han, L.; Sun, K.; Yang, Q.-S.; Ji, S.-R.; Yu, D.-L.; et al. Deep Learning Enabled High-Performance Speech Command Recognition on Graphene Flexible Microphones. ACS Appl. Electron. Mater. 2022, 4, 2306–2312. [Google Scholar] [CrossRef]
  147. Ge, C.; An, X.; He, X.; Duan, Z.; Chen, J.; Hu, P.; Zhao, J.; Wang, Z.; Zhang, J. Integrated Multifunctional Electronic Skins with Low-Coupling for Complicated and Accurate Human–Robot Collaboration. Adv. Sci. 2023, 10, 2301341. [Google Scholar] [CrossRef]
  148. Zhang, Y.; Mao, J.; Zheng, R.-K.; Zhang, J.; Wu, Y.; Wang, X.; Miao, K.; Yao, H.; Yang, L.; Zheng, H. Ferroelectric Polarization-Enhanced Performance of Flexible CuInP2S6 Piezoelectric Nanogenerator for Biomechanical Energy Harvesting and Voice Recognition Applications. Adv. Funct. Mater. 2023, 33, 2214745. [Google Scholar] [CrossRef]
  149. Kim, T.; Shin, Y.; Kang, K.; Kim, K.; Kim, G.; Byeon, Y.; Kim, H.; Gao, Y.; Lee, J.R.; Son, G.; et al. Ultrathin crystalline-silicon-based strain gauges with deep learning algorithms for silent speech interfaces. Nat. Commun. 2022, 13, 5815. [Google Scholar] [CrossRef]
  150. Jung, Y.H.; Pham, T.X.; Issa, D.; Wang, H.S.; Lee, J.H.; Chung, M.; Lee, B.-Y.; Kim, G.; Yoo, C.D.; Lee, K.J. Deep learning-based noise robust flexible piezoelectric acoustic sensors for speech processing. Nano Energy 2022, 101, 107610. [Google Scholar] [CrossRef]
  151. Guo, W.; Ma, Z.; Chen, Z.; Hua, H.; Wang, D.; Elhousseini Hilal, M.; Fu, Y.; Lu, P.; Lu, J.; Zhang, Y.; et al. Thin and soft Ti3C2Tx MXene sponge structure for highly sensitive pressure sensor assisted by deep learning. Chem. Eng. J. 2024, 485, 149659. [Google Scholar] [CrossRef]
  152. Zong, X.; Zhang, C.; Zhang, N.; Wang, Z.; Wang, J. Breathable, superhydrophobic and multifunctional Janus nanofibers for dual-mode passive thermal management/facial expression recognition with deep learning. Chem. Eng. J. 2025, 505, 159759. [Google Scholar] [CrossRef]
  153. Xie, Y.; Wu, X.; Huang, X.; Liang, Q.; Deng, S.; Wu, Z.; Yao, Y.; Lu, L. A Deep Learning-Enabled Skin-Inspired Pressure Sensor for Complicated Recognition Tasks with Ultralong Life. Research 2023, 6, 0157. [Google Scholar] [CrossRef]
  154. Zhu, J.; Zhang, X.; Wang, R.; Wang, M.; Chen, P.; Cheng, L.; Wu, Z.; Wang, Y.; Liu, Q.; Liu, M. A Heterogeneously Integrated Spiking Neuron Array for Multimode-Fused Perception and Object Classification. Adv. Mater. 2022, 34, 2200481. [Google Scholar] [CrossRef]
  155. Guo, L.; Wang, T.; Wu, Z.; Wang, J.; Wang, M.; Cui, Z.; Ji, S.; Cai, J.; Xu, C.; Chen, X. Portable Food-Freshness Prediction Platform Based on Colorimetric Barcode Combinatorics and Deep Convolutional Neural Networks. Adv. Mater. 2020, 32, 2004805. [Google Scholar] [CrossRef]
  156. Hu, H.; Huang, H.; Li, M.; Gao, X.; Yin, L.; Qi, R.; Wu, R.S.; Chen, X.; Ma, Y.; Shi, K.; et al. A wearable cardiac ultrasound imager. Nature 2023, 613, 667–675. [Google Scholar] [CrossRef]
  157. Jeong, H.; Yoo, J.-Y.; Ouyang, W.; Greane, A.L.J.X.; Wiebe, A.J.; Huang, I.; Lee, Y.J.; Lee, J.Y.; Kim, J.; Ni, X.; et al. Closed-loop network of skin-interfaced wireless devices for quantifying vocal fatigue and providing user feedback. Proc. Natl. Acad. Sci. USA 2023, 120, e2219394120. [Google Scholar] [CrossRef] [PubMed]
  158. Konstantinidis, D.; Iliakis, P.; Tatakis, F.; Thomopoulos, K.; Dimitriadis, K.; Tousoulis, D.; Tsioufis, K. Wearable blood pressure measurement devices and new approaches in hypertension management: The digital era. J. Hum. Hypertens. 2022, 36, 945–951. [Google Scholar] [CrossRef]
  159. Stergiou, G.S.; Alpert, B.; Mieke, S.; Asmar, R.; Atkins, N.; Eckert, S.; Frick, G.; Friedman, B.; Graßl, T.; Ichikawa, T.; et al. A Universal Standard for the Validation of Blood Pressure Measuring Devices. Hypertension 2018, 71, 368–374. [Google Scholar] [CrossRef] [PubMed]
  160. Kwon, Y.; Stafford, P.L.; Lim, D.C.; Park, S.; Kim, S.-H.; Berry, R.B.; Calhoun, D.A. Blood pressure monitoring in sleep: Time to wake up. Blood Press. Monit. 2020, 25, 61–68. [Google Scholar] [CrossRef]
  161. Wang, C.; Qi, B.; Lin, M.; Zhang, Z.; Makihata, M.; Liu, B.; Zhou, S.; Huang, Y.-h.; Hu, H.; Gu, Y.; et al. Continuous monitoring of deep-tissue haemodynamics with stretchable ultrasonic phased arrays. Nat. Biomed. Eng. 2021, 5, 749–758. [Google Scholar] [CrossRef] [PubMed]
  162. Kireev, D.; Sel, K.; Ibrahim, B.; Kumar, N.; Akbari, A.; Jafari, R.; Akinwande, D. Continuous cuffless monitoring of arterial blood pressure via graphene bioimpedance tattoos. Nat. Nanotechnol. 2022, 17, 864–870. [Google Scholar] [CrossRef]
  163. Xu, J.; Chen, W.; Liu, L.; Jiang, S.; Wang, H.; Zhang, J.; Gan, X.; Zhou, X.; Guo, T.; Wu, C.; et al. Intelligent recognition of human motion using an ingenious electronic skin based on metal fabric and natural triboelectrification. Sci. China Mater. 2024, 67, 887–897. [Google Scholar] [CrossRef]
  164. Choi, J.; Ghaffari, R.; Baker, L.B.; Rogers, J.A. Skin-interfaced systems for sweat collection and analytics. Sci. Adv. 2018, 4, eaar3921. [Google Scholar] [CrossRef]
  165. Jung, W.; Han, J.; Choi, J.-W.; Ahn, C.H. Point-of-care testing (POCT) diagnostic systems using microfluidic lab-on-a-chip technologies. Microelectron. Eng. 2015, 132, 46–57. [Google Scholar] [CrossRef]
  166. Chen, H.; Xu, S.; Liu, H.; Liu, C.; Liu, H.; Chen, J.; Huang, H.; Gong, H.; Wu, J.; Tang, H.; et al. Nanomesh-YOLO: Intelligent Colorimetry E-Skin Based on Nanomesh and Deep Learning Object Detection Algorithm. Adv. Funct. Mater. 2024, 34, 2309798. [Google Scholar] [CrossRef]
  167. Li, G.; Liu, S.; Wang, L.; Zhu, R. Skin-inspired quadruple tactile sensors integrated on a robot hand enable object recognition. Sci. Robot. 2020, 5, eabc8134. [Google Scholar] [CrossRef] [PubMed]
  168. Sun, F.; Fang, B.; Xue, H.; Liu, H.; Huang, H. A novel multi-modal tactile sensor design using thermochromic material. Sci. China Inf. Sci. 2019, 62, 214201. [Google Scholar] [CrossRef]
  169. Tchantchane, R.; Zhou, H.; Zhang, S.; Alici, G. A Review of Hand Gesture Recognition Systems Based on Noninvasive Wearable Sensors. Adv. Intell. Syst. 2023, 5, 2300207. [Google Scholar] [CrossRef]
  170. Gannouni, S.; Aledaily, A.; Belwafi, K.; Aboalsamh, H. Emotion detection using electroencephalography signals and a zero-time windowing-based epoch estimation and relevant electrode identification. Sci. Rep. 2021, 11, 7071. [Google Scholar] [CrossRef]
  171. Cross, M.P.; Hunter, J.F.; Smith, J.R.; Twidwell, R.E.; Pressman, S.D. Comparing, Differentiating, and Applying Affective Facial Coding Techniques for the Assessment of Positive Emotion. J. Posit. Psychol. 2023, 18, 420–438. [Google Scholar] [CrossRef]
  172. Zhang, S.; Zhao, X.; Tian, Q. Spontaneous Speech Emotion Recognition Using Multiscale Deep Convolutional LSTM. IEEE Trans. Affect. Comput. 2022, 13, 680–688. [Google Scholar] [CrossRef]
  173. Sun, Z.; Zhu, M.; Zhang, Z.; Chen, Z.; Shi, Q.; Shan, X.; Yeow, R.C.H.; Lee, C. Artificial Intelligence of Things (AIoT) Enabled Virtual Shop Applications Using Self-Powered Sensor Enhanced Soft Robotic Manipulator. Adv. Sci. 2021, 8, 2100230. [Google Scholar] [CrossRef]
  174. Zhu, Y.; Wang, M.; Yin, X.; Zhang, J.; Meijering, E.; Hu, J. Deep Learning in Diverse Intelligent Sensor Based Systems. Sensors 2023, 23, 62. [Google Scholar]
  175. IEEE 802.15.4-2003; IEEE Standard for Information Technology—Telecommunications and Information Exchange Between Systems—Local and Metropolitan Area Networks—Specific Requirements—Part 15.4: Wireless Medium Access Control (MAC) and Physical Layer (PHY) Specifications for Low-Rate Wireless Personal Area Networks (LR-WPANs). IEEE: Piscataway, NJ, USA, 2003.
  176. Hutabarat, Y.; Owaki, D.; Hayashibe, M. Recent Advances in Quantitative Gait Analysis Using Wearable Sensors: A Review. IEEE Sens. J. 2021, 21, 26470–26487. [Google Scholar] [CrossRef]
  177. Sun, Q.; Ge, Z. A Survey on Deep Learning for Data-Driven Soft Sensors. IEEE Trans. Ind. Inform. 2021, 17, 5853–5866. [Google Scholar] [CrossRef]
  178. ISO/IEC/IEEE 21451-2:2010; Information Technology—Smart Transducer Interface for Sensors and Actuators Part 2: Transducer to Microprocessor Communication Protocols and Transducer Electronic Data Sheet (TEDS) Formats. IEEE: Piscataway, NJ, USA, 2010.
  179. Xiong, B.; Chen, W.; Niu, Y.; Gan, Z.; Mao, G.; Xu, Y. A Global and Local Feature fused CNN architecture for the sEMG-based hand gesture recognition. Comput. Biol. Med. 2023, 166, 107497. [Google Scholar] [CrossRef] [PubMed]
  180. Li, G.; Tang, H.; Sun, Y.; Kong, J.; Jiang, G.; Jiang, D.; Tao, B.; Xu, S.; Liu, H. Hand gesture recognition based on convolution neural network. Clust. Comput. 2019, 22, 2719–2729. [Google Scholar] [CrossRef]
  181. Hu, K.; Gong, S.; Zhang, Q.; Seng, C.; Xia, M.; Jiang, S. An overview of implementing security and privacy in federated learning. Artif. Intell. Rev. 2024, 57, 204. [Google Scholar] [CrossRef]
  182. Lundberg, S.M.; Erion, G.; Chen, H.; DeGrave, A.; Prutkin, J.M.; Nair, B.; Katz, R.; Himmelfarb, J.; Bansal, N.; Lee, S.-I. From local explanations to global understanding with explainable AI for trees. Nat. Mach. Intell. 2020, 2, 56–67. [Google Scholar] [CrossRef]
  183. Dai, N.; Lei, I.M.; Li, Z.; Li, Y.; Fang, P.; Zhong, J. Recent advances in wearable electromechanical sensors—Moving towards machine learning-assisted wearable sensing systems. Nano Energy 2023, 105, 108041. [Google Scholar] [CrossRef]
  184. Saad, H.S.; Zaki, J.F.W.; Abdelsalam, M.M. Employing of machine learning and wearable devices in healthcare system: Tasks and challenges. Neural Comput. Appl. 2024, 36, 17829–17849. [Google Scholar] [CrossRef]
  185. Tang, Q.; Liang, J.; Zhu, F. A comparative review on multi-modal sensors fusion based on deep learning. Signal Process. 2023, 213, 109165. [Google Scholar] [CrossRef]
  186. Yap, M.H.; Hachiuma, R.; Alavi, A.; Brüngel, R.; Cassidy, B.; Goyal, M.; Zhu, H.; Rückert, J.; Olshansky, M.; Huang, X.; et al. Deep learning in diabetic foot ulcers detection: A comprehensive evaluation. Comput. Biol. Med. 2021, 135, 104596. [Google Scholar] [CrossRef]
  187. Xiao, X.; Yin, J.; Xu, J.; Tat, T.; Chen, J. Advances in Machine Learning for Wearable Sensors. ACS Nano 2024, 18, 22734–22751. [Google Scholar] [CrossRef]
  188. Yamazaki, K.; Vo-Ho, V.-K.; Bulsara, D.; Le, N. Spiking Neural Networks and Their Applications: A Review. Brain Sci. 2022, 12, 863. [Google Scholar] [CrossRef] [PubMed]
Figure 3. Temperature sensors based on bionic design. (a) Jellyfish-inspired sensor device schematic. Machine learning can be used to decouple temperature and pressure by analyzing capacitance and resistance signals under different conditions [64]. Copyright 2024, John Wiley and Sons. (b) Flowchart of DMSTS preparation based on centipede’s foot and schematic diagram of DMSTS bionic structure sensing layer [65]. Copyright 2024, John Wiley and Sons.
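As a minimal illustration of the temperature–pressure decoupling idea described in the Figure 3 caption, the following Python sketch fits a small neural regressor that maps a (capacitance, resistance) readout pair to temperature and pressure estimates. The synthetic calibration data, network size, and training settings are illustrative assumptions, not the model used in ref. [64].

```python
# Hypothetical sketch: decoupling temperature and pressure from a sensor's
# capacitance and resistance readouts with a small neural regressor.
# The synthetic data below only stands in for real calibration measurements.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic calibration set: inputs [capacitance, resistance], targets [temperature, pressure]
X = torch.rand(512, 2)                    # normalized sensor readouts (placeholder)
T = 20 + 30 * X[:, 1:2]                   # pretend temperature mostly follows resistance
P = 5 * X[:, 0:1] - 1 * X[:, 1:2]         # pretend pressure mixes both channels
Y = torch.cat([T, P], dim=1)

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for epoch in range(200):                  # short training loop for the toy data
    optimizer.zero_grad()
    loss = loss_fn(model(X), Y)
    loss.backward()
    optimizer.step()

# A new (capacitance, resistance) pair is mapped to decoupled (temperature, pressure)
print(model(torch.tensor([[0.3, 0.7]])))
```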
Figure 4. Design of the nepenthes-inspired hydrogel hybrid system [74]. (a) Schematic of the hydrogel system on human skin for ECG recording, with inset showing sweat-wicking NIH layer. (b) Exploded 3D model of the NIH hybrid system. (c) ECG signals from resting and exercising states displayed on the app. (d) Nepenthes-inspired microstructures of the hydrogel interface. (e,f) SEM image (e) and photograph (f) of nepenthes lip. (g) NIH network composition schematic. (h) Hydrogel/skin adhesion mechanism. (i) Nepenthes-inspired structure design of the hydrogel interface layer. α and β represent the cone angle of microgrooves and the wedge angle of microcolumns, respectively. (j) Electrical architecture of the NIH hybrid system. (k) Methylene blue droplets on the NIH layer (i) and their directional transport (ii). (l) System/skin coupling during running (i,ii) and hydrogel/electrode interface under bending (iii). Scale bars: 25 µm (e), 4 cm (f), 5 mm (k,l(iii)), 50 mm (l(i)), 10 mm (l(ii)). Copyright 2024, John Wiley and Sons.
Figure 5. Human activity recognition and user identification using the deep learning method [34]. (a) A 1D-CNN system architecture for activity recognition and user identification. Confusion matrices for (b) activity prediction (99% accuracy) and (c) user prediction (99% accuracy). Photographs of user 1 during (d) walking, (e) running, and (f) jumping, with insets showing correct identification and activity. (g) Photograph of the processing circuit and TENG sensors on the shoe insole for data collection. Copyright 2023, Elsevier.
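A minimal sketch of the kind of 1D-CNN classifier summarized in Figure 5 is given below, assuming a single-channel insole signal cut into fixed-length windows. The layer sizes, window length, and class count are illustrative placeholders, not the architecture of ref. [34].

```python
# Hypothetical 1D-CNN for window-wise classification of insole TENG signals.
# Input: (batch, channels=1, window_length); output: activity (or user) logits.
import torch
import torch.nn as nn

class TinyConv1DClassifier(nn.Module):
    def __init__(self, n_classes: int, window_len: int = 256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (window_len // 4), 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.head(self.features(x))

# Example: 3 activities (walking, running, jumping) from 256-sample windows
model = TinyConv1DClassifier(n_classes=3)
logits = model(torch.randn(8, 1, 256))    # batch of 8 windows
print(logits.shape)                        # torch.Size([8, 3])
```

The same backbone could be retrained with a different output dimension for user identification, which is how a single window-wise classifier can serve both tasks shown in the figure.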
Figure 6. Perspiration monitoring by the nanomesh colorimetric e-skin with a YOLOv3 deep learning object detection algorithm [166]. (a) Schematic diagram of the YOLOv3 backbone network, which uses three upsampling stages to output three feature maps: y1, y2, y3. (b) YOLOv3 training loss vs. epochs. (c) Confusion matrix for the 4 perspiration categories. (d) Images of perspiration categorization results. Copyright 2023, John Wiley and Sons.
Figure 7. Signal decoupling and simultaneous recognition model [45]. (a) Architecture of the decoupling and 1D-CNN-based recognition model for feature extraction and classification. (b) Sixteen standard objects from the cross-pairing of four materials (copper, cotton, resin, paper) and four textures. (c) Sample sensing signals and corresponding decoupled features. (d) Confusion matrix for material recognition (4 materials). (e) Confusion matrix for texture recognition (4 textures). (f) Confusion matrix for merged recognition of the 16 objects in (b). Copyright 2022, Elsevier.
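The merged material-and-texture recognition in Figure 7 can be pictured as one shared feature extractor feeding two classification heads, one per property. The sketch below is a generic illustration under that assumption; the trunk size, channel count, and heads are placeholders, not the decoupling network of ref. [45].

```python
# Hypothetical two-head classifier: shared 1D-CNN trunk, separate material/texture heads.
import torch
import torch.nn as nn

class TwoHeadRecognizer(nn.Module):
    def __init__(self, n_materials: int = 4, n_textures: int = 4):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(32), nn.Flatten(), nn.Linear(16 * 32, 64), nn.ReLU(),
        )
        self.material_head = nn.Linear(64, n_materials)
        self.texture_head = nn.Linear(64, n_textures)

    def forward(self, x):
        z = self.trunk(x)                  # shared features from the raw sensing signal
        return self.material_head(z), self.texture_head(z)

model = TwoHeadRecognizer()
mat_logits, tex_logits = model(torch.randn(2, 1, 500))
# Joint loss: sum of the two cross-entropies, one label per property
loss = nn.CrossEntropyLoss()(mat_logits, torch.tensor([0, 2])) + \
       nn.CrossEntropyLoss()(tex_logits, torch.tensor([1, 3]))
print(mat_logits.shape, tex_logits.shape, float(loss))
```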
Figure 8. Realization of hand gesture recognition by deep-learning-based algorithm [10]. (a) Process of hand gesture recognition with deep convolutional neural networks (DCNNs). (b) Three-dimensional plot of test accuracy vs. epochs and training ratios. (c) Accuracy rate transition with increasing epochs. (d) Loss rate transition with increasing epochs. (e) Confusion matrix for DCNNs. (f) Confusion matrix for support vector machines. (g) Confusion matrix for K-nearest neighbors. Copyright 2024, John Wiley and Sons.
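Figure 8 benchmarks a DCNN against support vector machines and K-nearest neighbors. The baseline side of such a comparison can be reproduced in outline with scikit-learn, as sketched below; the random feature vectors, class count, and classifier settings are placeholders, not the data or models of ref. [10].

```python
# Hypothetical baseline comparison for gesture features: SVM vs. k-NN,
# evaluated with the same confusion-matrix format used for the DCNN in Figure 8.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))            # placeholder gesture feature vectors
y = rng.integers(0, 6, size=600)          # 6 hypothetical gesture classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    # On random placeholder data the accuracy stays near chance; real strain
    # features would separate the gesture classes.
    print(name, accuracy_score(y_te, pred))
    print(confusion_matrix(y_te, pred))
```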
Figure 9. Facial EMG monitoring by PLPG and machine learning for emotion analysis [11]. (a) Main muscles for emotion expression. (b) PLPG with M-3 pattern electrodes for fEMG acquisition. (c,d) Representative fEMG signals and extracted integrated EMG for positive (c) and negative (d) emotions. (e) Machine learning flowchart for emotion classification. (f–h) Thermogram of fEMG correlation coefficients for positive (f), neutral (g), and negative (h) emotions, with classification labels in the 27th column. (i) Confusion matrix for classification accuracy. (j) LSTM identification results. Copyright 2024, John Wiley and Sons.
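A minimal sketch of LSTM-based sequence classification, in the spirit of the fEMG pipeline in Figure 9, is given below. The two-channel input, sequence length, hidden size, and the three emotion classes are assumptions for illustration, not the configuration of ref. [11].

```python
# Hypothetical LSTM classifier for two-channel fEMG sequences
# (positive / neutral / negative emotion classes).
import torch
import torch.nn as nn

class EmotionLSTM(nn.Module):
    def __init__(self, n_channels: int = 2, hidden: int = 64, n_classes: int = 3):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, time, channels)
        _, (h_n, _) = self.lstm(x)         # h_n: (1, batch, hidden)
        return self.fc(h_n[-1])            # classify from the last hidden state

model = EmotionLSTM()
logits = model(torch.randn(4, 300, 2))     # 4 sequences, 300 time steps, 2 fEMG channels
print(logits.shape)                        # torch.Size([4, 3])
```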
Figure 10. ML-enabled automatic recognition system for grasped objects [173]. (a) A 1D-CNN framework. (b) Fifteen-channel spectra from TENG system for 6 spherical and 3 oval objects. (c) Confusion map for spherical and oval objects. (d) Manipulator grasping 5 elongated objects vertically and horizontally. (e) Deformation and contact map of the manipulator with T-TENG patches; the five-pointed stars mark the contact positions on the T-TENG sensor patches integrated on the three pneumatic fingers. (f) t-SNE visualization framework. (g) t-SNE results for vertical and horizontal grasps. (h) Confusion map for 5 elongated objects at two grasping angles. Copyright 2023, John Wiley and Sons.
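The t-SNE projections in Figure 10f,g can be reproduced in outline with scikit-learn, as sketched below. The 15-channel feature vectors and the five object classes are random placeholders standing in for real T-TENG recordings, not the data of ref. [173].

```python
# Hypothetical t-SNE embedding of 15-channel TENG feature vectors for visualization.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 15))      # 200 grasps x 15 sensor channels (placeholder)
labels = rng.integers(0, 5, size=200)      # 5 hypothetical object classes

embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
print(embedding.shape)                     # (200, 2); plot colored by `labels` to inspect clusters
```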
Table 1. Comparison of different deep learning technologies.
Technology | Advantages | Disadvantages | Applications | Reference(s)
CNNs | Automatic extraction of spatial features; parameter sharing reduces computation; suitable for image/matrix data | Difficult to capture long-range dependencies; requires large amounts of labelled data | High classification accuracy for the analysis of pressure sensor arrays; enables tasks such as surface identification, feature extraction, and health detection | [105,106,107]
RNNs/LSTM/GRU | Strong temporal modeling capability; dynamically maintains contextual information; suitable for sequential data | Gradient vanishing/exploding (RNNs); slow training; difficult to parallelize | Real-time monitoring of physiological signal changes to assess health status; LSTM/GRU overcome the limitations of traditional RNNs on long sequences, improving accuracy and efficiency | [108,109]
Transformer | Efficient parallel computation; self-attention captures long-range dependencies; strong multimodal fusion | High consumption of computational resources; prone to overfitting on small datasets | Integrates data from different sensors to infer complex patterns; improves the processing of multimodal data | [110,111]
Self-supervised learning | No need for large amounts of labelled data; generic features learned through pre-training tasks | Performance is sensitive to the design of the pre-training task; transfer benefit depends on task relevance | Enables efficient training via designed pre-training tasks when labelled data are scarce; suited to small-sample learning, speeding up training | [112]
Transfer learning | Reduces data requirements; reuses pre-trained model knowledge | Depends on similarity between source and target tasks; risk of negative transfer | Cross-scenario transfer of health monitoring models; adaptation to differences between sensors | [113]
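To make the architectural contrasts in Table 1 concrete, the sketch below defines minimal CNN, LSTM, and Transformer-encoder modules operating on the same multichannel e-skin window. All tensor shapes and layer sizes are illustrative assumptions.

```python
# Hypothetical minimal backbones for one e-skin window of shape (batch, time=128, channels=8),
# illustrating the architectural families compared in Table 1.
import torch
import torch.nn as nn

x = torch.randn(4, 128, 8)                                 # batch of 4 windows

cnn = nn.Sequential(                                       # CNN: local spatial/temporal features
    nn.Conv1d(8, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 4),
)
print(cnn(x.transpose(1, 2)).shape)                        # Conv1d expects (batch, channels, time)

lstm = nn.LSTM(input_size=8, hidden_size=32, batch_first=True)   # RNN family: sequential context
out, _ = lstm(x)
print(out[:, -1].shape)                                    # last-step features, (4, 32)

encoder = nn.TransformerEncoder(                           # Transformer: self-attention, parallelizable
    nn.TransformerEncoderLayer(d_model=8, nhead=2, batch_first=True), num_layers=1
)
print(encoder(x).shape)                                    # (4, 128, 8)
```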
Table 2. Representative studies that used DL-powered electronic skin for various tasks.
Category | Targeted parameters | Number of sensor channels | Number of model parameters | DL model | Learning objective | Year | Reference
DL for data processing | Humidity, temperature, pressure, and UV | 4 | 10^5 | LSTM | Multi-signal decoupling | 2022 | [138]
 | Pressure, temperature | 2 | 10^5 | CNN | Temperature and pressure mapping; signal decoupling | 2024 | [64]
 | Ionic liquid, photovoltaic, and conductive fabric signals | 3 | 10^5 | Transformer | Multi-signal decoupling | 2024 | [125]
DL for healthcare | Directional airflow and air vibration during respiratory activity | 1 | 10^6 | CNN | Cough diagnosis | 2022 | [122]
 | Biomarkers of wound exudates | 5 | 10^6 | CNN | Wound healing monitoring | 2023 | [121]
 | EEG | 2 | 10^6 | LSTM | Epileptic seizure detection | 2023 | [124]
 | Friction from motion | 3 | 10^6 | CNN | Motion status monitoring | 2023 | [34]
 | Thiram residues | 1 | 10^4 | CNN | Food safety testing | 2025 | [139]
 | Plantar pressure distribution | 28 | 10^6 | CNN | Motion gait analysis | 2024 | [140]
DL for HMI | Stress on hand arrays | 548 | 10^7 | CNN | Recognition of grasped items | 2019 | [55]
 | Finger bending strain | 5 | 15,000 | CNN | Gesture recognition | 2020 | [141]
 | Finger bending strain | 5 | 10^6 | Transformer | Gesture recognition | 2022 | [142]
 | Finger bending strain | 10 | 10^5 | LSTM | Gesture recognition | 2022 | [143]
 | Laryngeal movement | 1 | 10^7 | CNN | Classification of voice and neck movements | 2023 | [144]
 | Hand pressure and ethanol gas concentration | 76 | 10^8 | CNN | Object recognition | 2022 | [145]
 | Esophageal muscle movement | 1 | 10^7 | CNN | Speech recognition | 2023 | [21]
 | Oral muscle movement | 1 | 10^5 | RNN | Lip-reading recognition | 2021 | [123]
 | Speaking voice waveforms | 1 | 10^6 | CNN | Speech recognition | 2022 | [146]
 | Humidity, proximity, pressure | 3 | 10^5 | LSTM | Object recognition | 2023 | [147]
 | Laryngeal movement | 1 | 10^6 | CNN | Speech recognition | 2023 | [148]
 | Hand tactile information | 2049 | 10^7 | CNN | Surface texture recognition | 2021 | [133]
 | Lip muscle strain | 8 | 10^7 | CNN | Silent speech recognition | 2022 | [149]
 | Acoustic oscillation | 7 | 10^6 | CNN | Speaker identification | 2022 | [150]
 | Facial muscle signals | 2 | 10^5 | RNN | Emotion recognition | 2024 | [11]
 | Muscle movement signals | 1 | 10^5 | RNN | Classification of pronunciation | 2024 | [151]
 | Facial muscle movement | 5 | 10^6 | CNN | Emotion recognition | 2025 | [152]
 | Finger bending strain | 5 | 10^5 | CNN | Gesture recognition | 2024 | [10]
 | Wrist rotation | 16 | 10^7 | CNN | Handwriting recognition | 2023 | [153]
 | Temperature, pressure | 9 | 10^6 | SNN | Object recognition | 2022 | [154]
 | Modulus of elasticity | 2 | 10^6 | CNN | Softness classification | 2022 | [20]
 | Ammonia | 60 | 10^5 | CNN | Food freshness monitoring | 2020 | [155]
 | Hand tactile information | 16 | 10^6 | CNN | Tactile mapping | 2022 | [16]
 | Strain on different parts of the body | 216 | 10^6 | CNN | Whole-body pose recognition | 2021 | [22]
 | Ultrasound images of the heart | 6 | 10^5 | CNN | Left-ventricular volume estimation | 2023 | [156]
 | Vocal dose | 1 | 10^5 | CNN | Vocal fatigue assessment | 2023 | [157]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
