
Article

A Fast Deep Learning ECG Sex Identifier Based on Wavelet RGB Image Classification

1 Department of Telecommunications, Faculty of Engineering, Fundación Universitaria Compensar, Bogota 111311, Colombia
2 Department of Electronics, Faculty of Engineering, Pontificia Universidad Javeriana, Bogota 110231, Colombia
* Author to whom correspondence should be addressed.
Submission received: 26 March 2023 / Revised: 12 May 2023 / Accepted: 12 May 2023 / Published: 29 May 2023
(This article belongs to the Special Issue Signal Processing for Data Mining)

Abstract:
Human sex recognition with electrocardiogram signals is an emerging area in machine learning, mostly oriented toward neural network approaches. It might be the beginning of a field of heart behavior analysis focused on sex. However, a person’s heartbeat changes during daily activities, which could compromise the classification. In this paper, with the intention of capturing heartbeat dynamics, we divided the heart rate into different intervals, creating a specialized identification model for each interval. The sex differentiation for each model was performed with a deep convolutional neural network from images that represented the RGB wavelet transformation of ECG pseudo-orthogonal X, Y, and Z signals, using sufficient samples to train the network. Our database included 202 people, with a female-to-male population ratio of 49.5–50.5% and an observation period of 24 h per person. As our main goal, we looked for periods of time during which the classification rate of sex recognition was higher and the process was faster; in fact, we identified intervals in which only one heartbeat was required. We found that for each heart rate interval, the best accuracy score varied depending on the number of heartbeats collected. Furthermore, our findings indicated that as the RR interval increased, fewer heartbeats were needed for analysis. On average, our proposed model reached an accuracy of 94.82% ± 1.96%. The findings of this investigation provide a heartbeat acquisition procedure for ECG sex recognition systems. In addition, our results encourage future research to include sex as a soft biometric characteristic in person identification scenarios and for cardiology studies, in which the detection of specific male or female anomalies could help autonomous learning machines move toward specialized health applications.

1. Introduction

Historically, humankind has settled into two general categories, male and female. This differentiation has been fundamental to the way society is consolidated today. For this reason, some fields of study have attempted to cover this distinction. For example, the terms gender and sex are closely related nowadays but differ according to their disciplinary perspective and are not interchangeable. Sex is a term that focuses on a person’s biology, relating to the physical or physiological distinction between male and female [1]. On the contrary, gender is a sociological variable that involves the cultural, behavioral, and psychological traits associated with one’s sex [2]. Although sex and gender are related to two different worldviews, it does not mean that these variables are independent or incompatible; there exist areas of overlap [3,4]. However, because of the physiological focus of this text, we prefer the use of the term sex in this manuscript.
The recognition of sex in the field of technology has been actively developed through computer vision focused on the body [5] and face [6] as regions of interest. Furthermore, data related to keystroke dynamics [7], ear shape [8,9], tweeting [10], voice [11], and gait [12] have provided enough discrimination space for a sex recognition classifier. These approaches have improved five common applications, namely business intelligence, access control, image filtering, soft biometrics [13], and health-related services [14], as well as supporting IoT solutions [15]. Nevertheless, recent studies have pointed out the possibility of sex recognition through the electric signal of the heart (electrocardiogram), with a significant classification rate [16,17,18], emphasizing neural network techniques. We propose such a recognition approach with the intention of it being used for device user authentication [19] or in a cardiac context [16].
In the authentication context, sex recognition may help on two fronts. First, it could help to prevent access when the impostor has the opposite sex to the smartphone owner, as a first filter. Second, the extraction of sex as a soft biometric attribute could help to increase the user recognition rate of ECG identification, such as in multimodal image-oriented biometrics [20]. This option is viable for testing because the heart has demonstrated identification properties as a biometric trait [21,22]. In addition, heart-related tech has the potential for ubiquity, because an ECG contains an inherited living property that represents the user’s presence; although the user may alter the waveform temporarily, they cannot interrupt it, because the signal generation is involuntary.
Regarding cardiac approximation, heart wellness status indicates a person’s health regarding the heart’s pumping of blood to circulate through the entire cardiovascular system, which is essential for vital human operations. However, a heart anomaly could be caused by increased artery obstruction or an electrical disorder and could lead to a heart attack or a heart arrest. This kind of progressive anomaly behavior could be detected earlier or treated in a more detailed way by taking into account the different heart conditions of males and females. There is evidence of anatomical differences in the heart between the sexes [23]; for example, in a woman’s heart, the veins and some chambers are smaller, and the heart as a whole pumps faster compared to that of men. In addition, men’s arteries contract under stress, provoking a blood pressure rise, whilst women experience an increase in their pumping rate. Consequently, this set of differences implies different electrical behavior; indeed, the QRS and PR electrocardiogram complexes are larger in men [24], and the T waveform presents specificities by sex [25]. This sex recognition approach seeks to facilitate a future comprehensive study in which a machine learning model could be specialized for heart monitoring to detect behaviors specific to a certain sex.
Machine learning for ECG sex recognition has been implemented since 2014 [26]. A common strategy for feature extraction is fiducial orientation, which seeks quantitative relations based on the time or amplitude covered by the PQRST complex [21]. Nevertheless, this approach does not take advantage of the entire signal waveform; instead, it is necessary to choose transformations that involve the complete heartbeat. From a user perspective, ECG acquisition devices are limited with regard to daily use and portability because they require a considerable number of attachable electrodes with specialized placement, which entails a significant setup time. Furthermore, the leading investigations into ECG-oriented sex classification use a 12-lead configuration and are mainly based on deep learning. Additionally, most research has used data from people in a resting position for controlled experiments, because variation in a person’s stance changes the heartbeat waveform and thus increases the signal processing complexity. Moreover, in related research, it is customary to analyze signal windows of 10 s. Nonetheless, users demand a prompt response, particularly in authentication scenarios. Finally, it is noteworthy that no existing academic research in the ECG sex recognition field has explored how the number of heartbeats collected affects classification.
In the following list, we present the contributions of this text.
1. In our study, we assessed the accuracy of ECG sex classification while controlling for the number of heartbeats collected, using a variable time window of up to 4 s. To our knowledge, this type of study has not previously been reported in the academic literature.
2. We found that for higher RR intervals (longer times between successive R peaks), only one heartbeat was required to obtain a better classification rate.
3. After dividing the heart rate range into bins, we reached a mean ECG sex classification accuracy of 94.82% ± 1.96%, with peaks greater than 96% at some heart rate intervals, using our architecture applied to pseudo-orthogonal ECG signal samples. This result used fewer heartbeats than the methods in previous works.
4. The proposed methodology achieved faster acquisition, reducing the time by 6.9 s compared to similar research and by 21% compared to our previous work.
In addition, as evidenced by our previous research [16], the architecture of this work had the following characteristics:
  • This study analyzed only three ECG signals (X, Y, and Z), contrary to the common 12-lead configuration implemented in related works, which uses 10 electrodes.
  • Our proposal conditioned the deep convolutional neural network model on the user’s RR interval, allowing us to obtain results close to those in related works.
  • Through wavelet transformation, we used the entire signal waveform, converting the three bipolar signals into one RGB image.
  • We extended the signal analysis beyond the usual resting position, since our database contained 24-h records. In fact, although we did not control the person’s stance variable, we achieved significant ECG sex differentiation.
This paper continues by presenting the related work in the area of ECG sex recognition over the last few years (Section 2). Then, the materials and methods are presented in Section 3, which contains a description of our database (Section 3.1) and the methodology implemented in our experiment (Section 3.2). Then, Section 4 describes the architecture of our work. Subsequently, the results are expounded upon in Section 5, and the consequences are addressed in the discussion (Section 6). Finally, we end this text with the conclusion in Section 7.

2. Related Work

Deep learning has permeated the study of ECG signals, including applications such as arrhythmias [27], QRS complex [28] and R peak detection [29], fetal signal separation [30], sleep apnea [31], person identification [32], and sex recognition [16]. This section provides an overview of research related to the recognition of sex through the ECG signals of a person. To this end, we describe the works summarized in Table 1; most recent works analyzed ten-second ECG time windows acquired with a 12-lead configuration in a resting position.
Attia et al. [17], with 12-lead samples of 10 s each, experimented with ECG sex classification, achieving an accuracy of 90.4% and an AUC of 0.97 with a deep convolutional neural network. In addition, they estimated the patient’s age using their ECG, checking their health status by the comparison of the chronological and estimated ages.
Similarly, Siegersma et al. found a 1.4-times-higher mortality risk for people whose samples were misclassified by sex compared with those whose samples were well classified [18,33]. In the same work, using a deep convolutional neural network, the authors performed sex classification with an accuracy of 89% for internal validation on the UMCU database. Their external validation used the Know-Your-Heart and Utrecht Health Project databases, obtaining accuracies of 81% and 82%, respectively.
On the other hand, Lyle et al. mapped an ECG signal, sampled at N equidistant points, into a new N-dimensional space [34]. This technique, called symmetric projection attractor reconstruction (SPAR), projects the morphology and variability of a time-series signal into a new space. They worked with healthy subjects from two different database collections of 104 (DB1) and 8903 (DB2) subjects. Applying stacked machine learning cross-validation on DB2, they obtained an accuracy of 86.3%. Then, extending their trained model to DB1, they achieved an accuracy of 91.3%.
Strodthoff et al. performed their ECG sex classification with ResNet and inception models, reaching an accuracy of 84.9% [35]. In addition, they created the PTB-XL database following the SCP-ECG standard, searching for a benchmark dataset for future ECG-based machine learning research. This database included 71 types of annotation, organized within the superclasses of myocardial infarction, conduction disturbance, ST/T changes, hypertrophy, and normal ECG.
The model proposed by Diamant et al., called patient contrastive learning of representations (PCLR), sought to extract signal information without the need for labeled datasets or a clinical specialist [36]. This approach generated a specific latent representation from the patient’s projected encoded ECG signal evaluated by the contrastive loss function. For sex classification, it achieved an F1 score of 87%. The PCLR model was also evaluated for age estimation and left ventricular hypertrophy and atrial fibrillation detection.
Finally, in our previous research [16], from the point of view of the ECG lead, we tested sex recognition classification with the combination of the X, Y, and Z signals, comparing them individually and jointly. The results suggested that the best approach was to use the XYZ signals together, achieving an accuracy of 94.4% with a time-window mean of 3.93 s. The architecture of this work had the following attributes: (i) a classification experiment without controlling the person’s stance, extending the resting position variable of related work; (ii) our ECG sex classification score suggested that it is possible to install fewer electrodes on the user, helping to increase user comfort compared to a 12-lead setup; (iii) our experimental architecture used three signals as input data, obtaining comparable results to the related work implementing 12-lead signals; (iv) the pseudo-orthogonal ECG signals were treated as a single RGB image.
In contrast to related work, our research analyzed sex recognition from a heartbeat collection perspective. We were motivated to provide a better user experience by reducing the acquisition time and speeding up the computing stage. Moreover, we tested the classifier performance by varying the number of heartbeats as a control variable for our experiment. This proposed approach is missing in the current literature. In addition, it included as a unique feature the study of the lowest expected number of heartbeats required to obtain the highest accuracy for an ECG sex classification. Furthermore, we found that as the RR interval increased, fewer PQRST complexes needed to be analyzed. Indeed, we found intervals for which only one heartbeat was needed, and the variation in the number of heartbeats collected for the different heart intervals did not affect the sex classification during the person’s day-to-day activities.
Table 1. Related work on ECG sex identification with ML algorithms.

| Ref. | Acc. (%) | Lead | Sample Length (s) | Tech. | Fs (Hz) | Position | Male–Female (%) | Tr. \| Ts. Sample | Year |
|------|----------|------|-------------------|-------|---------|----------|------------------|-------------------|------|
| [17] | 90.4 | 12 | 10 | CNN | 500 | Supine | 52–48 | ∼500 k \| ∼275 k | 2019 |
| [18] | 92.2 | 12 | N/A | DNN | N/A | N/A | 50.5–49.5 | ∼131 k \| ∼68.5 k | 2021 |
| [34] | DB1: 91.3; DB2: 86.3 | 12 | 10 | SPAR and KNN | DB1: 1000; DB2: 500 | Resting | DB1: 60–40; DB2: 46–54 | DB1: N = 0.104 k; DB2: N = 8.9 k | 2021 |
| [35] | 84.9 | 12 | 10 | CNN xresnet1d101 | 100 | Resting | 52–48 | N = ∼22 k (10-fold: 8 \| 2) | 2021 |
| [33] | Valid. Int.: 89; Ext.: 81 and 82 | 12 | 10 | DNN | 250 or 500 | Resting | Tr.: N/A; Int. valid.: N/A; Ext. valid.: 42.6–57.4 | Tr.: ∼132 k; Int. valid.: 68.5 k; Ext. valid.: 7.7 k | 2022 |
| [36] | F1 score: 87 | 12 | 10 | PCLR and contrastive learning | 250 or 500 | Resting | N/A | N = ∼3229 k (90% \| 10%) | 2022 |
| [16] | 94.4 | 6 | 3.93 (mean) | CNN | 200 | Random | 51–49 | ∼3 k \| ∼1.3 k | 2022 |
| Own | 94.8 | 6 | 3.1 (mean) | CNN | 200 | Random | 51–49 | ∼3 k \| ∼1.3 k | 2023 |

3. Materials and Methods

3.1. Database Description

The ECG signals analyzed in this paper were obtained from the University of Rochester Medical Center, reference number E-HOL-03-0202-003 [37]. The database contained signals with pseudo-orthogonal configurations composed of X, Y, and Z bipolar leads. Furthermore, each signal recording was 24 h long, covering a healthy population of 202 people. Only heartbeats labeled Normal were used in this research. Additionally, we supplemented the statistics on the THEW (Telemetric and Holter ECG Warehouse) website with the following information: (1) The University of Rochester confirmed that the database contained 102 males and 100 females. (2) Due to quality issues, we excluded data from patient #1043. (3) The average recording length over the whole database was 19.7 ± 6.1 h. (4) The R peak labels started at minute five.
The ECG configuration for a pseudo-orthogonal acquisition requires the mounting of six electrodes in an orthogonal triaxial placement, as depicted in Figure 1. Taking as a reference the 12-lead nomenclature [38], the bipolar signal X requires electrodes at the V6 location on both the right and left. Similarly, for the bipolar signal Z, the electrodes must be in position V3 on both the chest and the back. Finally, the bipolar Y signal is measured from the upper sternum (manubrium) and the lowest rib in the left anterior axillary line.

3.2. Methodology

The methodology proposed in this study included three baselines: (i) Figure 1 depicts the electrode configuration used for our experiment. Each set of electrodes provided a different perspective of the heart’s electrical activity. The magnitude indicated the strength of the electrical activity at different moments, and the orientation represented its direction, which was almost orthogonal between the electrode pairs, providing a three-dimensional vector approach. We took advantage of the contribution of these three sources for our sex classification. (ii) To establish a minimal acquisition time window, we were motivated by the work of Pokaprakarn et al. and Lai et al., who used acquisition times of 3 and 5 s [40,41], respectively, with successful accuracy rates in their arrhythmia classification approaches. (iii) We used heart rate as a configuration parameter.
Our strategy consisted of examining the heart rate tendencies using a histogram. The RR interval varies from 0.6 to 1.2 s, so we proposed to take advantage of the dynamism of the heart by using a range of RR values for our sex classification model. We defined a ΔRR of 0.06 s, so the span of RR values (0.6–1.2 s) divided by ΔRR yielded a total of 10 bins.
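The bin division described above can be sketched in a few lines. This is a minimal Python illustration under our own naming (the paper's implementation is in Matlab, and the helper `rr_to_bin` is hypothetical, not from the original code):

```python
# RR-interval binning: 0.6-1.2 s divided into steps of 0.06 s -> 10 bins.
RR_MIN, RR_MAX, DELTA_RR = 0.6, 1.2, 0.06  # all in seconds

N_BINS = round((RR_MAX - RR_MIN) / DELTA_RR)  # -> 10

def rr_to_bin(rr_seconds: float) -> int:
    """Map an RR interval (in seconds) to its 1-based bin index."""
    if rr_seconds < RR_MIN or rr_seconds > RR_MAX:
        raise ValueError(f"RR interval {rr_seconds} s outside the 0.6-1.2 s range")
    idx = int((rr_seconds - RR_MIN) / DELTA_RR) + 1
    return min(idx, N_BINS)  # rr == 1.2 s falls into the last bin

print(N_BINS)           # 10
print(rr_to_bin(0.61))  # 1  (shortest RR intervals, fastest heart rates)
print(rr_to_bin(1.2))   # 10 (longest RR intervals, slowest heart rates)
```

Note that bin 1 corresponds to the fastest heart rates (RR near 0.6 s) and bin 10 to the slowest (RR near 1.2 s), which matters when reading the per-bin results in Section 5.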
Figure 2 shows the distribution of the RR values according to the ten bins. Each sex in the histogram was built with approximately 8 million samples, and it is important to highlight that both sexes presented close tendencies. The sex distribution of our data in Figure 2 helped us to propose the classification scenario. We proposed the creation of a discrimination model based on the histogram bins in order to assess heartbeat dynamics and differentiate their waveforms, focusing on each classification model, as suggested in Figure 3.
The experiment presented in this manuscript was based on the contribution of the bipolar signals X, Y, and Z, following the results of our previous work [16]. For this research, we deployed the architecture of Section 4 four times per bin, using the number of heartbeats collected as a control variable and seeking the best accuracy result. Each bin was therefore tested with between one and four heartbeats.

4. Architecture

The architecture of the experiment is described in general in Figure 4 and is composed of six blocks. (i) It started with the database storage block, which contained the raw pseudo-orthogonal XYZ ECG bipolar signals, following the description in Section 3.1. (ii) Then, artifacts such as baseline wander, powerline interference, and other additive noises were removed from each of the bipolar signals in the noise-filtering block. (iii) The heartbeat selector randomly provided a set of heartbeats, guided by the bin separation of Figure 2 and the number of heartbeats per sample, all acquired during the 24-h recording period per person. (iv) The three time-variant XYZ signals were transformed into the time-frequency domain and combined to build an RGB image. (v) The sample storage block separated each RGB image into two different folders, according to the person’s sex, male or female. (vi) Finally, the classification stage separated the stored images into training and testing sets to generate and evaluate the computational model.
Figure 5 presents the set of techniques and procedures implemented for our experiment, expanding the block diagram in Figure 4. Generally, we separated the architecture into two steps, signal preprocessing and data classification. Our proposed data path, shown in Figure 5, was performed for each person in the database.
The preprocessing step was divided into five sections. The first was the application of band-stop filters at 50 and 60 Hz and a high-pass filter at 0.7 Hz, performed in the frequency domain via an FFT; this operation was applied to each of the XYZ bipolar signals, covering all user data records. As the next step, we located common RR vector indexes based on the ten bins described in Figure 2, taking advantage of the database signal annotations. Then, 23 samples were randomly selected from the working bin. These steps completed the extraction of the region of interest.
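The frequency-domain filtering step can be illustrated with a small numpy-only sketch. The cut-off values (50/60 Hz suppression, 0.7 Hz high-pass) come from the text, and the sampling rate of 200 Hz from Table 1; the 1 Hz notch half-width is our assumption, not a parameter stated in the paper:

```python
import numpy as np

def fft_filter(signal: np.ndarray, fs: float = 200.0) -> np.ndarray:
    """Zero out powerline bins (50 and 60 Hz) and baseline wander (< 0.7 Hz)
    directly in the FFT domain, then return the filtered time signal."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    mask = freqs >= 0.7                    # high-pass at 0.7 Hz
    for f0 in (50.0, 60.0):                # suppress powerline interference
        mask &= np.abs(freqs - f0) > 1.0   # assumed 1 Hz notch half-width
    return np.fft.irfft(spectrum * mask, n=signal.size)

# Demo: 5 Hz "cardiac" content + 50 Hz powerline + slow 0.2 Hz drift.
fs = 200.0
t = np.arange(0, 4, 1 / fs)
x = np.sin(2*np.pi*5*t) + 0.5*np.sin(2*np.pi*50*t) + 2*np.sin(2*np.pi*0.2*t)
y = fft_filter(x, fs)
```

After filtering, the 5 Hz component survives while the 50 Hz interference and the slow drift are removed.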
Then, the XYZ ROI was moved to the two-dimensional wavelet time-frequency domain, which provided the advantage of using the complete PQRST complex without losing signal information. Our wavelet was an analytic Morse wavelet within a continuous filter bank, with a time-bandwidth product of 60, 12 voices per octave, and a symmetry parameter of 3.
The matrix magnitude of each transformed signal is depicted in Figure 5. Wvt-X, Wvt-Y, and Wvt-Z were each represented by one color in the RGB image, as appropriate. An example of the transformations is shown in Figure 6, in descending order from four to one female heartbeat(s). Once the image was constructed, it was stored in the female or male folder.
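The signal-to-image step above can be sketched as follows. The paper uses an analytic Morse wavelet (as in Matlab's continuous wavelet filter bank); here, purely as an illustrative stand-in, we use a simple complex Morlet wavelet implemented with numpy, and the function names are ours:

```python
import numpy as np

def morlet_cwt(sig, scales, w0=6.0):
    """Minimal continuous wavelet transform with a complex Morlet wavelet
    (a stand-in for the paper's analytic Morse wavelet filter bank)."""
    n = sig.size
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        m = int(min(10 * s, n))                # wavelet support in samples
        t = np.arange(m) - m // 2
        wav = np.exp(1j * w0 * t / s) * np.exp(-((t / s) ** 2) / 2) / np.sqrt(s)
        out[i] = np.abs(np.convolve(sig, np.conj(wav[::-1]), mode="same"))
    return out

def xyz_to_rgb(x, y, z, scales):
    """Map the |CWT| magnitude of the X, Y, and Z leads onto the R, G, and B
    channels of a single image, one image per heartbeat sample."""
    channels = []
    for sig in (x, y, z):
        mag = morlet_cwt(sig, scales)
        mag = (mag - mag.min()) / (mag.max() - mag.min() + 1e-12)  # -> [0, 1]
        channels.append((mag * 255).astype(np.uint8))
    return np.stack(channels, axis=-1)         # shape: (n_scales, n_samples, 3)

# Demo on synthetic 2 s "leads" sampled at 200 Hz.
t = np.arange(400) / 200.0
img = xyz_to_rgb(np.sin(2*np.pi*5*t), np.sin(2*np.pi*9*t), np.cos(2*np.pi*3*t),
                 scales=[2, 4, 8, 16, 32])
```

Each lead thus contributes one color channel, so a single RGB image encodes the time-frequency content of all three pseudo-orthogonal signals at once.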
We performed our classification task with GoogLeNet, a 22-layer deep convolutional neural network developed by Google [42] (implemented as a 144-layer graph in Matlab). The data distribution for our sample storage was 70% for training and 30% for testing. Furthermore, we configured the model using the stochastic gradient descent with momentum (SGDM) optimization algorithm with β = 0.9, a maximum of 20 epochs, a dropout of 60%, and a mini-batch size of 15 samples.

5. Results

We processed the ECG data following the architecture of Figure 5, seeking the best accuracy rate. We used as a control variable the number of heartbeats collected in the range of one to four heartbeats.
Figure 7 presents the best accuracy rate per working bin. In each bin, the control variable is presented with its respective performance; the leading score is highlighted by a cross, representing a general mean of 94.82% ± 1.96%. Bins 1, 2, 3, 4, and 6 required four beats; bins 5, 7, and 8 used three beats; and bins 9 and 10 used one heartbeat. In conclusion, the collection of four, three, and one heartbeat(s) accounted for 50%, 30%, and 20% of the bins, respectively, providing an outstanding configuration for the entire system. Therefore, the results suggested that as the RR interval increased, fewer beats were needed to obtain a good sex recognition rate.
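The per-bin configuration reported above can be tallied as a quick consistency check (the bin-to-beats mapping below is transcribed from the text describing Figure 7; the variable names are ours):

```python
from collections import Counter

# Best number of heartbeats per bin, as reported for Figure 7.
best_beats = {1: 4, 2: 4, 3: 4, 4: 4, 5: 3, 6: 4, 7: 3, 8: 3, 9: 1, 10: 1}

# Fraction of bins served by each heartbeat count.
share = {beats: count / len(best_beats)
         for beats, count in Counter(best_beats.values()).items()}
print(share)  # {4: 0.5, 3: 0.3, 1: 0.2} -> the 50%/30%/20% split in the text
```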
Obtaining the best model by bin (Figure 8), the metrics derived from the confusion matrices showed a general sensitivity of 95.59% ± 2.16%, a specificity of 94.08% ± 3.32%, and a precision of 93.98% ± 2.88%. Taking the results with one heartbeat as a reference point (the ideal scenario), we performed a category-wise point-to-point subtraction between the results of Figure 8 and the single-heartbeat results (Table 2). This comparison is shown in Figure 9; note that bins 9 and 10 are not included in the graph because their resulting value was zero.
As shown in Figure 9, the accuracy measurements demonstrated positive differences across all bins. The most representative were bins 1–6, with values of 7.0%, 5.7%, 3.9%, 4.2%, 2.0%, and 2.3%, respectively. On the other hand, the sensitivity presented positive differences of 16.8%, 8.5%, 5.1%, 5.6%, 3.6%, and 2.5% for bins 1–3 and 5–7, respectively, but showed negative differences in bins 4 and 8, with values of 0.3% and 2.3%, respectively. Finally, the specificity and the precision revealed positive differences of 2.9%, 2.7%, 8.6%, 1.0%, and 5.5% and of 3.3%, 2.9%, 8.0%, 1.1%, and 5.3%, respectively, for bins 2–4, 6, and 8, and negative differences of 2.9%, 1.6%, and 2.1% and of 0.8%, 1.2%, and 2.0%, respectively, for bins 1, 5, and 7.
In conclusion, from the accuracy perspective presented in Figure 7 and Figure 9, the collection of four and three heartbeats provided a better discrimination space than the collection of one heartbeat for sex recognition in the range of bins 1–6. In addition, as the RR increased, the accuracy distance between the improved heartbeat classification rate and single-heartbeat collection tended to be closer. The sensitivity and specificity presented positive increments for bins 2 (96.7% (↑ 8.5%), 93.4% (↑ 2.9%)); 3 (96.0% (↑ 5.1%), 95.5% (↑ 2.7%)); and 6 (97.6% (↑ 3.6%), 97.4% (↑ 1.0%)). Moreover, bins 1, 5, and 7 demonstrated an increase in their sensitivity and a decrease in their specificity, as follows: 1—95.9% (↑ 16.8%), 88.7% (↓ 2.9%); 5—96.0% (↑ 5.6%), 98.9% (↓ 1.6%); and 7—98.0% (↑ 2.5%), 96.0% (↓ 2.1%). In contrast, bins 4 and 8 generally presented a decrease in sensitivity and an increase in specificity: 4—96.9% (↓ 0.3%), 96.7% (↑ 8.6%); 8—98.8% (↓ 2.3%), 98.0% (↑ 5.5%). In addition, the precision presented an increase in bins 2 (↑ 3.3%), 3 (↑ 2.9%), 4 (↑ 8.0%), 6 (↑ 1.1%), and 8 (↑ 5.3%) and a decrease in bins 1 (↓ 0.8%), 5 (↓ 1.2%), and 7 (↓ 2.0%). Finally, we present the general experimental metrics and scores in Table 2.

6. Discussion

In practice, users demand plug-and-play solutions, with a short response time (e.g., 1.3–2.3 s [19,43]), that are robust enough for any user context. On the contrary, current research in the field of sex recognition with ECG signals has three characteristics: a 12-lead configuration with sensitive electrode placement, an acquisition time of ten seconds that can be approximated to ten PQRST complexes, and analysis limited to patients in a resting position only. To make progress in this area, our research provided a step toward these user requirements. The architecture proposed in this paper, together with our experimental variable control, reduced by 40% the number of attachable electrodes to be installed on the final user, with a setup comprising four fewer electrodes than the 12-lead configuration. Furthermore, on average, we decreased the acquisition time by 6.9 s compared to related research and by 21% compared to our previous work. Indeed, these differences could grow when few heartbeats are used, depending on the current heart rate interval. Lastly, regarding the classification of sex during random daily activities, our model was responsive to heartbeat dynamics and used one heartbeat in some heart rate intervals.
From a critical perspective, it should be noted that the results of our study were bound by the characteristics of the Rochester database. In addition, our study did not include an evaluation of the models using external databases, as was performed by Siegersma [33] and Lyle [34]. We plan to complement this scenario in a future study.
In summary, our classification approach achieved an accuracy of 94.82% ± 1.96% with bipolar XYZ signals, obtaining a competitive score in contrast to related research. Furthermore, our architecture required fewer heartbeats for analysis, providing faster acquisition and improving the user experience. As a future project, we propose to extend our experiment to heart biometrics and sex-oriented feature extraction for cardiac disease studies.

7. Conclusions

This research presented a deep-learning-oriented approach for recognizing the sex of a person using their ECG signals. Our work proposed a unique study that adapted sex classification based on the person’s current heart rate and used the number of heartbeats as a control variable. This sex recognition architecture reached an average accuracy of 94.82% ± 1.96%, achieving rates over 96% in some bins. In addition, our results suggested that as the RR increases, the number of heartbeats required to obtain the best classification tends to decrease. Indeed, the highest RR intervals required only one heartbeat. The results of this research open up strategies for portable ECG solutions, due to users’ demands for services with fast responses. Our findings will support the introduction of this soft biometric attribute in a user authentication scenario in future work.

Author Contributions

Conceptualization, J.-L.C.L. and C.P.; methodology, J.-L.C.L. and C.P.; software, J.-L.C.L.; validation, J.-L.C.L. and C.P.; formal analysis, J.-L.C.L. and C.P.; investigation, J.-L.C.L.; resources, J.-L.C.L. and G.F.; data curation, J.-L.C.L.; writing—original draft preparation, J.-L.C.L.; writing—review and editing, J.-L.C.L. and C.P.; visualization, J.-L.C.L.; supervision, C.P.; project administration, J.-L.C.L.; funding acquisition, J.-L.C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Fundacion Universitaria Compensar grant number 1720222.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The Medical Center of the University of Rochester provided the data involved in this paper. Please email the authors for guidance.

Acknowledgments

The authors would mainly like to acknowledge the Fundación Universitaria Compensar, including their Telecommunications Engineering Department, Research Department, English Area, and Bilingual Department, for providing time and support to complete this document. Furthermore, we are grateful for the co-operation of the Centro de Excelencia y Apropiación en Internet de las Cosas (CEA-IoT) project. Additional gratitude is extended to our colleague Pablo Ospina for his unconditional professionalism in revising the paper and to Andrew Vontzalides and Stephanie Cuomo for their English proofreading services.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CNN   Convolutional neural network
DNN   Deep neural network
KNN   k-nearest neighbors
PCLR  Patient contrastive learning of representations
RR    R–R interval (time between successive heartbeats)
ROI   Region of interest
SPAR  Symmetric projection attractor reconstruction

References

  1. Little, W.; McGivern, R. Gender, Sex, and Sexuality. 2014. Available online: https://opentextbc.ca/introductiontosociology/chapter/chapter12-gender-sex-and-sexuality/ (accessed on 18 June 2022).
  2. Merriam-Webster. Gender. 2019. Available online: https://www.merriam-webster.com/dictionary/gender (accessed on 18 June 2022).
  3. Connellan, J.; Baron-Cohen, S.; Wheelwright, S.; Batki, A.; Ahluwalia, J. Sex differences in human neonatal social perception. Infant Behav. Dev. 2000, 23, 113–118. [Google Scholar] [CrossRef]
  4. Lippa, R.A. Sex differences in personality traits and gender-related occupational preferences across 53 nations: Testing evolutionary and social-environmental theories. Arch. Sex. Behav. 2008, 39, 619–636. [Google Scholar] [CrossRef] [PubMed]
  5. Nguyen, D.T.; Kim, K.W.; Hong, H.G.; Koo, J.H.; Kim, M.C.; Park, K.R. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction. Sensors 2017, 17, 637. [Google Scholar] [CrossRef] [PubMed]
  6. Ghildiyal, A.; Sharma, S.; Verma, I.; Marhatta, U. Age and Gender Predictions using Artificial Intelligence Algorithm. In Proceedings of the 3rd International Conference on Intelligent Sustainable Systems (ICISS’20), Thoothukudi, India, 3–5 December 2020; pp. 371–375. [Google Scholar]
  7. Tsimperidis, I.; Yucel, C.; Katos, V. Age and Gender as Cyber Attribution Features in Keystroke Dynamic-Based User Classification Processes. Electronics 2021, 10, 835. [Google Scholar] [CrossRef]
  8. Nguyen-Quoc, H.; Hoang, V.T. Gender recognition based on ear images: A comparative experimental study. In Proceedings of the 2020 3rd International Seminar on Research of Information Technology and Intelligent Systems (ISRITI), Yogyakarta, Indonesia, 10–14 December 2020; pp. 451–456. [Google Scholar] [CrossRef]
  9. Cabra, J.L.; Parra, C.; Trujillo, L. Earprint touchscreen sensoring comparison between hand-crafted features and transfer learning for smartphone authentication. J. Internet Serv. Inf. Secur. JISIS 2022, 12, 16–29. [Google Scholar]
  10. Ikae, C.; Savoy, J. Gender identification on Twitter. J. Assoc. Inf. Sci. Technol. 2021, 73, 58–69. [Google Scholar] [CrossRef]
  11. Alkhawaldeh, R.S. DGR: Gender Recognition of Human Speech Using One-Dimensional Conventional Neural Network. Sci. Program. 2019, 2019, 7213717. [Google Scholar] [CrossRef]
  12. Lee, M.; Lee, J.H.; Kim, D.H. Gender recognition using optimal gait feature based on recursive feature elimination in normal walking. Expert Syst. Appl. 2022, 189, 116040. [Google Scholar] [CrossRef]
  13. Li, S.Z.; Jain, A.K. Encyclopedia of Biometrics, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2015; p. 1651. [Google Scholar]
  14. Moreno-Ospina, J.; Valencia-Quintero, F.; León-García, O.; Steibeck-Domínguez, M.; Moreno-Cáceres, N.; Yandar-Lobon, M. La Industria 4.0, Desde la Perspectiva Organizacional, 1st ed.; Fondo Editorial Universitario Servando Garcés de la Universidad Politécnica Territorial de Falcón Alonso Gamero: Coro, Venezuela, 2019; p. 143. [Google Scholar]
  15. Cabra, J.; Castro, D.; Colorado, J.; Mendez, D.; Trujillo, L. An IoT Approach for Wireless Sensor Networks Applied to e-Health Environmental Monitoring. In Proceedings of the IEEE 2018 International Congress on Cybermatics, Halifax, Canada, 30 July–3 August 2018; pp. 578–583. [Google Scholar]
  16. Cabra Lopez, J.L.; Parra, C.; Gomez, L.; Trujillo, L. Sex Recognition through ECG Signals aiming toward Smartphone Authentication. Appl. Sci. 2022, 12, 6573. [Google Scholar] [CrossRef]
  17. Attia, Z.I.; Friedman, P.A.; Noseworthy, P.A.; Lopez-Jimenez, F.; Ladewig, D.J.; Satam, G.; Pellikka, P.A.; Munger, T.M.; Asirvatham, S.J.; Scott, C.G.; et al. Age and Sex Estimation Using Artificial Intelligence From Standard 12-Lead ECGs. Circ. Arrhythmia Electrophysiol. 2019, 12, e007284. [Google Scholar] [CrossRef]
  18. Siegersma, K.; Van De Leur, R.; Onland-Moret, N.C.; Van Es, R.; Den Ruijter, H.M. Misclassification of sex by deep neural networks reveals novel ECG characteristics that explain a higher risk of mortality in women and in men. Eur. Heart J. 2021, 42, 3162. [Google Scholar] [CrossRef]
  19. Cabra, J.L.; Parra, C.; Mendez, D.; Trujillo, L. Mechanisms of Authentication toward Habitude Pattern Lock and ECG: An overview. J. Wirel. Mob. Netw. Ubiquitous Comput. Dependable Appl. JoWUA 2022, 12, 23–67. [Google Scholar]
  20. Jain, A.K.; Nandakumar, K.; Lu, X.; Park, U. Integrating Faces, Fingerprints, and Soft Biometric Traits for User Recognition. In Proceedings of the 2nd Biometric Authentication ECCV International Workshop (BioAW’04), Prague, Czech Republic, 15 May 2004; Springer: Berlin/Heidelberg, Germany, 2004; pp. 259–269. [Google Scholar]
  21. Cabra, J.L.; Mendez, D.; Trujillo, L.C. Wide Machine Learning Algorithms Evaluation Applied to ECG Authentication and Gender Recognition. In Proceedings of the 2nd International Conference on Biometric Engineering and Applications (ICBEA), Amsterdam, The Netherlands, 16–18 May 2018; ACM: New York, NY, USA, 2018; pp. 58–64. [Google Scholar]
  22. Uwaechia, A.N.; Ramli, D.A. A Comprehensive Survey on ECG Signals as New Biometric Modality for Human Authentication: Recent Advances and Future Challenges. IEEE Access 2021, 9, 2169–3536. [Google Scholar] [CrossRef]
  23. Cardiology Associates of Michigan. Men Vs. Women: How Their Hearts Differ And What It Means. 2019. Available online: https://www.cardofmich.com/men-women-heart-differences/ (accessed on 28 April 2022).
  24. Mieszczanska, H.; Pietrasik, G.; Piotrowicz, K.; McNitt, S.; Moss, A.J.; Zareba, W. Gender Related Differences in Electrocardiographic Parameters and Their Association with Cardiac Events in Patients After Myocardial Infarction. Am. J. Cardiol. 2008, 101, 20–24. [Google Scholar] [CrossRef]
  25. Nakagawa, M.; Ooie, T.; Ou, B.; Ichinose, M.; Yonemochi, H.; Saikawa, T. Gender differences in the dynamics of terminal T wave intervals. Pacing Clin. Electrophysiol. 2004, 27, 769–774. [Google Scholar] [CrossRef]
  26. Ergin, S.; Uysal, A.K.; Gunal, E.S.; Gunal, S.; Gulmezoglu, M.B. ECG based biometric authentication using ensemble of features. In Proceedings of the 9th Iberian Conference on Information Systems and Technologies (CISTI’14), Barcelona, Spain, 18–21 June 2014; pp. 1274–1279. [Google Scholar]
  27. Hammad, M.; Iliyasu, A.M.; Subasi, A.; Ho, E.S.L.; El-Latif, A.A.A. A Multitier Deep Learning Model for Arrhythmia Detection. IEEE Trans. Instrum. Meas. 2020, 70, 2502809. [Google Scholar] [CrossRef]
  28. Habib, A.; Karmakar, C.; Yearwood, J. Impact of ECG Dataset Diversity on Generalization of CNN Model for Detecting QRS Complex. IEEE Access 2019, 7, 93275–93285. [Google Scholar] [CrossRef]
  29. Dogan, R.O.; Kayikçioglu, T. R-peaks detection with convolutional neural network in electrocardiogram signal. In Proceedings of the 26th Signal Processing and Communications Applications Conference (SIU’18), Izmir, Turkey, 2–5 May 2018; pp. 2029–2032. [Google Scholar]
  30. Lee, K.J.; Lee, B. End-to-End Deep Learning Architecture for Separating Maternal and Fetal ECGs Using W-Net. IEEE Access 2022, 10, 39782–39788. [Google Scholar] [CrossRef]
  31. Gupta, K.; Bajaj, V.; Ansari, I.A. OSACN-Net: Automated Classification of Sleep Apnea Using Deep Learning Model and Smoothed Gabor Spectrograms of ECG Signal. IEEE Trans. Instrum. Meas. 2021, 71, 4002109. [Google Scholar] [CrossRef]
  32. Lee, J.N.; Kwak, K.C. Personal Identification Using a Robust Eigen ECG Network Based on Time-Frequency Representations of ECG Signals. IEEE Access 2019, 7, 48392–48404. [Google Scholar] [CrossRef]
  33. Siegersma, K.R.; Van de Leur, R.R.; Onland-Moret, N.C.; Leon, D.A.; Diez-Benavente, E.; Rozendaal, L.; Bots, M.L.; Coronel, R.; Appelman, Y.; Hofstra, L.; et al. Deep neural networks reveal novel sex-specific electrocardiographic features relevant for mortality risk. Eur. Heart J. Digit. Health 2022, 3, 245–254. [Google Scholar] [CrossRef] [PubMed]
  34. Lyle, J.V.; Nandi, M.; Aston, P.J. Symmetric Projection Attractor Reconstruction: Sex Differences in the ECG. Front. Cardiovasc. Med. 2021, 8, 709457. [Google Scholar] [CrossRef] [PubMed]
  35. Strodthoff, N.; Wagner, P.; Schaeffter, T.; Samek, W. Deep Learning for ECG Analysis: Benchmarks and Insights from PTB-XL. IEEE J. Biomed. Health Inform. 2021, 25, 1519–1528. [Google Scholar] [CrossRef] [PubMed]
  36. Diamant, N.; Reinertsen, E.; Song, S.; Aguirre, A.D.; Stultz, C.M.; Batra, P. Patient contrastive learning: A performant, expressive, and practical approach to electrocardiogram modeling. PLoS Comput. Biol. 2022, 18, e1009862. [Google Scholar] [CrossRef]
  37. Medical Center University of Rochester. Healthy Individuals. 2005. Available online: http://thew-project.org/database/e-hol-03-0202-003.html (accessed on 17 April 2023).
  38. Cables and Sensors. 12-Lead ECG Placement Guide with Illustrations. 2016. Available online: https://www.cablesandsensors.com/pages/12-lead-ecg-placement-guide-with-illustrations (accessed on 28 April 2023).
  39. brgfx. Vistas Frontal y Posterior del Esqueleto Aislado Sobre Fondo Blanco. 2021. Available online: https://www.freepik.es/vector-gratis/vistas-frontal-posterior-esqueleto-aislado-sobre-fondo-blanco_12321197.htm (accessed on 18 June 2022).
  40. Pokaprakarn, T.; Kitzmiller, R.R.; Moorman, R.; Lake, D.E.; Krishnamurthy, A.K.; Kosorok, M.R. Sequence to Sequence ECG Cardiac Rhythm Classification Using Convolutional Recurrent Neural Networks. IEEE J. Biomed. Health Inform. 2022, 26, 572–580. [Google Scholar] [CrossRef]
  41. Lai, D.; Fan, X.; Zhang, Y.; Chen, W. Intelligent and Efficient Detection of Life-Threatening Ventricular Arrhythmias in Short Segments of Surface ECG Signals. IEEE Sensors J. 2021, 21, 14110–14120. [Google Scholar] [CrossRef]
  42. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR’15), Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  43. Guerar, M.; Merlo, A.; Migliardi, M. Clickpattern: A pattern lock system resilient to smudge and side-channel attacks. J. Wirel. Mob. Netw. Ubiquitous Comput. Dependable Appl. 2017, 8, 64–78. [Google Scholar]
Figure 1. Pseudo-orthogonal lead configuration, based on [39].
Figure 2. Heart rate histogram by sex—10 bins.
Figure 3. Experiment methodology.
Figure 4. Experiment block diagram, taken from [16].
Figure 5. Experiment architecture.
Figure 6. Wavelet transformation examples. From four to one heartbeat(s).
Figure 7. Classifier accuracy vs. heartbeats collected.
Figure 8. Classification results by best heartbeat collection.
Figure 9. Comparison of derivative confusion matrix metrics against single-heartbeat results.
Table 2. Classification results. Colored values (in the original table) indicate the highest value per bin.
# HB  Bin  Accuracy  Sensitivity  Specificity  Precision
4     1    0.9079    0.9586       0.8576       0.8699
3     1    0.8788    0.9455       0.8108       0.8361
2     1    0.8727    0.9280       0.8157       0.8382
1     1    0.8382    0.7908       0.8870       0.8781
4     2    0.9221    0.9099       0.9340       0.9313
3     2    0.9206    0.9667       0.8756       0.8837
2     2    0.9114    0.9212       0.9016       0.9025
1     2    0.8651    0.8254       0.9052       0.8980
4     3    0.9428    0.9596       0.9263       0.9277
3     3    0.9323    0.9092       0.9554       0.9532
2     3    0.9096    0.9454       0.8741       0.8817
1     3    0.9038    0.9083       0.8993       0.8991
4     4    0.9465    0.9255       0.9671       0.9651
3     4    0.9451    0.9687       0.9216       0.9246
2     4    0.9248    0.9600       0.8899       0.8963
1     4    0.9046    0.9287       0.8809       0.8849
4     5    0.9264    0.8627       0.9893       0.9876
3     5    0.9514    0.9602       0.9427       0.9429
2     5    0.9357    0.9542       0.9174       0.9191
1     5    0.9317    0.9044       0.9584       0.9551
4     6    0.9635    0.9760       0.9512       0.9517
3     6    0.9477    0.9669       0.9287       0.9303
2     6    0.9355    0.8966       0.9738       0.9711
1     6    0.9410    0.9403       0.9417       0.9403
4     7    0.9517    0.9798       0.9252       0.9252
3     7    0.9561    0.9733       0.9397       0.9388
2     7    0.9498    0.9601       0.9400       0.9382
1     7    0.9548    0.9488       0.9606       0.9583
4     8    0.9664    0.9567       0.9753       0.9725
3     8    0.9691    0.9571       0.9802       0.9781
2     8    0.9591    0.9839       0.9364       0.9339
1     8    0.9278    0.9875       0.8734       0.8768
4     9    0.9619    0.9446       0.9772       0.9735
3     9    0.9548    0.9331       0.9740       0.9693
2     9    0.9541    0.9353       0.9707       0.9656
1     9    0.9645    0.9747       0.9556       0.9507
4     10   0.9514    0.9584       0.9458       0.9326
3     10   0.9467    0.9795       0.9209       0.9072
2     10   0.9540    0.9705       0.9411       0.9284
1     10   0.9581    0.9636       0.9538       0.9421
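For reference, the four metrics reported in Table 2 are the standard quantities derived from a binary (male/female) confusion matrix. A minimal sketch, with hypothetical counts (tp, fp, tn, fn are placeholders, not values from the study):

```python
def confusion_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Derive the Table 2 metrics from binary confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),  # true-positive rate (recall)
        "specificity": tn / (tn + fp),  # true-negative rate
        "precision": tp / (tp + fp),    # positive predictive value
    }

# Example with hypothetical counts
m = confusion_metrics(tp=95, fp=7, tn=93, fn=5)
print({k: round(v, 4) for k, v in m.items()})
# → {'accuracy': 0.94, 'sensitivity': 0.95, 'specificity': 0.93, 'precision': 0.9314}
```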
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Cabra Lopez, J.-L.; Parra, C.; Forero, G. A Fast Deep Learning ECG Sex Identifier Based on Wavelet RGB Image Classification. Data 2023, 8, 97. https://doi.org/10.3390/data8060097

