Article

A Semi-Supervised Stacked Autoencoder Using the Pseudo Label for Classification Tasks

Jie Lai, Xiaodan Wang, Qian Xiang, Wen Quan and Yafei Song
1 College of Air and Missile Defense, Air Force Engineering University, Xi’an 710051, China
2 College of Air Traffic Control and Navigation, Air Force Engineering University, Xi’an 710051, China
* Author to whom correspondence should be addressed.
Entropy 2023, 25(9), 1274; https://doi.org/10.3390/e25091274
Submission received: 2 August 2023 / Revised: 20 August 2023 / Accepted: 28 August 2023 / Published: 30 August 2023
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract: The efficiency and cognitive limitations of manual sample labeling result in a large number of unlabeled training samples in practical applications. Making full use of both labeled and unlabeled samples is the key to solving the semi-supervised problem. However, as a supervised algorithm, the stacked autoencoder (SAE) only considers labeled samples and is difficult to apply to semi-supervised problems. Thus, by introducing the pseudo-labeling method into the SAE, a novel pseudo label-based semi-supervised stacked autoencoder (PL-SSAE) is proposed to address semi-supervised classification tasks. The PL-SSAE first performs unsupervised pre-training on all samples with the autoencoder (AE) to initialize the network parameters. Then, the network parameters are iteratively fine-tuned on the labeled samples, after which the unlabeled samples are classified and their pseudo labels are generated. Finally, the pseudo-labeled samples are used to construct a regularization term and fine-tune the network parameters to complete the training of the PL-SSAE. Different from the traditional SAE, the PL-SSAE uses all samples in pre-training and the unlabeled samples with pseudo labels in fine-tuning to fully exploit the feature and category information of the unlabeled samples. Empirical evaluations on various benchmark datasets show that the semi-supervised performance of the PL-SSAE is more competitive than that of the SAE, stacked sparse autoencoder (SSAE), semi-supervised stacked autoencoder (Semi-SAE) and semi-supervised stacked sparse autoencoder (Semi-SSAE).

1. Introduction

Deep learning has been a focus of machine learning research since it was proposed by Hinton et al. [1]. As a typical deep learning algorithm, the stacked autoencoder (SAE) [2] extracts hierarchical abstract features from samples with the autoencoder (AE) and then maps the abstract features to the output with a classifier or regression algorithm. Compared with traditional neural networks, the multilayer structure of the SAE provides a strong feature extraction capability and avoids the limitations of traditional machine learning algorithms in manual feature selection [3]. Meanwhile, the greedy layer-wise training of the SAE determines the network parameters layer by layer and accelerates convergence [4]. By virtue of its excellent performance, the SAE has been applied to mechanical fault diagnosis [5,6], disease association prediction [7,8] and network intrusion detection [9,10].
The SAE has been extensively studied, and many methods of improvement have been introduced into the SAE. Vincent et al. [11] combined the SAE with the local denoising criterion and proposed the stacked denoising autoencoder (SDAE). Different from the SAE, the SDAE employs noise-corrupted samples to reconstruct noise-free samples, and it enhances the robustness of the abstract feature. To obtain a sparse feature representation, Ng et al. [12] integrated the sparsity constraint into the SAE and proposed the stacked sparse autoencoder (SSAE). The SSAE can reduce the activation of hidden nodes and use a few network nodes to extract representative abstract features. Masci et al. [13] proposed the stacked convolutional autoencoder (SCAE) by replacing the fully connected layer with convolutional and pooling layers to preserve the spatial information of the training images. By introducing the attention mechanism into the SAE, Tang et al. [14] constructed the stacked attention autoencoder (SAAE) to improve the feature extraction capability. Tawfik et al. [15] utilized the SAE to extract unsupervised features and merge the multimodal medical image. In addition, many other methods [16,17,18,19] have been proposed for the development and application of the SAE.
However, manually labeling large numbers of samples is impossible due to limited knowledge and efficiency. In many fields, such as speech emotion recognition [20], medical image classification [21] and remote sensing image detection [22], the raw training samples are usually only partially labeled, while the majority of samples are unlabeled. The supervised learning of the SAE requires sample labels to train the network and cannot exploit the feature and category information contained in unlabeled samples, making it difficult to improve its generalization performance for semi-supervised classification tasks. To tackle this problem, some studies in recent years have combined the SAE with semi-supervised learning. For the classification of partially labeled network traffic samples, Aouedi et al. [23] proposed the semi-supervised stacked autoencoder (Semi-SAE) to realize semi-supervised learning of the SAE. This method performs unsupervised feature extraction on all samples in the pre-training stage and fine-tunes the network parameters based on the classification loss of the labeled samples. By introducing the sparsity criterion into the Semi-SAE, Xiao et al. [24] proposed the semi-supervised stacked sparse autoencoder (Semi-SSAE). The Kullback–Leibler (KL) divergence regularization term added to the loss function improves the sparsity of the network parameters, and the Semi-SSAE is applied to cancer prediction. These improved SAE algorithms use only part of the information from the unlabeled samples in the feature extraction stage and have limited generalization performance for semi-supervised classification tasks.
The pseudo label [25] is a simple and efficient method for implementing semi-supervised learning. It utilizes labeled samples to predict the class of unlabeled samples and integrates labeled and pseudo-labeled samples to train the network. Semi-supervised learning methods based on the pseudo label have been gradually applied to automatic speech recognition [26] and image semantic segmentation [27]. To overcome the limitations of the traditional supervised SAE and to improve the generalization performance, the pseudo label-based semi-supervised stacked autoencoder (PL-SSAE) is proposed by combining the SAE with the pseudo label. The PL-SSAE first stacks the AE to extract the feature information in all samples through layer-wise pre-training. Then, the supervised classification and iterative fine-tuning on the labeled samples are used for the class prediction of the unlabeled samples. Finally, the pseudo-label regularization term is constructed, and the labeled and pseudo-labeled samples are integrated to complete the training of the network. Different from the SAE and Semi-SAE, the PL-SSAE is able to exploit both feature information from unlabeled samples for feature extraction and category information for classification and fine-tuning, aiming to improve its semi-supervised learning performance. To the best of our knowledge, the PL-SSAE is the first attempt to introduce the pseudo label into the SAE, and it extends the implementation methods of the semi-supervised SAE.
The research contributions of this study can be summarized as follows:
  • A new semi-supervised SAE named the PL-SSAE is proposed. By integrating the pseudo label with the SAE, the pseudo labels of the unlabeled samples are generated and the category information in the unlabeled samples is effectively exploited to improve the generalization performance of the PL-SSAE. The experimental results on various benchmark datasets show that the semi-supervised classification performance of the PL-SSAE outperforms the SAE, SSAE, Semi-SAE and Semi-SSAE.
  • The pseudo-label regularization term is constructed. The pseudo-label regularization term represents the classification loss of the pseudo-labeled samples, and it is added to the loss function to control the loss balance between the labeled and pseudo-labeled samples and to prevent over-fitting.
The rest of this study is organized as follows. In Section 2, a brief introduction to the AE and SAE is described. In Section 3, the network structure and training process of the proposed PL-SSAE are detailed. In Section 4, the evaluation implementation and results on benchmark datasets are presented. In Section 5, the conclusion of this study is summarized.

2. Related Works

2.1. Autoencoder

The AE is an unsupervised algorithm and consists of an encoder and a decoder. The encoder maps the input to the abstract representation and the decoder maps the abstract representation to the output. The network structure of the AE is shown in Figure 1.
For the input samples $X = \{x_i\}_{i=1}^{N}$, the AE encodes the samples using a linear mapping and a non-linear activation function:
$$H = g(W_e X + b_e) \tag{1}$$
where $W_e$ is the weight matrix between the input layer and the hidden layer, $b_e$ is the bias of the hidden layer and $g(\cdot)$ is the activation function. The decoder completes the decoding of the abstract feature to obtain the reconstructed samples:
$$\hat{X} = g(W_d H + b_d) \tag{2}$$
where W d is the weights matrix between the hidden layer and the output layer and b d is the bias of the output layer. The AE requires an optimization algorithm to fine-tune the network parameters. The reconstruction error is minimized to learn representative abstract features in the samples. The loss function of the AE is formulated as follows:
$$J_{AE} = \frac{1}{2}\sum_{i=1}^{N} \left\| \hat{x}_i - x_i \right\|_2^2 \tag{3}$$
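To make Equations (1)–(3) concrete, the following minimal PyTorch sketch implements a single AE. The layer sizes, the default sigmoid activation and the batch-mean reduction of the loss are illustrative assumptions rather than details fixed by the paper.

```python
import torch
import torch.nn as nn

class AE(nn.Module):
    """Single autoencoder: encoder H = g(W_e X + b_e), decoder X_hat = g(W_d H + b_d)."""
    def __init__(self, in_dim, hidden_dim, activation=nn.Sigmoid()):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), activation)  # Equation (1)
        self.decoder = nn.Sequential(nn.Linear(hidden_dim, in_dim), activation)  # Equation (2)

    def forward(self, x):
        h = self.encoder(x)
        x_hat = self.decoder(h)
        return h, x_hat

def ae_loss(x_hat, x):
    """Reconstruction loss of Equation (3): half squared error, averaged over the mini-batch."""
    return 0.5 * ((x_hat - x) ** 2).sum(dim=1).mean()
```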

2.2. Stacked Autoencoder

The SAE is a supervised algorithm and consists of a stacked AE and a classifier. The AE extracts the hierarchical feature layer by layer, and the classifier maps the final abstract feature to the output. The SAE usually has a symmetric structure and includes encoding and decoding. However, the decoding process is often removed, and the final feature representation is used for classification and regression tasks. Suppose the training samples are $\{X, Y\} = \{(x_i, y_i)\}_{i=1}^{N}$, the network structure is $d\text{-}L_1\text{-}L_2\text{-}\cdots\text{-}L_k\text{-}m$ and the activation function is $g(\cdot)$. The network structure of the SAE is shown in Figure 2.
The SAE needs pre-training and fine-tuning to train the network. The pre-training stage determines the initial network parameters and extracts abstract features through the greedy layer-wise training of the AE. The fine-tuning stage computes the classification error and optimizes the network parameters.
In pre-training, the output $H_i$ of the $i$-th hidden layer serves as the input of the next AE, and the input weights $W_{i+1}$ and bias $b_{i+1}$ of the $(i+1)$-th hidden layer are obtained from the trained AE. $H_k$ is the final extracted feature, and it is used as the input of the classifier to compute the output weights $W$ and complete the classification mapping.
In fine-tuning, the SAE calculates the classification error of the training samples and backpropagates the error to optimize the network parameters with the gradient descent algorithm. When the cross-entropy error is used, the loss function of the SAE is expressed as follows:
$$J_{SAE} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{m} p_{ic}\log(\hat{y}_{ic}) \tag{4}$$
where $\hat{y}_{ic}$ is the predicted probability that the $i$-th sample belongs to class $c$, and $p_{ic}$ is the corresponding indicator: if the true label of the $i$-th sample is $c$, $p_{ic} = 1$; otherwise, $p_{ic} = 0$.
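The stacking and fine-tuning described above can be sketched as follows, reusing the AE class from the previous sketch; build_sae and finetune_step are hypothetical helper names, and using nn.CrossEntropyLoss (which applies log-softmax internally to raw logits) is one common way to realize the loss of Equation (4).

```python
import torch.nn as nn

def build_sae(pretrained_encoders, feature_dim, num_classes):
    """Stack the encoder parts of the layer-wise pre-trained AEs and add a classifier head."""
    layers = [ae.encoder for ae in pretrained_encoders]   # keeps the pre-trained W_i, b_i
    layers.append(nn.Linear(feature_dim, num_classes))    # output weights W (raw logits)
    return nn.Sequential(*layers)

def finetune_step(model, optimizer, x, y, criterion=nn.CrossEntropyLoss()):
    """One fine-tuning step: cross-entropy loss of Equation (4) backpropagated through all layers."""
    optimizer.zero_grad()
    loss = criterion(model(x), y)   # y holds integer class indices
    loss.backward()
    optimizer.step()
    return loss.item()
```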

3. Semi-Supervised Stacked Autoencoder Based on the Pseudo Label

3.1. Pseudo Label

The traditional SAE can extract abstract features from labeled samples and complete prediction or classification. However, the supervised learning of the SAE is only applicable to labeled samples, and unlabeled samples in the training data cannot be effectively utilized. The SAE is unable to use the feature and category information from unlabeled samples, and this limits its generalization performance for semi-supervised tasks. Therefore, this study proposes the PL-SSAE, a new semi-supervised SAE, by introducing the pseudo-labeling method into the SAE. The PL-SSAE uses unlabeled samples for feature extraction and classification by generating pseudo labels and adding a regularization loss. The PL-SSAE makes full use of the feature and category information contained in unlabeled samples to improve the generalization performance of the SAE for semi-supervised problems. Compared with the SAE, the innovations of the PL-SSAE are the pre-training on the unlabeled samples, the generation of the pseudo labels, and the construction of the pseudo-label regularization.
As a new approach to semi-supervised learning, the pseudo-labeling method aims to employ the network trained on labeled samples to predict unlabeled samples. Based on the clustering hypothesis, the most probable results are utilized as the pseudo labels for the unlabeled samples and the network is retrained with the pseudo-labeled samples. Thus, the application of the pseudo-labeling method requires three steps. The first step is to train the network with labeled samples. The second step is to predict the class of unlabeled samples and generate the pseudo label. The third step is to retrain the network with labeled and pseudo-labeled samples. The pseudo-labeling method represents both the labeling of the unlabeled samples and the semi-supervised training process of the network. Compared with other semi-supervised learning methods, the pseudo-labeling method can effectively exploit the category information contained in the unlabeled samples and improve the semi-supervised prediction and classification performance.
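A minimal sketch of the second step, assuming a model already fine-tuned on the labeled samples; generate_pseudo_labels is a hypothetical helper name, and the argmax over the class outputs anticipates the later Equation (7).

```python
import torch

@torch.no_grad()
def generate_pseudo_labels(model, unlabeled_x):
    """Predict the unlabeled samples and keep the most probable class as the pseudo label."""
    model.eval()
    logits = model(unlabeled_x)
    return logits.argmax(dim=1)   # class with the highest predicted probability
```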

3.2. Network Structure

According to the requirements of the pseudo-labeling method and the SAE, the PL-SSAE divides the first step of the pseudo-labeling method into unsupervised pre-training and supervised fine-tuning. Suppose that the labeled samples are $\{X_l, Y_l\}$, the unlabeled samples are $X_u$, the network structure is $d\text{-}L_1\text{-}L_2\text{-}\cdots\text{-}L_k\text{-}m$ and the activation function is $g(\cdot)$. The network framework of the PL-SSAE is shown in Figure 3. The PL-SSAE consists of four stages: unsupervised pre-training, supervised fine-tuning, pseudo-label generation and semi-supervised fine-tuning.
In the unsupervised pre-training, similar to the SAE, the PL-SSAE trains the AE to assign the network parameters layer by layer. However, unlike the SAE, the PL-SSAE uses both labeled and unlabeled samples in pre-training to fully exploit the feature information contained in the unlabeled samples. Meanwhile, using all samples at this stage avoids the repeated pre-training of pseudo-labeled samples and reduces the computational complexity. The relationship between the outputs of successive hidden layers is expressed as follows:
$$H_i = \begin{cases} g(W_1 X + b_1), & i = 1 \\ g(W_i H_{i-1} + b_i), & 1 < i \le k \end{cases} \tag{5}$$
where $W_i$ and $b_i$ are the input weights and bias of the $i$-th hidden layer, respectively, and $X = \{X_l, X_u\}$ is the set of labeled and unlabeled samples. Through the greedy layer-wise pre-training, the PL-SSAE obtains the connection weights and biases of all hidden layers to achieve the assignment of the network parameters.
In the supervised fine-tuning, the PL-SSAE calculates the classification loss of the labeled samples and optimizes the parameters of the pre-trained network. For the labeled samples $X_l$, the PL-SSAE obtains their predicted labels $\hat{Y}_l$ through feature extraction and classification. The classification loss between the predicted labels and the true labels is calculated by Equation (4), and the connection weights $\{W_i\}_{i=1}^{k}$ and biases $\{b_i\}_{i=1}^{k}$ of each hidden layer are adjusted by the stochastic gradient descent algorithm to determine the mapping function. The mapping function from the samples to the labels is formulated as follows:
$$f: f(X) = W\, g(W_k g(\cdots(W_2 g(W_1 X + b_1) + b_2)\cdots) + b_k) \tag{6}$$
In the pseudo-label generation, the PL-SSAE predicts the labels and determines the pseudo labels of the unlabeled samples with the network obtained from the supervised fine-tuning. For the unlabeled samples $\{x_i\}_{i=1}^{N}$, their prediction probabilities over the different classes $\{y_{ij}\}_{j=1}^{m}$ are calculated through forward propagation and label mapping. The label with the highest prediction probability is taken as the pseudo label of each unlabeled sample by the following formula:
$$y_i = \arg\max_{1 \le j \le m} y_{ij}, \quad i = 1, 2, \ldots, N \tag{7}$$
In the semi-supervised fine-tuning, the PL-SSAE inputs the labeled samples $\{X_l, Y_l\}$ and the pseudo-labeled samples $\{X_u, Y_u\}$ into the network and computes the classification loss to optimize the network parameters. Since the pseudo labels are not necessarily the true labels of the unlabeled samples, the PL-SSAE introduces a regularization parameter to keep the loss balance between the labeled and pseudo-labeled samples. Using the cross-entropy function as the measure of the classification loss, the classification loss of the labeled samples $J_l$, the classification loss of the pseudo-labeled samples $J_u$ and the total loss of the network $J_{PL\text{-}SSAE}$ are expressed as follows:
$$J_l = -\frac{1}{N_l}\sum_{i=1}^{N_l}\sum_{c=1}^{m} p_{ic}^{l}\log(\hat{y}_{ic}^{l}) \tag{8}$$
$$J_u = -\frac{1}{N_u}\sum_{j=1}^{N_u}\sum_{c=1}^{m} p_{jc}^{u}\log(\hat{y}_{jc}^{u}) \tag{9}$$
$$J_{PL\text{-}SSAE} = J_l + \lambda J_u \tag{10}$$
where $N_l$ and $N_u$ are the numbers of labeled and unlabeled samples, respectively, $\hat{y}_{ic}^{l}$ and $\hat{y}_{jc}^{u}$ are the prediction probabilities of the $i$-th labeled sample and the $j$-th pseudo-labeled sample belonging to class $c$, and $p_{ic}^{l}$ and $p_{jc}^{u}$ are the corresponding indicators. If the true label of the $i$-th labeled sample is $c$, $p_{ic}^{l} = 1$; otherwise, $p_{ic}^{l} = 0$. If the pseudo label of the $j$-th pseudo-labeled sample is $c$, $p_{jc}^{u} = 1$; otherwise, $p_{jc}^{u} = 0$. The classification loss of the pseudo-labeled samples acts as the regularization term in the loss function to prevent over-fitting. By optimizing the network parameters, the network loss is gradually reduced, and the PL-SSAE classifies the labeled and pseudo-labeled samples more accurately.
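The loss of Equations (8)–(10) can be written compactly as below; this is a sketch that assumes the network outputs raw logits and uses F.cross_entropy to realize the per-sample cross-entropy terms, with lam standing for the regularization parameter λ.

```python
import torch.nn.functional as F

def pl_ssae_loss(logits_l, y_l, logits_u, y_pseudo, lam=0.5):
    """Total loss of Equation (10): labeled loss plus lambda-weighted pseudo-label loss."""
    loss_l = F.cross_entropy(logits_l, y_l)        # Equation (8), labeled samples
    loss_u = F.cross_entropy(logits_u, y_pseudo)   # Equation (9), pseudo-labeled samples
    return loss_l + lam * loss_u                   # Equation (10)
```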

3.3. Training Process

According to the network framework, the training process of the PL-SSAE consists of four stages: unsupervised pre-training, supervised fine-tuning, pseudo-label generation and semi-supervised fine-tuning. In the unsupervised pre-training stage, the network parameters are initialized by greedy layer-wise training on the labeled and unlabeled samples. In the supervised fine-tuning stage, the classification loss of the labeled samples is calculated, and the network parameters are optimized by the stochastic gradient descent algorithm. In the pseudo-label generation stage, the trained network predicts the class of the unlabeled samples and assigns pseudo labels to them. In the semi-supervised fine-tuning stage, the classification loss of the labeled and pseudo-labeled samples is computed to adjust the network parameters and complete the network training. Algorithm 1 presents the training details of the PL-SSAE.
Algorithm 1: Training process of the PL-SSAE.
Input: The labeled samples $\{X_l, Y_l\}$, the unlabeled samples $X_u$, the numbers of hidden nodes $\{L_i\}_{i=1}^{k}$, the regularization parameter $\lambda$, the mini-batch size $s$, the number of iterations $t$, the learning rate $\alpha$, and the activation function $g(\cdot)$
Output: The mapping function $f: \mathbb{R}^d \to \mathbb{R}^m$.
The unsupervised pre-training
1:  for $i = 1$ to $k$ do
2:    if $i = 1$
3:      Let $X = \{X_l, X_u\}$ be the input and output of the first AE
4:    else
5:      Let $H_{i-1}$ be the input and output of the $i$-th AE
6:    Randomly initialize the network parameters of the $i$-th AE
7:    for $j = 1$ to $t$ do
8:      Obtain mini-batch samples $\{x_r\}_{r=1}^{s}$ from the input samples
9:      Compute the hidden output $H_i$ of the AE by Equation (1)
10:     Calculate the reconstructed samples $\{\hat{x}_r\}_{r=1}^{s}$ by Equation (2)
11:     Compute the reconstruction loss of the AE by Equation (3)
12:     Update the network parameters based on the stochastic gradient descent algorithm
13:    end for
14:    Assign the network parameters $\{W_i, b_i\}$ of the $i$-th AE to the $i$-th hidden layer
15:    Calculate the output $H_i$ of the $i$-th hidden layer by Equation (5)
16:  end for
The supervised fine-tuning
17:  Input the labeled samples $\{X_l, Y_l\}$ into the network
18:  for $j = 1$ to $t$ do
19:    Obtain mini-batch samples $\{x_r\}_{r=1}^{s}$ from the input samples
20:    Predict the labels $\{\hat{y}_r\}_{r=1}^{s}$ of the mini-batch samples by Equation (6)
21:    Calculate the classification loss by Equation (4)
22:    Update the network parameters $\{W_i, b_i\}_{i=1}^{k}$ and $W$ based on the stochastic gradient descent algorithm
23:  end for
The pseudo-label generation
24:  Input the unlabeled samples $X_u$ into the network
25:  Compute the class prediction of the unlabeled samples by Equation (6)
26:  Generate the pseudo labels $Y_u$ of the unlabeled samples by Equation (7)
The semi-supervised fine-tuning
27:  Input the labeled samples $\{X_l, Y_l\}$ and the pseudo-labeled samples $\{X_u, Y_u\}$ into the network
28:  for $j = 1$ to $t$ do
29:    Obtain mini-batch samples $\{x_r\}_{r=1}^{s}$ from the input samples
30:    Compute the class prediction of the input samples by Equation (6)
31:    Calculate the total classification loss $J_{PL\text{-}SSAE}$ by Equations (8)–(10)
32:    Update the network parameters $\{W_i, b_i\}_{i=1}^{k}$ and $W$
33:  end for
34:  return the mapping function $f: f(X) = W\, g(W_k g(\cdots(W_2 g(W_1 X + b_1) + b_2)\cdots) + b_k)$
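The following PyTorch sketch walks through the four stages of Algorithm 1. It assumes the AE, ae_loss, build_sae, generate_pseudo_labels and pl_ssae_loss helpers from the earlier sketches are in scope; the optimizer choice, data loading and batch-pairing strategy are illustrative assumptions rather than details prescribed by the paper.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def train_pl_ssae(x_l, y_l, x_u, hidden_dims, num_classes,
                  lam=0.5, batch_size=100, epochs=100, lr=0.01):
    """Four stages of Algorithm 1; y_l holds integer class indices."""
    x_all = torch.cat([x_l, x_u], dim=0)

    # Stage 1: unsupervised greedy layer-wise pre-training on all samples (lines 1-16).
    encoders, inputs, in_dim = [], x_all, x_all.shape[1]
    for h_dim in hidden_dims:
        ae = AE(in_dim, h_dim, activation=torch.nn.ReLU())
        opt = torch.optim.SGD(ae.parameters(), lr=lr)
        loader = DataLoader(TensorDataset(inputs), batch_size=batch_size, shuffle=True)
        for _ in range(epochs):
            for (xb,) in loader:
                _, xb_hat = ae(xb)
                loss = ae_loss(xb_hat, xb)          # Equation (3)
                opt.zero_grad(); loss.backward(); opt.step()
        encoders.append(ae)
        with torch.no_grad():
            inputs = ae.encoder(inputs)             # H_i becomes the input of the next AE
        in_dim = h_dim

    # Stage 2: supervised fine-tuning on the labeled samples (lines 17-23).
    model = build_sae(encoders, hidden_dims[-1], num_classes)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    ce = torch.nn.CrossEntropyLoss()
    loader_l = DataLoader(TensorDataset(x_l, y_l), batch_size=batch_size, shuffle=True)
    for _ in range(epochs):
        for xb, yb in loader_l:
            loss = ce(model(xb), yb)                # Equation (4)
            opt.zero_grad(); loss.backward(); opt.step()

    # Stage 3: pseudo-label generation for the unlabeled samples (lines 24-26).
    y_pseudo = generate_pseudo_labels(model, x_u)   # Equation (7)

    # Stage 4: semi-supervised fine-tuning on labeled and pseudo-labeled samples (lines 27-33).
    loader_u = DataLoader(TensorDataset(x_u, y_pseudo), batch_size=batch_size, shuffle=True)
    model.train()
    for _ in range(epochs):
        # zip pairs labeled and pseudo-labeled mini-batches; it stops at the shorter loader.
        for (xb_l, yb_l), (xb_u, yb_u) in zip(loader_l, loader_u):
            loss = pl_ssae_loss(model(xb_l), yb_l, model(xb_u), yb_u, lam)  # Equations (8)-(10)
            opt.zero_grad(); loss.backward(); opt.step()

    return model
```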

4. Experiments

To verify the semi-supervised classification performance of the proposed PL-SSAE, the following evaluations were designed and carried out:
Experiment 1: Influence of different hyperparameters. Observe the accuracy change in the PL-SSAE with a variable regularization parameter, variable percentage of labeled samples and variable number of hidden nodes, then analyze their influence on the classification performance of the PL-SSAE.
Experiment 2: Comparison of semi-supervised classification. Record the classification accuracy of the SAE, SSAE, Semi-SAE, Semi-SSAE and PL-SSAE with different percentages of labeled samples and compare the semi-supervised learning capability of different algorithms.
Experiment 3: Comparison of comprehensive performance. Observe the accuracy, precision, F1-measure, G-mean, training time and testing time of the SAE, SSAE, Semi-SAE, Semi-SSAE and PL-SSAE to compare their generalization performance and computational complexity.

4.1. Experimental Settings

4.1.1. Data Description

Various benchmark datasets used in the evaluations are Rectangles, Convex, USPS [28], MNIST [29] and Fashion-MNIST [30]. The datasets are taken from the UCI Machine Learning Repository [31] and have been normalized to [0, 1]. Details of the benchmark datasets are shown in Table 1.

4.1.2. Implementation Details

All evaluations were carried out in PyTorch 1.9, running on a desktop with a 3.6 GHz Intel 12700K CPU, an Nvidia RTX 3090 GPU, 32 GB RAM and a 2 TB hard disk. To avoid uncertainty and ensure a fair comparison, all reported results are the averages of 20 repeated experiments, and the same network structure is utilized for the different algorithms. The network structure of each algorithm used in Experiments 2 and 3 is shown in Table 2.
The experimental details of Experiment 1 are as follows: The dataset is MNIST, the batch size is 100, the number of iterations is 100, the learning rate is 0.01 and the activation function is the ReLU function. Suppose that the parameter $p$ represents the percentage of labeled samples in the training data. When changing the regularization parameter and the percentage of labeled samples, the network structure is 784-300-200-100-10, the range of the regularization parameter is $\lambda \in \{0, 0.1, 0.2, \ldots, 1\}$ and the range of the label percentage is $p \in \{5, 10, 15, \ldots, 50\}$. When changing the number of hidden nodes, the network structure is $784\text{-}L_1\text{-}L_2\text{-}10$, the range of $L_1$ is $L_1 \in \{100, 200, \ldots, 900, 1000\}$, the range of $L_2$ is $L_2 \in \{100, 200, \ldots, 900, 1000\}$, the regularization parameter is $\lambda = 0.5$ and the label percentage is $p = 20$.
The experimental details for Experiment 2 are as follows: The datasets are Convex, USPS, MNIST and Fashion-MNIST. The batch size is 100, the number of iterations is 100, the learning rate is 0.01, the activation function is the ReLU function and the range of the label percentage is $p \in \{5, 10, 15, \ldots, 50\}$. The sparsity parameter of the SSAE and Semi-SSAE is $\rho = 0.05$, and the regularization parameter of the PL-SSAE is $\lambda = 0.5$.
The experimental details for Experiment 3 are as follows: The batch size is 100, the number of iterations is 100, the learning rate is 0.01, the activation function is the ReLU function, and the range of the label percentage is $p \in \{5, 10, 15, 20\}$. The sparsity parameter of the SSAE and Semi-SSAE is $\rho = 0.05$ and the regularization parameter of the PL-SSAE is $\lambda = 0.5$. For multiclass classification tasks, the precision, F1-measure and G-mean are the averages over the different classes.
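As a sketch of how the reported metrics might be computed with scikit-learn: accuracy, macro-averaged precision and F1-measure, and a G-mean taken as the geometric mean of the per-class recalls, which is a common definition assumed here since the paper does not spell out its formula.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, f1_score, recall_score

def evaluate(y_true, y_pred):
    """Accuracy, macro precision, macro F1 and a G-mean built from the per-class recalls."""
    per_class_recall = recall_score(y_true, y_pred, average=None, zero_division=0)
    return {
        "accuracy":  accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, average="macro", zero_division=0),
        "f1":        f1_score(y_true, y_pred, average="macro", zero_division=0),
        "g_mean":    float(np.prod(per_class_recall) ** (1.0 / len(per_class_recall))),
    }
```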

4.2. Influence of Different Hyperparameters

As predetermined parameters of the network, the hyperparameters affect the semi-supervised learning and classification performance of the PL-SSAE. The regularization parameter, the percentage of labeled samples and the number of hidden nodes are important hyperparameters for the PL-SSAE. The regularization parameter controls the balance between the empirical loss and the regularization loss. The percentage of labeled samples determines the number of labeled and pseudo-labeled samples. The number of hidden nodes controls the structural complexity and fitting ability of the network. To analyze the specific influence of different hyperparameters, a variable regularization parameter, a variable percentage of labeled samples and a variable number of hidden nodes are utilized to observe the accuracy change in the PL-SSAE. The generalization performance of the PL-SSAE with different regularization parameters and label percentages is shown in Figure 4. The generalization performance and training time of the PL-SSAE with different numbers of hidden nodes are shown in Figure 5.
As shown in Figure 4, the semi-supervised classification performance of the PL-SSAE varies with the regularization parameter and the percentage of labeled samples. When the label percentage $p$ is fixed, the classification accuracy of the PL-SSAE first increases and then decreases as the regularization parameter $\lambda$ increases. When the regularization parameter $\lambda$ is fixed, the classification accuracy increases as the label percentage $p$ increases. This is because the regularization parameter $\lambda$ controls the weight of the pseudo-label loss in the loss function. A proper regularization parameter $\lambda$ allows the PL-SSAE to exploit the feature and category information contained in the unlabeled samples to improve its semi-supervised learning. However, an excessively large $\lambda$ causes the PL-SSAE to ignore the labeled samples, and the difference between the pseudo labels and the true labels then leads to under-fitting. Therefore, it is important to choose an appropriate regularization parameter for different datasets. However, the trial-and-error method used for regularization parameter selection in the PL-SSAE is time-consuming and inefficient. Meanwhile, the labeled samples are the prior knowledge of the network. As the label percentage $p$ increases, the number of labeled samples in the training data grows, and the additional category information improves the generalization performance of the network.
As is shown in Figure 5, the classification accuracy and training time of the PL-SSAE vary with the number of hidden nodes. As the number of hidden nodes increases, the generalization performance of the PL-SSAE increases and then decreases. The reason is that the hidden nodes control the function approximation ability of the network. As the number of hidden nodes increases, the generated pseudo labels are closer to the true labels and more category information contained in pseudo-labeled samples improves the semi-supervised learning of the PL-SSAE. However, too many hidden nodes will lead to the over-fitting of the network, and the difference between the training and testing samples will cause the classification accuracy to decrease. In addition, the training time of the PL-SSAE increases with the increase in hidden nodes. This is because the number of hidden nodes is positively correlated with the computational complexity of the network. When the computational power is fixed, the increase in the computational complexity leads to an increase in the training time.

4.3. Comparison of Semi-Supervised Classification

The semi-supervised classification performance is a direct reflection of the ability to learn from unlabeled training samples. To evaluate the semi-supervised classification performance of different algorithms, it is necessary to adopt different percentages of labeled samples, then record the accuracy change on the testing samples and plot the accuracy curves. The experiment in this section focuses on comparing the PL-SSAE with the SAE, SSAE, Semi-SAE and Semi-SSAE. The variation in classification accuracy of each algorithm on datasets with different label percentages is shown in Figure 6.
As shown in Figure 6, the semi-supervised classification performance of the PL-SSAE outperforms that of the SAE, SSAE, Semi-SAE and Semi-SSAE on the different datasets. As the label percentage increases, the number of labeled training samples increases. Thus, more label information is exploited to learn the function mapping, and the generalization performance of each algorithm gradually increases. The classification accuracy of the PL-SSAE is higher than that of the other algorithms at all label percentages. The reason is that the PL-SSAE is an effective semi-supervised algorithm. Compared with the supervised SAE and SSAE, the PL-SSAE uses the feature information and category information of the unlabeled samples to make the learned mapping function closer to the real mapping. Compared with the Semi-SAE and Semi-SSAE, the PL-SSAE not only utilizes the unlabeled samples for feature extraction but also exploits the pseudo-label information for classification mapping. The advantage of the PL-SSAE in semi-supervised classification becomes more apparent when the percentage of labeled samples is small. However, when there are sufficient labeled samples, the performance advantage of the PL-SSAE diminishes, and the inconsistency between the pseudo labels and the true labels can reduce its generalization performance.

4.4. Comparison of Comprehensive Performance

To test the comprehensive performance of the PL-SSAE, all benchmark datasets mentioned above are used to compare the PL-SSAE with the SAE, SSAE, Semi-SAE and Semi-SSAE. Different metrics, such as accuracy, precision, F1-measure and G-mean, of each algorithm with different label percentages are recorded to evaluate the semi-supervised performance. The training and testing times of each algorithm are recorded to compare the computational complexity. The classification accuracy, precision, F1-measure, G-mean, training time and testing time of each algorithm are shown in Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8, respectively (the numbers in bold indicate the best results). Since the experimental results are the averages of repeated experiments, the standard deviation of the results is listed after the average to reflect the performance stability of the algorithm.
As shown in Table 3, Table 4, Table 5 and Table 6, the comprehensive performance of the PL-SSAE is better than that of the SAE, SSAE, Semi-SAE and Semi-SSAE. For each dataset, the PL-SSAE has higher classification accuracy, precision, F1-measure and G-mean than the other algorithms at different label percentages. The reason is that the SAE and SSAE do not use unlabeled samples in the training process, and the Semi-SAE and Semi-SSAE only use unlabeled samples in the feature extraction process. The PL-SSAE introduces the pseudo label and makes appropriate use of the labeled samples to generate the pseudo labels of the unlabeled samples. The category information contained in the pseudo-labeled samples guides the feature extraction and class mapping of the network, and this improves the semi-supervised learning and classification performance of the PL-SSAE. Moreover, the PL-SSAE integrates the pseudo-label regularization into the loss function. The balance between the classification losses of the labeled and pseudo-labeled samples avoids over-fitting and improves the generalization performance.
As shown in Table 7 and Table 8, the training time of the PL-SSAE is noticeably higher than that of the SAE, SSAE, Semi-SAE and Semi-SSAE, while the testing time of each algorithm is essentially the same. The PL-SSAE requires an additional fine-tuning pass over the pseudo-labeled samples. As a result, its computational complexity and training time are roughly twice those of the Semi-SAE and Semi-SSAE. However, given the improvement in generalization performance, the increase in training time of the PL-SSAE is worthwhile. In the comparison of testing speed, the testing time is related to the sample size and network structure. Therefore, different algorithms with the same testing samples and network structure have the same testing speed.

5. Conclusions

To overcome the limitations of traditional SAE for unlabeled samples, this study integrates the pseudo label into the SAE and proposes a new semi-supervised SAE called PL-SSAE. The PL-SSAE assigns the pseudo labels to the unlabeled samples by the network trained on the labeled samples and adds a pseudo-label regularization term to the loss function. Different from the SAE, the PL-SSAE exploits the feature and category information contained in the unlabeled samples to guide the feature extraction and classification of the network. Various evaluations on different datasets show that the PL-SSAE outperforms the SAE, SSAE, Semi-SAE and Semi-SSAE.
However, the different hyperparameters of the PL-SSAE in this study are determined by the time-consuming trial-and-error method. Thus, it is important to combine the PL-SSAE with the particle swarm optimization algorithm [32] or the ant colony algorithm [33] to achieve automatic optimization of the hyperparameters. In addition, the PL-SSAE only determines the pseudo labels by taking the maximum value of the prediction probabilities. This method tends to introduce noise. Therefore, a more effective method needs to be investigated to further generate more reasonable pseudo labels.

Author Contributions

Conceptualization, J.L. and X.W.; Methodology, J.L.; investigation, J.L. and Q.X.; writing—original draft preparation, J.L.; writing—review and editing, Y.S. and W.Q. All authors have read and agreed to the published version of the manuscript.

Funding

The research leading to these results has received funding from the National Natural Science Foundation of China (61876189, 61273275, 61806219 and 61703426) and the Natural Science Basic Research Plan in Shaanxi Province (No. 2021JM-226).

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hinton, G.E.; Salakhutdinov, R.R. Reducing the dimensionality of data with neural networks. Science 2006, 313, 504–507. [Google Scholar] [CrossRef] [PubMed]
  2. Bengio, Y.; Lamblin, P.; Popovici, D. Greedy layer-wise training of deep networks. Adv. Neural Inf. Process. Syst. 2007, 19, 153–160. [Google Scholar]
  3. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117. [Google Scholar] [CrossRef] [PubMed]
  4. Bengio, Y.; Courville, A.; Vincent, P. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1798–1828. [Google Scholar] [CrossRef] [PubMed]
  5. Shao, H.D.; Xia, M.; Wan, J.F. Modified stacked autoencoder using adaptive Morlet wavelet for intelligent fault diagnosis of rotating machinery. IEEE/ASME Trans. Mechatron. 2021, 27, 24–33. [Google Scholar] [CrossRef]
  6. Jia, N.; Cheng, Y.; Liu, Y.Y.; Tian, Y.Y. Intelligent fault diagnosis of rotating machines based on wavelet time-frequency diagram and optimized stacked denoising auto-encoder. IEEE Sens. J. 2022, 22, 17139–17150. [Google Scholar] [CrossRef]
  7. Wang, S.D.; Lin, B.Y.; Zhang, Y.Y.; Qiao, S.B. SGAEMDA: Predicting miRNA-disease associations based on stacked graph autoencoder. Cells 2022, 11, 3984. [Google Scholar] [CrossRef]
  8. Wang, L.; You, Z.H.; Li, J.Q.; Hiang, Y.A. IMS-CDA: Prediction of circRNA-disease associations from the integration of multisource similarity information with deep stacked autoencoder model. IEEE Trans. Cybern. 2020, 51, 5522–5531. [Google Scholar] [CrossRef]
  9. Dao, T.N.; Lee, H.J. Stacked autoencoder-based probabilistic feature extraction for on-device network intrusion detection. IEEE Internet Things J. 2021, 9, 14438–14451. [Google Scholar] [CrossRef]
  10. Karthic, S.; Kumar, S.M. Wireless intrusion detection based on optimized LSTM with stacked auto encoder network. Intell. Autom. Soft Comput. 2022, 34, 439–453. [Google Scholar] [CrossRef]
  11. Vincent, P.; Larochelle, H.; Lajoie, I.; Bengio, Y.; Manzagol, P.A. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res. 2010, 11, 3371–3408. [Google Scholar]
  12. Ng, A. Sparse Autoencoder. Available online: http://graphics.stanford.edu/courses/cs233-21-spring/ReferencedPapers/SAE.pdf (accessed on 25 April 2023).
  13. Masci, J.; Meier, U.; Cireşan, D. Stacked convolutional auto-encoders for hierarchical feature extraction. In Proceedings of the International Conference on Artificial Neural Networks, Espoo, Finland, 14–17 June 2011; pp. 52–59. [Google Scholar]
  14. Tang, C.F.; Luktarhan, N.; Zhao, Y.X. SAAE-DNN: Deep learning method on intrusion detection. Symmetry 2020, 12, 1695. [Google Scholar] [CrossRef]
  15. Tawfik, N.; Elnemr, H.A.; Fakhr, M.; Dessouky, M.I. Multimodal medical image fusion using stacked auto-encoder in NSCT domain. J. Digit. Imaging 2022, 35, 1308–1325. [Google Scholar] [CrossRef] [PubMed]
  16. Yang, D.S.; Qin, J.; Pang, Y.H.; Huang, T.W. A novel double-stacked autoencoder for power transformers DGA signals with an imbalanced data structure. IEEE Trans. Ind. Electron. 2021, 69, 1977–1987. [Google Scholar] [CrossRef]
  17. Chen, J.M.; Fan, S.S.; Yang, C.H.; Zhou, C.; Zhu, H.Q. Stacked maximal quality-driven autoencoder: Deep feature representation for soft analyzer and its application on industrial processes. Inf. Sci. 2022, 596, 280–303. [Google Scholar] [CrossRef]
  18. Liu, P.J.; Pan, F.C.; Zhou, X.F.; Li, S.; Zeng, P.Y. Dsa-PAML: A parallel automated machine learning system via dual-stacked autoencoder. Neural Comput. Appl. 2022, 34, 12985–13006. [Google Scholar] [CrossRef]
  19. Xu, J.H.; Zhou, W.; Chen, Z.B.; Ling, S.Y. Binocular rivalry oriented predictive autoencoding network for blind stereoscopic image quality measurement. IEEE Trans. Instrum. Meas. 2020, 70, 5001413. [Google Scholar] [CrossRef]
  20. Pourebrahim, Y.; Razzazi, F.; Sameti, H. Semi-supervised parallel shared encoders for speech emotion recognition. Digit. Signal Process. 2021, 118, 103205. [Google Scholar] [CrossRef]
  21. Peng, Z.; Tian, S.W.; Yu, L.; Zhang, D.Z.; Wu, W.D.; Zhou, S.F. Semi-supervised medical image classification with adaptive threshold pseudo-labeling and unreliable sample contrastive loss. Biomed. Signal Process. Control 2023, 79, 104142. [Google Scholar] [CrossRef]
  22. Protopapadakis, E.; Doulamis, A.; Doulamis, N.; Maltezos, E. Stacked autoencoders driven by semi-supervised learning for building extraction from near infrared remote sensing imagery. Remote Sens. 2021, 13, 371. [Google Scholar] [CrossRef]
  23. Aouedi, O.; Piamrat, K.; Bagadthey, D. Handling partially labeled network data: A semi-supervised approach using stacked sparse autoencoder. Comput. Netw. 2022, 207, 108742. [Google Scholar] [CrossRef]
  24. Xiao, Y.W.; Wu, J.; Lin, Z.L.; Zhao, X.D. A semi-supervised deep learning method based on stacked sparse auto-encoder for cancer prediction using RNA-seq data. Comput. Methods Programs Biomed. 2018, 166, 99–105. [Google Scholar] [CrossRef]
  25. Lee, D.H. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Proceedings of the International Conference on Machine Learning, Atlanta, Georgia, 13–21 June 2013; p. 896. [Google Scholar]
  26. Higuchi, Y.; Moritz, N.; Le Roux, J.; Hori, T. Momentum pseudo-labeling: Semi-supervised ASR with continuously improving pseudo-labels. IEEE J. Sel. Top. Signal Process. 2022, 16, 1424–1438. [Google Scholar] [CrossRef]
  27. Wang, J.X.; Ding, C.H.Q.; Chen, S.B.; He, G.G.; Luo, B. Semi-supervised remote sensing image semantic segmentation via consistency regularization and average update of pseudo-label. Remote Sens. 2020, 12, 3603. [Google Scholar] [CrossRef]
  28. Hull, J.J. A database for handwritten text recognition research. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16, 550–554. [Google Scholar] [CrossRef]
  29. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
  30. Xiao, H.; Rasul, K.; Vollgraf, R. Fashion-mnist: A Novel Image Dataset for Benchmarking Machine Learning Algorithms. arXiv 2017, arXiv:1708.07747. [Google Scholar]
  31. Blake, C.L.; Merz, C.J. UCI Repository of Machine Learning Databases. Available online: http://archive.ics.uci.edu/m (accessed on 10 May 2023).
  32. Xu, X.; Ren, W. A hybrid model of stacked autoencoder and modified particle swarm optimization for multivariate chaotic time series forecasting. Appl. Soft Comput. 2022, 116, 108321. [Google Scholar] [CrossRef]
  33. Abdelmaboud, A.; Al-Wesabi, F.N.; Al Duhayyim, M.; Eisa, T.A.E.; Hamza, M.A. Machine learning enabled e-learner non-verbal behavior detection in IoT environment. CMC-Comput. Mater. Contin. 2022, 72, 679–693. [Google Scholar] [CrossRef]
Figure 1. Network structure of the AE.
Figure 2. Network structure of the SAE.
Figure 3. Network framework of the PL-SSAE.
Figure 4. The influence of the regularization parameter and label percentage on the generalization performance.
Figure 5. The influence of the hidden nodes on (a) accuracy and (b) training time.
Figure 6. Comparison of the semi-supervised classification on (a) Convex, (b) USPS, (c) MNIST and (d) Fashion-MNIST datasets.
Table 1. The datasets used in the experiments.
Datasets | Attributes | Classes | Training Data | Testing Data
Rectangles | 784 | 2 | 1200 | 50,000
Convex | 784 | 2 | 8000 | 50,000
USPS | 256 | 10 | 7291 | 2007
MNIST | 784 | 10 | 60,000 | 10,000
Fashion-MNIST | 784 | 10 | 60,000 | 10,000
Table 2. The network structure of the SAE, SSAE, Semi-SAE, Semi-SSAE and PL-SSAE.
Datasets | Network Structure (identical for the SAE, SSAE, Semi-SAE, Semi-SSAE and PL-SSAE)
Rectangles | 784-200-100-2
Convex | 784-200-100-2
USPS | 256-200-100-10
MNIST | 784-400-200-100-10
Fashion-MNIST | 784-200-100-50-10
Table 3. The accuracy comparison of the SAE, SSAE, Semi-SAE, Semi-SSAE and PL-SSAE. Values are accuracy (%).
Datasets | Label Percentage | SAE | SSAE | Semi-SAE | Semi-SSAE | PL-SSAE
Rectangles | p = 5 | 62.06 ± 0.68 | 62.72 ± 0.72 | 62.85 ± 0.83 | 62.86 ± 0.52 | 64.65 ± 0.40
Rectangles | p = 10 | 64.41 ± 0.55 | 64.95 ± 0.72 | 65.24 ± 0.53 | 65.33 ± 0.38 | 66.24 ± 0.27
Rectangles | p = 15 | 71.61 ± 1.25 | 71.73 ± 1.17 | 72.12 ± 1.40 | 72.20 ± 1.12 | 73.17 ± 1.05
Rectangles | p = 20 | 74.28 ± 1.32 | 74.61 ± 1.05 | 74.95 ± 0.85 | 75.23 ± 1.03 | 76.01 ± 1.13
Convex | p = 5 | 59.50 ± 0.55 | 60.39 ± 0.60 | 60.38 ± 0.63 | 60.64 ± 0.60 | 61.32 ± 0.46
Convex | p = 10 | 63.82 ± 0.46 | 63.89 ± 0.67 | 63.93 ± 0.89 | 64.10 ± 0.51 | 64.76 ± 0.64
Convex | p = 15 | 65.67 ± 0.86 | 65.84 ± 0.81 | 65.74 ± 0.61 | 65.92 ± 0.49 | 66.51 ± 0.66
Convex | p = 20 | 66.01 ± 0.61 | 66.10 ± 0.54 | 66.54 ± 0.29 | 66.79 ± 0.42 | 67.27 ± 0.35
USPS | p = 5 | 88.24 ± 0.24 | 88.84 ± 0.35 | 89.40 ± 0.22 | 89.52 ± 0.10 | 90.25 ± 0.10
USPS | p = 10 | 90.59 ± 0.39 | 90.52 ± 0.32 | 90.54 ± 0.24 | 90.66 ± 0.23 | 91.62 ± 0.33
USPS | p = 15 | 91.75 ± 0.22 | 91.79 ± 0.20 | 91.83 ± 0.22 | 91.92 ± 0.06 | 92.32 ± 0.18
USPS | p = 20 | 92.04 ± 0.19 | 92.07 ± 0.13 | 92.28 ± 0.19 | 92.35 ± 0.11 | 92.88 ± 0.11
MNIST | p = 5 | 92.83 ± 0.20 | 93.41 ± 0.07 | 93.97 ± 0.17 | 94.11 ± 0.09 | 95.45 ± 0.12
MNIST | p = 10 | 94.77 ± 0.11 | 95.10 ± 0.12 | 95.34 ± 0.11 | 95.53 ± 0.10 | 96.49 ± 0.11
MNIST | p = 15 | 95.46 ± 0.07 | 95.89 ± 0.10 | 96.21 ± 0.15 | 96.29 ± 0.06 | 97.08 ± 0.08
MNIST | p = 20 | 95.98 ± 0.13 | 96.23 ± 0.10 | 96.57 ± 0.11 | 96.60 ± 0.11 | 97.23 ± 0.04
Fashion-MNIST | p = 5 | 77.41 ± 1.47 | 78.50 ± 1.26 | 80.01 ± 1.57 | 80.50 ± 1.33 | 82.44 ± 1.52
Fashion-MNIST | p = 10 | 81.37 ± 1.12 | 81.64 ± 0.97 | 82.03 ± 1.15 | 82.07 ± 0.91 | 83.33 ± 1.07
Fashion-MNIST | p = 15 | 82.28 ± 0.54 | 82.50 ± 0.57 | 82.69 ± 0.67 | 82.77 ± 0.61 | 83.60 ± 0.60
Fashion-MNIST | p = 20 | 82.54 ± 0.71 | 82.66 ± 0.68 | 82.78 ± 0.38 | 82.86 ± 0.41 | 83.82 ± 0.64
Results in bold are better than other algorithms.
Table 4. The precision comparison of the SAE, SSAE, Semi-SAE, Semi-SSAE and PL-SSAE. Values are precision (%).
Datasets | Label Percentage | SAE | SSAE | Semi-SAE | Semi-SSAE | PL-SSAE
Rectangles | p = 5 | 62.95 ± 1.36 | 63.30 ± 1.12 | 63.70 ± 1.09 | 63.89 ± 1.30 | 65.20 ± 0.64
Rectangles | p = 10 | 65.11 ± 1.49 | 65.10 ± 1.32 | 65.98 ± 0.95 | 65.67 ± 0.98 | 67.14 ± 0.78
Rectangles | p = 15 | 71.61 ± 1.01 | 71.05 ± 0.99 | 72.46 ± 1.29 | 72.31 ± 1.13 | 73.71 ± 1.36
Rectangles | p = 20 | 73.34 ± 1.80 | 73.92 ± 1.73 | 74.04 ± 1.15 | 74.10 ± 1.03 | 75.42 ± 1.03
Convex | p = 5 | 58.38 ± 1.05 | 58.96 ± 1.05 | 59.13 ± 1.00 | 59.19 ± 0.97 | 60.44 ± 0.82
Convex | p = 10 | 61.55 ± 0.77 | 61.58 ± 1.16 | 61.99 ± 1.07 | 62.10 ± 1.07 | 62.74 ± 1.38
Convex | p = 15 | 64.93 ± 0.95 | 64.94 ± 1.09 | 64.96 ± 0.82 | 65.12 ± 0.74 | 65.61 ± 0.53
Convex | p = 20 | 65.35 ± 0.98 | 65.95 ± 1.19 | 65.41 ± 1.21 | 66.03 ± 0.93 | 66.76 ± 0.98
USPS | p = 5 | 88.22 ± 0.26 | 88.51 ± 0.28 | 88.75 ± 0.24 | 88.86 ± 0.16 | 89.55 ± 0.06
USPS | p = 10 | 89.91 ± 0.43 | 89.87 ± 0.36 | 89.97 ± 0.28 | 90.07 ± 0.23 | 91.23 ± 0.39
USPS | p = 15 | 91.06 ± 0.28 | 91.12 ± 0.36 | 91.19 ± 0.21 | 91.35 ± 0.11 | 91.76 ± 0.26
USPS | p = 20 | 91.45 ± 0.23 | 91.53 ± 0.25 | 91.77 ± 0.21 | 91.85 ± 0.12 | 92.26 ± 0.18
MNIST | p = 5 | 92.92 ± 0.21 | 93.66 ± 0.07 | 93.93 ± 0.18 | 94.06 ± 0.10 | 95.38 ± 0.13
MNIST | p = 10 | 94.75 ± 0.12 | 95.06 ± 0.11 | 95.31 ± 0.10 | 95.51 ± 0.08 | 96.47 ± 0.12
MNIST | p = 15 | 95.46 ± 0.06 | 95.87 ± 0.10 | 96.19 ± 0.16 | 96.27 ± 0.06 | 97.05 ± 0.08
MNIST | p = 20 | 95.95 ± 0.14 | 96.20 ± 0.09 | 96.55 ± 0.11 | 96.57 ± 0.11 | 97.23 ± 0.05
Fashion-MNIST | p = 5 | 77.35 ± 1.56 | 78.08 ± 1.08 | 79.86 ± 1.36 | 80.35 ± 1.20 | 82.20 ± 1.04
Fashion-MNIST | p = 10 | 81.43 ± 1.30 | 82.00 ± 1.00 | 82.09 ± 1.44 | 82.37 ± 1.07 | 83.48 ± 1.10
Fashion-MNIST | p = 15 | 82.38 ± 0.65 | 82.55 ± 0.58 | 82.79 ± 0.56 | 82.94 ± 0.77 | 83.68 ± 0.79
Fashion-MNIST | p = 20 | 82.52 ± 0.70 | 82.74 ± 0.79 | 82.87 ± 0.58 | 82.91 ± 0.60 | 83.95 ± 0.75
Results in bold are better than other algorithms.
Table 5. The F1-measure comparison of the SAE, SSAE, Semi-SAE, Semi-SSAE and PL-SSAE. Values are F1-measure (%).
Datasets | Label Percentage | SAE | SSAE | Semi-SAE | Semi-SSAE | PL-SSAE
Rectangles | p = 5 | 60.48 ± 1.08 | 60.50 ± 0.86 | 60.73 ± 0.67 | 60.90 ± 0.72 | 62.85 ± 0.65
Rectangles | p = 10 | 63.59 ± 1.07 | 64.37 ± 1.04 | 64.59 ± 1.35 | 64.93 ± 1.28 | 66.19 ± 1.49
Rectangles | p = 15 | 71.16 ± 0.86 | 72.26 ± 1.07 | 72.16 ± 1.15 | 72.54 ± 0.98 | 73.62 ± 1.02
Rectangles | p = 20 | 74.79 ± 1.29 | 74.97 ± 1.31 | 75.40 ± 0.98 | 75.79 ± 1.38 | 76.56 ± 0.83
Convex | p = 5 | 62.17 ± 1.09 | 62.98 ± 1.48 | 62.99 ± 1.10 | 62.98 ± 1.21 | 63.70 ± 0.98
Convex | p = 10 | 65.83 ± 1.34 | 66.20 ± 1.22 | 66.06 ± 1.53 | 66.84 ± 1.21 | 67.50 ± 1.04
Convex | p = 15 | 66.69 ± 1.02 | 68.09 ± 1.34 | 67.84 ± 0.81 | 68.14 ± 0.96 | 68.89 ± 0.92
Convex | p = 20 | 67.85 ± 0.80 | 68.50 ± 0.89 | 68.94 ± 1.13 | 69.06 ± 0.76 | 69.74 ± 0.82
USPS | p = 5 | 87.22 ± 0.38 | 87.84 ± 0.78 | 88.37 ± 0.27 | 88.58 ± 0.13 | 89.28 ± 0.12
USPS | p = 10 | 89.77 ± 0.43 | 89.71 ± 0.36 | 89.75 ± 0.27 | 89.88 ± 0.24 | 90.97 ± 0.37
USPS | p = 15 | 91.00 ± 0.26 | 91.02 ± 0.25 | 91.10 ± 0.24 | 91.16 ± 0.08 | 91.56 ± 0.21
USPS | p = 20 | 91.30 ± 0.23 | 91.29 ± 0.20 | 91.57 ± 0.21 | 91.64 ± 0.16 | 92.20 ± 0.15
MNIST | p = 5 | 92.87 ± 0.20 | 93.63 ± 0.07 | 93.89 ± 0.18 | 94.04 ± 0.10 | 95.40 ± 0.12
MNIST | p = 10 | 94.71 ± 0.12 | 95.14 ± 0.11 | 95.29 ± 0.11 | 95.48 ± 0.10 | 96.42 ± 0.13
MNIST | p = 15 | 95.40 ± 0.06 | 95.86 ± 0.10 | 96.17 ± 0.16 | 96.26 ± 0.06 | 97.05 ± 0.08
MNIST | p = 20 | 95.93 ± 0.14 | 96.29 ± 0.10 | 96.54 ± 0.11 | 96.56 ± 0.11 | 97.22 ± 0.03
Fashion-MNIST | p = 5 | 76.37 ± 1.69 | 77.45 ± 1.13 | 79.17 ± 1.27 | 79.87 ± 1.44 | 81.44 ± 1.70
Fashion-MNIST | p = 10 | 81.02 ± 1.08 | 81.56 ± 1.04 | 81.64 ± 1.32 | 81.80 ± 0.63 | 82.22 ± 0.96
Fashion-MNIST | p = 15 | 81.96 ± 0.55 | 82.19 ± 0.65 | 82.34 ± 0.87 | 82.59 ± 0.91 | 83.01 ± 0.77
Fashion-MNIST | p = 20 | 82.17 ± 0.88 | 82.29 ± 0.46 | 82.69 ± 0.49 | 82.77 ± 0.36 | 83.76 ± 0.53
Results in bold are better than other algorithms.
Table 6. The G-mean comparison of the SAE, SSAE, Semi-SAE, Semi-SSAE and PL-SSAE. Values are G-mean (%).
Datasets | Label Percentage | SAE | SSAE | Semi-SAE | Semi-SSAE | PL-SSAE
Rectangles | p = 5 | 95.97 ± 0.12 | 96.42 ± 0.05 | 96.57 ± 0.10 | 96.66 ± 0.05 | 97.43 ± 0.07
Rectangles | p = 10 | 97.03 ± 0.07 | 97.29 ± 0.07 | 97.36 ± 0.06 | 97.47 ± 0.06 | 98.01 ± 0.08
Rectangles | p = 15 | 97.43 ± 0.04 | 97.65 ± 0.06 | 97.86 ± 0.09 | 97.91 ± 0.04 | 98.36 ± 0.04
Rectangles | p = 20 | 97.73 ± 0.07 | 97.94 ± 0.05 | 98.07 ± 0.07 | 98.08 ± 0.06 | 98.45 ± 0.02
Convex | p = 5 | 86.87 ± 0.70 | 87.52 ± 0.86 | 88.45 ± 0.94 | 88.74 ± 0.90 | 89.94 ± 0.94
Convex | p = 10 | 89.27 ± 0.67 | 89.43 ± 0.58 | 89.66 ± 0.69 | 89.68 ± 0.47 | 90.28 ± 0.91
Convex | p = 15 | 89.81 ± 0.32 | 89.96 ± 0.37 | 90.05 ± 0.40 | 90.11 ± 0.36 | 90.62 ± 0.48
Convex | p = 20 | 89.96 ± 0.43 | 90.36 ± 0.40 | 90.64 ± 0.23 | 90.76 ± 0.25 | 91.15 ± 0.37
USPS | p = 5 | 61.93 ± 0.57 | 62.30 ± 0.96 | 62.25 ± 1.10 | 62.38 ± 0.88 | 64.36 ± 0.23
USPS | p = 10 | 64.29 ± 0.48 | 64.87 ± 0.72 | 64.94 ± 0.59 | 65.27 ± 0.41 | 65.80 ± 0.38
USPS | p = 15 | 71.40 ± 1.07 | 71.53 ± 0.88 | 71.74 ± 0.87 | 71.89 ± 0.99 | 72.77 ± 1.21
USPS | p = 20 | 74.22 ± 1.33 | 74.54 ± 1.03 | 74.90 ± 1.09 | 75.13 ± 1.00 | 76.24 ± 0.97
MNIST | p = 5 | 58.64 ± 1.10 | 59.81 ± 1.01 | 59.73 ± 0.77 | 59.94 ± 0.75 | 60.47 ± 0.49
MNIST | p = 10 | 62.89 ± 0.61 | 62.92 ± 1.17 | 63.09 ± 1.04 | 63.29 ± 0.97 | 64.10 ± 1.04
MNIST | p = 15 | 65.05 ± 1.08 | 65.27 ± 0.86 | 65.23 ± 0.68 | 65.34 ± 1.10 | 66.39 ± 0.87
MNIST | p = 20 | 65.69 ± 0.78 | 65.69 ± 0.83 | 65.98 ± 0.55 | 66.24 ± 0.48 | 66.81 ± 0.23
Fashion-MNIST | p = 5 | 92.64 ± 0.17 | 93.03 ± 0.18 | 93.37 ± 0.17 | 93.58 ± 0.05 | 93.89 ± 0.12
Fashion-MNIST | p = 10 | 94.27 ± 0.23 | 94.23 ± 0.16 | 94.24 ± 0.14 | 94.31 ± 0.16 | 94.89 ± 0.21
Fashion-MNIST | p = 15 | 94.99 ± 0.16 | 94.98 ± 0.11 | 95.03 ± 0.16 | 95.04 ± 0.05 | 95.32 ± 0.12
Fashion-MNIST | p = 20 | 95.11 ± 0.14 | 95.21 ± 0.12 | 95.24 ± 0.12 | 95.29 ± 0.10 | 95.72 ± 0.08
Results in bold are better than other algorithms.
Table 7. The training time of the SAE, SSAE, Semi-SAE, Semi-SSAE and PL-SSAE. Values are training time (s).
Datasets | Label Percentage | SAE | SSAE | Semi-SAE | Semi-SSAE | PL-SSAE
Rectangles | p = 5 | 4.307 ± 0.501 | 4.551 ± 1.131 | 5.530 ± 0.001 | 6.199 ± 0.876 | 11.833 ± 0.672
Rectangles | p = 10 | 4.697 ± 0.536 | 4.887 ± 0.066 | 5.695 ± 0.459 | 6.608 ± 0.911 | 12.321 ± 0.812
Rectangles | p = 15 | 4.989 ± 0.596 | 4.700 ± 0.020 | 5.965 ± 0.694 | 6.693 ± 0.679 | 12.432 ± 0.813
Rectangles | p = 20 | 5.136 ± 0.496 | 5.185 ± 0.499 | 6.288 ± 0.640 | 6.758 ± 0.378 | 13.065 ± 1.012
Convex | p = 5 | 5.790 ± 0.091 | 5.779 ± 0.476 | 17.188 ± 0.835 | 19.968 ± 0.747 | 37.163 ± 2.359
Convex | p = 10 | 7.635 ± 0.481 | 7.785 ± 0.440 | 19.104 ± 0.805 | 21.013 ± 0.719 | 39.100 ± 2.732
Convex | p = 15 | 9.367 ± 0.454 | 9.802 ± 0.729 | 20.317 ± 0.880 | 22.618 ± 0.983 | 41.352 ± 2.340
Convex | p = 20 | 11.757 ± 0.476 | 12.191 ± 0.975 | 22.115 ± 0.836 | 24.155 ± 0.910 | 43.395 ± 2.210
USPS | p = 5 | 1.793 ± 0.163 | 1.828 ± 0.157 | 9.151 ± 0.563 | 11.218 ± 0.350 | 20.096 ± 0.991
USPS | p = 10 | 3.156 ± 0.278 | 3.713 ± 0.709 | 10.136 ± 0.788 | 11.989 ± 1.230 | 20.803 ± 1.714
USPS | p = 15 | 4.689 ± 0.621 | 4.800 ± 0.930 | 11.104 ± 1.000 | 12.959 ± 0.794 | 21.658 ± 1.142
USPS | p = 20 | 5.999 ± 0.522 | 6.321 ± 1.148 | 11.951 ± 0.987 | 14.367 ± 0.796 | 22.060 ± 1.392
MNIST | p = 5 | 18.957 ± 0.896 | 17.142 ± 1.472 | 149.063 ± 2.258 | 172.881 ± 1.375 | 367.162 ± 2.683
MNIST | p = 10 | 36.686 ± 0.738 | 35.380 ± 1.557 | 160.620 ± 2.823 | 183.834 ± 1.573 | 384.812 ± 3.075
MNIST | p = 15 | 51.161 ± 1.554 | 54.447 ± 1.151 | 173.278 ± 2.472 | 194.655 ± 2.311 | 398.623 ± 3.822
MNIST | p = 20 | 79.166 ± 1.570 | 82.322 ± 1.539 | 186.880 ± 2.461 | 209.533 ± 2.135 | 434.780 ± 2.810
Fashion-MNIST | p = 5 | 16.120 ± 0.635 | 17.557 ± 0.902 | 149.418 ± 1.191 | 172.441 ± 0.942 | 368.544 ± 2.431
Fashion-MNIST | p = 10 | 33.148 ± 1.178 | 35.654 ± 1.837 | 160.116 ± 1.692 | 184.956 ± 1.280 | 380.945 ± 2.933
Fashion-MNIST | p = 15 | 49.904 ± 1.304 | 53.817 ± 1.138 | 171.385 ± 1.382 | 197.478 ± 1.815 | 397.639 ± 3.021
Fashion-MNIST | p = 20 | 77.729 ± 1.622 | 81.660 ± 1.800 | 189.076 ± 0.940 | 212.044 ± 2.509 | 437.919 ± 2.892
Table 8. The testing time of the SAE, SSAE, Semi-SAE, Semi-SSAE and PL-SSAE. Values are testing time (s).
Datasets | Label Percentage | SAE | SSAE | Semi-SAE | Semi-SSAE | PL-SSAE
Rectangles | p = 5 | 0.050 ± 0.007 | 0.046 ± 0.001 | 0.046 ± 0.001 | 0.046 ± 0.001 | 0.047 ± 0.001
Rectangles | p = 10 | 0.051 ± 0.007 | 0.047 ± 0.002 | 0.047 ± 0.002 | 0.047 ± 0.002 | 0.047 ± 0.001
Rectangles | p = 15 | 0.042 ± 0.004 | 0.046 ± 0.001 | 0.047 ± 0.001 | 0.047 ± 0.001 | 0.046 ± 0.001
Rectangles | p = 20 | 0.047 ± 0.006 | 0.046 ± 0.001 | 0.047 ± 0.002 | 0.047 ± 0.001 | 0.046 ± 0.002
Convex | p = 5 | 0.039 ± 0.007 | 0.042 ± 0.002 | 0.040 ± 0.001 | 0.040 ± 0.001 | 0.040 ± 0.001
Convex | p = 10 | 0.038 ± 0.006 | 0.040 ± 0.001 | 0.039 ± 0.001 | 0.039 ± 0.001 | 0.041 ± 0.001
Convex | p = 15 | 0.047 ± 0.010 | 0.040 ± 0.001 | 0.040 ± 0.002 | 0.040 ± 0.001 | 0.039 ± 0.001
Convex | p = 20 | 0.042 ± 0.008 | 0.042 ± 0.004 | 0.039 ± 0.001 | 0.040 ± 0.002 | 0.040 ± 0.001
USPS | p = 5 | 0.008 ± 0.001 | 0.005 ± 0.001 | 0.005 ± 0.001 | 0.005 ± 0.001 | 0.004 ± 0.001
USPS | p = 10 | 0.008 ± 0.002 | 0.005 ± 0.002 | 0.005 ± 0.001 | 0.005 ± 0.001 | 0.005 ± 0.002
USPS | p = 15 | 0.006 ± 0.001 | 0.005 ± 0.001 | 0.005 ± 0.002 | 0.004 ± 0.001 | 0.004 ± 0.001
USPS | p = 20 | 0.007 ± 0.001 | 0.005 ± 0.001 | 0.005 ± 0.001 | 0.005 ± 0.002 | 0.005 ± 0.001
MNIST | p = 5 | 0.017 ± 0.002 | 0.013 ± 0.003 | 0.012 ± 0.004 | 0.014 ± 0.001 | 0.012 ± 0.002
MNIST | p = 10 | 0.014 ± 0.002 | 0.014 ± 0.001 | 0.015 ± 0.007 | 0.012 ± 0.004 | 0.012 ± 0.001
MNIST | p = 15 | 0.016 ± 0.002 | 0.013 ± 0.003 | 0.012 ± 0.005 | 0.013 ± 0.003 | 0.013 ± 0.002
MNIST | p = 20 | 0.016 ± 0.001 | 0.014 ± 0.001 | 0.014 ± 0.001 | 0.013 ± 0.003 | 0.015 ± 0.003
Fashion-MNIST | p = 5 | 0.014 ± 0.001 | 0.014 ± 0.002 | 0.013 ± 0.003 | 0.014 ± 0.001 | 0.012 ± 0.001
Fashion-MNIST | p = 10 | 0.013 ± 0.003 | 0.014 ± 0.001 | 0.012 ± 0.004 | 0.014 ± 0.001 | 0.014 ± 0.002
Fashion-MNIST | p = 15 | 0.014 ± 0.001 | 0.014 ± 0.001 | 0.014 ± 0.004 | 0.014 ± 0.001 | 0.014 ± 0.001
Fashion-MNIST | p = 20 | 0.013 ± 0.004 | 0.013 ± 0.003 | 0.012 ± 0.006 | 0.014 ± 0.002 | 0.014 ± 0.003
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
