Article

An Improved SVM with Earth Mover’s Distance Regularization and Its Application in Pattern Recognition

1 School of Electronic Engineering, Xidian University, Xi’an 710071, China
2 School of Marine Science and Technology, Northwestern Polytechnical University, Xi’an 710072, China
3 Oil & Gas Technology Institute of Changqing Oilfield Company, Xi’an 710021, China
4 College of Economics & Management, Shandong University of Science and Technology, Qingdao 266590, China
5 School of Computer and Information Science, Southwest University, Chongqing 400715, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(3), 645; https://doi.org/10.3390/electronics12030645
Submission received: 3 December 2022 / Revised: 22 January 2023 / Accepted: 24 January 2023 / Published: 28 January 2023
(This article belongs to the Special Issue Machine Learning for Radar and Communication Signal Processing)

Abstract

A support vector machine (SVM) aims to find an optimal hyperplane with a maximum interclass margin and has been widely utilized in pattern recognition. Traditionally, an SVM mainly considers the separability of boundary points (i.e., support vectors), while the underlying data structure information is commonly ignored. In this paper, an improved support vector machine with earth mover’s distance (EMD-SVM) is proposed. It can be regarded as a generalization of the standard SVM and can automatically learn the distribution between the classes. To validate its performance, we discuss the role of the structural information in EMD-SVM in the linear and nonlinear cases, respectively. Experiments were designed and conducted in different application fields and demonstrate its superior and robust performance.

1. Introduction

A support vector machine (SVM) is a supervised machine learning model that has been widely utilized in pattern recognition [1], such as text classification [2,3], face recognition [4,5], radar [6], sonar [7], etc. Generally, the basic idea of the SVM is to maximize the minimum margin from the samples to the classification hyperplane. Building on the SVM, many variants, including discriminative classifiers based on large-margin theory, have been proposed to improve the SVM or overcome its limitations [8,9,10,11,12,13].
For classification tasks, the standard SVM aims to find a hyperplane that separates the different classes with a maximal margin. However, traditional SVMs mainly consider the separability of boundary points, while the underlying data structure information is commonly ignored. In real-world applications, different datasets may have different distributions, and from a statistical perspective, this structural information should be a key factor. Breiman et al. argued this point and showed that maximizing the minimum margin is not the key factor in model generalization [14,15]. Reyzin then found that margin theory is still helpful for model generalization, but that the margin distribution seems more dominant [16]. In this case, a classifier is expected to capture the data structure or distribution information, so that a more reasonable discriminant boundary becomes available when dealing with complex structured datasets in certain classification tasks. Gao proved that the margin mean and margin variance do have an essential influence on the generalization performance of the classifier [17]. Subsequently, the large margin distribution machine (LDM) and its modified version, the optimal margin distribution machine (ODM), were proposed to maximize the margin mean and minimize the margin variance, respectively [18,19]. Considering the sensitivity to the number of samples and the inclination to generate an imbalanced margin distribution, Cheng considered the statistical characteristics of the margin distribution and constructed a double distribution support vector machine (DDSVM) [20]. Owing to the utilization of sample distribution information, these improved SVMs have shown a superior performance [21,22,23,24,25,26,27].
One approach is to introduce structural information into the SVM. Belkin et al. [28] proposed the Laplacian support vector machine (LapSVM), which constructs a Laplacian matrix encoding the manifold structure of the dataset and embeds a manifold regularization term in the SVM; this approach targets semi-supervised learning tasks. Based on this, the structured large margin machine (SLMM) [29] was proposed to capture structural information using clustering techniques, which has proved to be sensitive to the data distribution. However, the SLMM is optimized by second-order cone programming (SOCP), which has a large computational complexity. Further research has improved the SVM from the perspective of the objective function, of which the most representative method is the structural regularized support vector machine (SRSVM) [30]. Similar to the SLMM, the SRSVM also obtains structural information by clustering, but it integrates the structural information directly into the objective function of the traditional SVM rather than into the constraints; consequently, the SRSVM can still be solved by quadratic programming. Later, an SVM with minimum within-class scatter (WCS-SVM) was proposed to combine the minimum within-class scatter criterion with the SVM [31], and it was further extended to a fuzzy version coined FSVM with minimum within-class scatter (WCS-FSVM) [32]. To enhance the discriminative ability, Zhang introduced Fisher regularization into the SVM to form the Fisher-regularized support vector machine (FisherSVM) [33], which minimizes the within-class scatter of the samples.
Overall, structural SVMs have matured to the extent that they can utilize structural information from the data and improve the generalization capacity of the model. A classification model is usually expected to be constructed by explicitly mining structural information, so that the resulting model is sensitive to the data structure, which generally improves the model. Motivated by the aforementioned analysis, a novel pattern recognition classifier, namely a support vector machine based on earth mover’s distance (EMD-SVM), is proposed to learn the distribution information between classes automatically. Specifically, we utilize the earth mover’s distance [34] to explicitly capture the structural information of the data; this structural information is then embedded into the SVM as a regularization term of the objective function, which is optimized by quadratic programming. Additionally, we extend the EMD-SVM formulation from linear classification to the nonlinear case. Considering the great success and state-of-the-art performance of deep neural networks in machine vision and signal processing [35,36,37,38,39,40,41], we replace the fully-connected layers of a standard CNN with an SVM to cope with classification tasks [42,43,44,45,46], which improves the recognition performance and generalization ability of the CNN.
In light of this, an improved support vector machine with earth mover’s distance (EMD-SVM) is proposed, which can be regarded as a generalization of the standard SVM. The main contributions of this study can be summarized as follows,
(1)
We propose a new strategy to capture the underlying data structural information and thus improve the SVM classifier.
(2)
The principles of the EMD-SVM in the linear and nonlinear cases are discussed in detail, respectively. It is proved to be a convex optimization problem and can be solved by the QP technique.
(3)
We conduct experimental verification on three kinds of classification datasets, including UCI, image recognition, and radar emitter recognition, which have shown that the performance of the proposed EMD-SVM is superior and robust.
The rest of this paper is organized as follows. Section 2 briefly describes SVM and Earth Mover’s Distance (EMD). The proposed EMD-SVM is introduced in Section 3, which is followed by numerical results in Section 4. Section 5 presents the conclusions.

2. Related Work

2.1. Support Vector Machine

Taking binary classification as an instance, we review the principle of the SVM. Usually, a separate testing set is used to evaluate the discriminative ability of the classifier on new samples, and the “testing error” on it serves as an approximation of the generalization error. Given a training set $D = \{(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\}$, $y_i \in \{-1, +1\}$, the standard SVM aims to find a hyperplane $f = \omega^T x + b$ that separates the samples of different classes with a margin of $2/\|\omega\|$. Its objective function can be given as follows,
$$
\begin{aligned}
\min_{\omega, b}\ & \frac{1}{2}\omega^T\omega \\
\text{s.t.}\ & y_i(\omega^T x_i + b) \ge 1, \quad i = 1, 2, \ldots, n
\end{aligned}
\tag{1}
$$
For the linearly non-separable case, by introducing slack variables $\xi_i \ge 0$, $i = 1, 2, \ldots, n$, and a penalty term to penalize the samples that violate the inequality constraints, the following soft-margin SVM is obtained,
$$
\begin{aligned}
\min_{\omega, b, \xi}\ & \frac{1}{2}\omega^T\omega + C\sum_{i=1}^{n}\xi_i \\
\text{s.t.}\ & y_i(\omega^T x_i + b) \ge 1 - \xi_i, \quad \xi_i \ge 0, \quad i = 1, 2, \ldots, n
\end{aligned}
\tag{2}
$$
where $\xi_i$ are the slack variables and $C$ is a penalty parameter that controls the trade-off between the training error and generalization [47].
Then, the standard SVM can be trained by solving a dual quadratic programming problem. The dual problem can be formulated as below,
$$
\begin{aligned}
\max_{\alpha}\ & -\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i\alpha_j y_i y_j\, x_i^T x_j + \sum_{i=1}^{n}\alpha_i \\
\text{s.t.}\ & 0 \le \alpha_i \le C, \quad i = 1, \ldots, n, \qquad \sum_{i=1}^{n}\alpha_i y_i = 0
\end{aligned}
\tag{3}
$$
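The dual in Equation (3) is a standard quadratic program. As a point of reference (not the implementation used in the paper), the following sketch maps it onto the generic QP solver of the cvxopt package with a linear kernel; the function and variable names are our own.

```python
import numpy as np
from cvxopt import matrix, solvers


def svm_dual(X, y, C=1.0):
    """Solve the soft-margin SVM dual of Eq. (3) with a generic QP solver.

    X: (n, d) training samples, y: (n,) labels in {-1, +1}.
    Returns the dual variables alpha.
    """
    n = len(y)
    K = X @ X.T                                     # linear kernel x_i^T x_j
    P = matrix(np.outer(y, y) * K)                  # quadratic term of the dual
    q = matrix(-np.ones(n))                         # linear term: -sum_i alpha_i
    G = matrix(np.vstack([-np.eye(n), np.eye(n)]))  # -alpha_i <= 0 and alpha_i <= C
    h = matrix(np.hstack([np.zeros(n), C * np.ones(n)]))
    A = matrix(y.reshape(1, -1).astype(float))      # equality: sum_i alpha_i y_i = 0
    b = matrix(0.0)
    sol = solvers.qp(P, q, G, h, A, b)
    return np.ravel(sol["x"])
```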

2.2. The Earth Mover’s Distance

The earth mover’s distance (EMD) is defined as the minimal cost required to transform one distribution into the other. It can be computed by solving a transportation problem through linear optimization. Next, we explain the algorithm with reference to the cargo transportation problem.
Suppose there are two distributions $P = \{(p_i, u_{p_i})\}_{i=1}^{m}$ and $Q = \{(q_j, u_{q_j})\}_{j=1}^{n}$, where $p_i$ is a supplier, $u_{p_i}$ is the quantity of goods it owns, $q_j$ is a warehouse, and $u_{q_j}$ is the quantity of goods it can accommodate. Then, the EMD can be expressed as the following linear optimization problem,
$$
\begin{aligned}
\min_{F}\ \mathrm{WORK}(P, Q, F) &= \sum_{i=1}^{m}\sum_{j=1}^{n} d_{ij} f_{ij} \\
\text{s.t.}\quad & f_{ij} \ge 0, \quad 1 \le i \le m,\ 1 \le j \le n \\
& \sum_{j=1}^{n} f_{ij} \le u_{p_i}, \quad 1 \le i \le m \\
& \sum_{i=1}^{m} f_{ij} \le u_{q_j}, \quad 1 \le j \le n \\
& \sum_{i=1}^{m}\sum_{j=1}^{n} f_{ij} = \min\left(\sum_{i=1}^{m} u_{p_i},\ \sum_{j=1}^{n} u_{q_j}\right), \qquad d_{ij} = \|p_i - q_j\|
\end{aligned}
\tag{4}
$$
where $d_{ij}$ represents the ground distance between $p_i$ and $q_j$, and $u_{p_i}$ and $u_{q_j}$ denote the supply and accommodation capacity, respectively. We seek a flow $F = [f_{ij}]$ that minimizes the overall transportation cost, where $f_{ij}$ is the flow from $p_i$ to $q_j$. The EMD is then obtained by normalizing the minimal cost by the total flow,
$$\mathrm{EMD}(P, Q) = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n} d_{ij} f_{ij}}{\sum_{i=1}^{m}\sum_{j=1}^{n} f_{ij}} \tag{5}$$
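For illustration only, the transportation problem of Equations (4) and (5) can be solved with a generic linear-programming routine. The sketch below uses scipy.optimize.linprog; the function name and variable names are our own, and in practice a dedicated EMD solver would be faster.

```python
import numpy as np
from scipy.optimize import linprog


def emd(p_pos, p_weights, q_pos, q_weights):
    """Earth mover's distance between two weighted point sets (Eqs. (4)-(5))."""
    m, n = len(p_weights), len(q_weights)
    # ground distances d_ij = ||p_i - q_j||
    D = np.linalg.norm(p_pos[:, None, :] - q_pos[None, :, :], axis=-1)
    c = D.reshape(-1)                           # objective: sum_ij d_ij * f_ij (row-major flow)
    A_ub = np.zeros((m + n, m * n))
    for i in range(m):                          # supply: sum_j f_ij <= u_{p_i}
        A_ub[i, i * n:(i + 1) * n] = 1.0
    for j in range(n):                          # demand: sum_i f_ij <= u_{q_j}
        A_ub[m + j, j::n] = 1.0
    b_ub = np.concatenate([p_weights, q_weights])
    A_eq = np.ones((1, m * n))                  # total flow = min(total supply, total demand)
    b_eq = [min(p_weights.sum(), q_weights.sum())]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    flow = res.x
    return (c @ flow) / flow.sum()              # normalization of Eq. (5)
```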

3. EMD-SVM Model

In this section, the EMD-SVM model is expounded by taking binary classification as an example. Then, the principles of the EMD-SVM in the linear and nonlinear cases are discussed in detail, respectively.

3.1. EMD-SVM for Linear Case

The EMD-SVM model for the linear case can be given as,
$$
\begin{aligned}
\min_{\omega, b, \xi}\ & \frac{1}{2}\omega^T\omega + \frac{\lambda}{2}\omega^T E_d\,\omega + C\sum_{i=1}^{n}\xi_i \\
\text{s.t.}\ & y_i(\omega^T x_i + b) \ge 1 - \xi_i, \quad \xi_i \ge 0, \quad i = 1, 2, \ldots, n
\end{aligned}
\tag{6}
$$
where $E_d = \frac{1}{emd} I$, $I$ is the identity matrix, $emd$ represents the EMD between the two class distributions, and $\lambda$ is a parameter used to regulate the relative importance of the distance between the distributions of the two classes.
Incorporating the constraints, we can rewrite Equation (6) as the primal Lagrangian,
$$L(\omega, b, \xi, \alpha, \mu) = \frac{1}{2}\omega^T\omega + \frac{\lambda}{2}\omega^T E_d\,\omega + C\sum_{i=1}^{n}\xi_i + \sum_{i=1}^{n}\alpha_i\left[1 - \xi_i - y_i(\omega^T x_i + b)\right] - \sum_{i=1}^{n}\mu_i\xi_i \tag{7}$$
from which the KKT conditions for the primal problem can be obtained as follows,
$$\frac{\partial L}{\partial \omega} = (I + \lambda E_d)\,\omega - \sum_{i=1}^{n}\alpha_i y_i x_i = 0 \tag{8}$$
$$\frac{\partial L}{\partial b} = -\sum_{i=1}^{n}\alpha_i y_i = 0 \tag{9}$$
$$\frac{\partial L}{\partial \xi_i} = C - \alpha_i - \mu_i = 0 \tag{10}$$
$$\alpha_i \ge 0, \quad \mu_i \ge 0 \tag{11}$$
$$y_i(\omega^T x_i + b) - 1 + \xi_i \ge 0 \tag{12}$$
$$\alpha_i\left[y_i(\omega^T x_i + b) - 1 + \xi_i\right] = 0 \tag{13}$$
$$\xi_i \ge 0, \quad \mu_i\xi_i = 0 \tag{14}$$
Substituting Equations (8)–(10) into Equation (7), we obtain
$$
\begin{aligned}
& \frac{1}{2}\omega^T(I + \lambda E_d)\,\omega + C\sum_{i=1}^{n}\xi_i + \sum_{i=1}^{n}\alpha_i\left[1 - \xi_i - y_i(\omega^T x_i + b)\right] - \sum_{i=1}^{n}\mu_i\xi_i \\
&= \frac{1}{2}\sum_{i=1}^{n}\alpha_i y_i x_i^T (I + \lambda E_d)^{-1}\sum_{j=1}^{n}\alpha_j y_j x_j + \sum_{i=1}^{n}\alpha_i - \sum_{i=1}^{n}\alpha_i y_i \sum_{j=1}^{n}\alpha_j y_j x_j^T (I + \lambda E_d)^{-1} x_i \\
&= -\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i\alpha_j y_i y_j\, x_i^T (I + \lambda E_d)^{-1} x_j + \sum_{i=1}^{n}\alpha_i
\end{aligned}
\tag{15}
$$
Then, we can transform the primal Lagrangian equation into the dual problem,
$$
\begin{aligned}
\max_{\alpha}\ & -\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i\alpha_j y_i y_j\, x_i^T (I + \lambda E_d)^{-1} x_j + \sum_{i=1}^{n}\alpha_i \\
\text{s.t.}\ & 0 \le \alpha_i \le C, \quad i = 1, \ldots, n, \qquad \sum_{i=1}^{n}\alpha_i y_i = 0
\end{aligned}
\tag{16}
$$
Hence, the solution $\alpha_i$ can be obtained using QP techniques. To predict the class label of a testing sample $x$, the decision function is derived as below,
$$\mathrm{Class}\ x = \mathrm{sgn}\left[\omega^T x + b\right] = \mathrm{sgn}\left[\sum_{i=1}^{n}\alpha_i y_i\, x_i^T (I + \lambda E_d)^{-1} x + b\right] \tag{17}$$
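A minimal sketch of the linear EMD-SVM training procedure is given below. It is our own illustration, not the authors' code, and assumes $E_d = \frac{1}{emd} I$ as defined above, so that $(I + \lambda E_d)^{-1}$ reduces to a scalar scaling of the Gram matrix; the QP is again handed to cvxopt.

```python
import numpy as np
from cvxopt import matrix, solvers


def train_emd_svm_linear(X, y, C=1.0, lam=1.0, emd_dist=1.0):
    """Sketch of the linear EMD-SVM dual derived above.

    X: (n, d) samples, y: (n,) labels in {-1, +1}, emd_dist: EMD between the two classes.
    """
    n = len(y)
    scale = 1.0 / (1.0 + lam / emd_dist)       # (I + lam*E_d)^{-1} = scale * I
    G_mat = scale * (X @ X.T)                  # regularized Gram matrix
    P = matrix(np.outer(y, y) * G_mat)
    q = matrix(-np.ones(n))
    G = matrix(np.vstack([-np.eye(n), np.eye(n)]))
    h = matrix(np.hstack([np.zeros(n), C * np.ones(n)]))
    A = matrix(y.reshape(1, -1).astype(float))
    b = matrix(0.0)
    alpha = np.ravel(solvers.qp(P, q, G, h, A, b)["x"])
    w = scale * ((alpha * y) @ X)              # w = (I + lam*E_d)^{-1} sum_i alpha_i y_i x_i
    sv = (alpha > 1e-6) & (alpha < C - 1e-6)   # margin support vectors
    b0 = np.mean(y[sv] - X[sv] @ w)
    return w, b0                               # predict with np.sign(X_test @ w + b0)
```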

3.2. EMD-SVM for Nonlinear Case

As with the SVM, kernel functions can be constructed for the EMD-SVM to cope with nonlinear problems. We introduce a mapping function $\Phi$ that maps the training data to a higher-dimensional feature space $H$, i.e., $\Phi: \mathbb{R}^d \rightarrow H$. Then, the transportation problem in the kernel space can be described as [47],
$$\mathrm{WORK}(\phi(P), \phi(Q), F) = \sum_{i=1}^{m}\sum_{j=1}^{n} d_{\phi(p_i)\phi(q_j)} f_{ij} \tag{18}$$
where the ground distance $d_{\phi(p_i)\phi(q_j)}$ between $\phi(p_i)$ and $\phi(q_j)$ can be calculated by,
$$d_{\phi(p_i)\phi(q_j)} = \left\|\phi(p_i) - \phi(q_j)\right\|^2 \tag{19}$$
If there is a kernel function $K$ such that $K(x_i, x_j) = \phi(x_i)^T\phi(x_j)$, we can use $K$ to rewrite Equation (19) as,
$$d_{\phi(p_i)\phi(q_j)} = K(p_i, p_i) - K(p_i, q_j) - K(q_j, p_i) + K(q_j, q_j) \tag{20}$$
Then, the kernelized EMD-SVM can be defined as,
$$
\begin{aligned}
\min_{\omega, b, \xi}\ & \frac{1}{2}\omega^T\omega + \frac{\lambda}{2}\omega^T\phi(X) E_d \phi^T(X)\,\omega + C\sum_{i=1}^{n}\xi_i \\
\text{s.t.}\ & y_i(\omega^T\phi(x_i) + b) \ge 1 - \xi_i, \quad \xi_i \ge 0, \quad i = 1, 2, \ldots, n
\end{aligned}
\tag{21}
$$
where $E_d$ represents the EMD-based matrix of the two class distributions in the kernel space.
The Lagrangian form of this problem can be written as below,
$$L(\omega, b, \xi, \alpha, \mu) = \frac{1}{2}\omega^T\omega + \frac{\lambda}{2}\omega^T\phi(X) E_d \phi^T(X)\,\omega + C\sum_{i=1}^{n}\xi_i + \sum_{i=1}^{n}\alpha_i\left[1 - \xi_i - y_i(\omega^T\phi(x_i) + b)\right] - \sum_{i=1}^{n}\mu_i\xi_i \tag{22}$$
Setting the partial derivatives of $L$ with respect to $\omega$, $b$, and $\xi_i$ equal to zero yields,
$$\omega = \left[I + \lambda\phi(X) E_d \phi^T(X)\right]^{-1}\sum_{i=1}^{n}\alpha_i y_i \phi(x_i) \tag{23}$$
$$\sum_{i=1}^{n}\alpha_i y_i = 0 \tag{24}$$
$$C - \alpha_i - \mu_i = 0 \tag{25}$$
The dual problem can be further formed as,
$$
\begin{aligned}
\max_{\alpha}\ & -\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i\alpha_j y_i y_j\, \phi^T(x_i)\left(I + \lambda\phi(X) E_d \phi^T(X)\right)^{-1}\phi(x_j) + \sum_{i=1}^{n}\alpha_i \\
\text{s.t.}\ & 0 \le \alpha_i \le C, \quad i = 1, \ldots, n, \qquad \sum_{i=1}^{n}\alpha_i y_i = 0
\end{aligned}
\tag{26}
$$
According to the Woodbury formula, we have,
$$(A + UBV)^{-1} = A^{-1} - A^{-1}UB\left(B + BVA^{-1}UB\right)^{-1}BVA^{-1} \tag{27}$$
Then, we have,
$$
\begin{aligned}
\left[I + \lambda\phi(X) E_d \phi^T(X)\right]^{-1} &= I - \lambda\phi(X) E_d\left[E_d + \lambda E_d \phi^T(X)\phi(X) E_d\right]^{-1} E_d \phi^T(X) \\
&= I - \lambda\phi(X) E_d\left[E_d + \lambda E_d K E_d\right]^{-1} E_d \phi^T(X) \\
&= I - \lambda\phi(X)\, P\, \phi^T(X)
\end{aligned}
\tag{28}
$$
where
$$P = E_d\left[E_d + \lambda E_d K E_d\right]^{-1} E_d \tag{29}$$
Let $K_{i:}$ denote the $i$-th row of $K$ and $K_{:j}$ denote the $j$-th column of $K$; then the dual problem can be cast as
$$
\begin{aligned}
\max_{\alpha}\ & -\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i\alpha_j y_i y_j\left[K_{ij} - \lambda K_{i:}\, P\, K_{:j}\right] + \sum_{i=1}^{n}\alpha_i \\
\text{s.t.}\ & 0 \le \alpha_i \le C, \quad i = 1, \ldots, n, \qquad \sum_{i=1}^{n}\alpha_i y_i = 0
\end{aligned}
\tag{30}
$$
Once the solution $\alpha$ is obtained from the above convex optimization problem, the hyperplane follows. The class label of a data point $x \in \mathbb{R}^d$ can then be determined as
$$\mathrm{Class}\ x = \mathrm{sgn}\left[\omega^T\phi(x) + b\right] = \mathrm{sgn}\left[\sum_{i=1}^{n}\alpha_i y_i K(x_i, x) - \lambda\sum_{i=1}^{n}\alpha_i y_i\, K_{i:}\, P\, K(X, x) + b\right] \tag{31}$$
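Analogously, the kernelized dual above can be assembled from a precomputed kernel matrix. The sketch below is our own illustration and assumes, as in the linear case, that $E_d$ is a scaled identity of size $n \times n$ in the kernel space; the matrix P corresponds to the Woodbury reduction just derived.

```python
import numpy as np
from cvxopt import matrix, solvers


def kernel_emd_svm_dual(K, y, C=1.0, lam=1.0, emd_dist=1.0):
    """Sketch of the kernelized EMD-SVM dual.

    K: (n, n) precomputed kernel matrix, y: (n,) labels in {-1, +1}.
    Returns the dual variables alpha and the effective kernel K_eff.
    """
    n = len(y)
    E_d = (1.0 / emd_dist) * np.eye(n)              # assumed scaled-identity E_d
    P_mat = E_d @ np.linalg.inv(E_d + lam * E_d @ K @ E_d) @ E_d
    K_eff = K - lam * K @ P_mat @ K                 # K_ij - lam * K_i: P K_:j
    Pq = matrix(np.outer(y, y) * K_eff)
    q = matrix(-np.ones(n))
    G = matrix(np.vstack([-np.eye(n), np.eye(n)]))
    h = matrix(np.hstack([np.zeros(n), C * np.ones(n)]))
    A = matrix(y.reshape(1, -1).astype(float))
    b = matrix(0.0)
    alpha = np.ravel(solvers.qp(Pq, q, G, h, A, b)["x"])
    # Prediction for a new sample uses the kernel vector k(x) = [K(x_1,x),...,K(x_n,x)]
    # in the decision function above, with the same P_mat.
    return alpha, K_eff
```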

4. Experimental Results and Discussion

In this section, the EMD-SVM is evaluated on synthetic and real-world datasets. We compared the performance of the proposed EMD-SVM with standard SVM and some representative large margin methods, including SRSVM, LDM, ODM, and ELM [48]. We first evaluated the effectiveness of the proposed EMD-SVM on a synthetic dataset to illustrate the impact of data distribution information on classification. Then, we evaluated the performance of these methods on UCI datasets and the Caltech 101 dataset. Next, we utilized a deep convolutional neural network to extract convolutional features and discuss the performance of EMD-SVM based on deep convolutional features. Finally, the proposed EMD-SVM was applied to the radar emitter recognition task. All the experiments were carried out on a PC with a 3.50 GHz CPU and 48 GB RAM.

4.1. Recognition Performance of EMD-SVM on Synthetic Dataset

The two-dimensional synthetic dataset consists of three groups of randomly generated Gaussian samples. The blue plus signs represent the positive samples and the red stars represent the negative samples. Table 1 describes the attributes of the dataset. The hyperplanes of the linear SVM, LDM, and EMD-SVM are displayed in Figure 1.
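For reference, a dataset with these attributes can be regenerated from Table 1 as in the following sketch (our own illustration; the random seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
# Positive class: one vertically elongated Gaussian (Table 1)
P_cls = rng.multivariate_normal([0, 5], [[0.3, 0], [0, 5]],   size=200)
# Negative class: two horizontal Gaussians, N1 much larger than N2
N1 = rng.multivariate_normal([6, 8], [[1, 0],   [0, 0.5]], size=180)
N2 = rng.multivariate_normal([3, 2], [[1.5, 0], [0, 1.5]], size=20)
X = np.vstack([P_cls, N1, N2])
y = np.hstack([np.ones(200), -np.ones(200)])
```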
As can be seen from Figure 1, the positive class has a vertical distribution and the negative class is composed of two horizontal Gaussian distributions, with distribution N1 containing more samples than N2. Due to its neglect of structural information, the SVM cannot cope with this complex structured dataset: it ignores cluster N2, which has fewer samples than cluster N1, and its hyperplane focuses only on the separability between cluster P and cluster N1. LDM adopts the margin mean and variance to characterize the margin distribution and optimizes them to achieve a better generalization performance. Considering both the structured distance information and the separability between the two distributions, the proposed EMD-SVM can also obtain a more reasonable hyperplane.

4.2. Recognition Performance of EMD-SVM with Hand-Crafted Classifier on UCI Datasets

In this section, we verify the performance of the proposed EMD-SVM on UCI datasets. The attributes of these datasets are presented in Table 2.
We randomly selected half of the samples as the training set and the rest as the testing set. In the linear case, for ODM, the parameter $D$ is selected from $\{0, 0.1, \ldots, 0.5\}$ and the regularization parameters $C_1$ and $C_2$ are selected from $\{2^{-8}, \ldots, 2^{8}\}$; for SVM, SRSVM, EMD-SVM, and LDM, the parameter $C$ is selected from $\{10^{-3}, \ldots, 10^{3}\}$ and the parameters $\lambda$, $\lambda_1$, and $\lambda_2$ are selected from $\{2^{-8}, \ldots, 2^{8}\}$. For ELM, the number of hidden neurons is set to 1000 and the activation function is the sigmoid. In the nonlinear case, the RBF kernel $k(x_i, x_j) = \exp\!\left(-\frac{1}{2\sigma^2}\|x_i - x_j\|^2\right)$ is used for all algorithms, with the kernel width selected from $\{2^{-8}, \ldots, 2^{8}\}$. Experiments were repeated 10 times with different data partitions.
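For the baseline SVM, this selection protocol amounts to a standard grid search. A hypothetical sketch using scikit-learn (not the code used in the experiments) is shown below; note that the RBF width $\sigma$ maps to scikit-learn's gamma parameter via $\gamma = 1/(2\sigma^2)$.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Grids mirroring the search ranges described above (illustrative only)
param_grid = {
    "C": [10.0 ** k for k in range(-3, 4)],                           # 10^-3 ... 10^3
    "gamma": [1.0 / (2.0 * (2.0 ** k) ** 2) for k in range(-8, 9)],   # sigma = 2^-8 ... 2^8
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
# search.fit(X_train, y_train); accuracy = search.score(X_test, y_test)
```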
We compared the average accuracy of all the algorithms. Table 3 shows the accuracy result with linear kernel and Table 4 shows the accuracy result with RBF kernel.
From the results, we can draw the following conclusions,
(1)
The EMD-SVM combines the earth mover’s distance with standard SVM, which can introduce the data distribution information into the traditional SVM. The outstanding performance of EMD-SVM on most datasets further validates the necessity of distribution information for the classifier’s design.
(2)
Although SRSVM can achieve comparable recognition results with EMD-SVM, its recognition performance is highly affected by the clustering method, as SRSVM is based on the clustering structure. In practical applications, different clustering methods must be used for different problems.
(3)
LDM and ODM use the margin mean and variance to describe the margin distribution, while the first- and second-order statistics are often used to characterize Gaussian-distributed data, which has certain limitations. In contrast, EMD-SVM adopts EMD distance instead of Euclidean distance to describe the data distribution. The distribution information is then incorporated into the SVM object function in the form of regular terms, thus guiding SVM to learn the optimal classification boundary under this distribution metric.

4.3. Experiments on Caltech101 Dataset

In this subsection, we conduct an experiment on the Caltech101 dataset. Caltech101 is a digital image dataset provided by the California Institute of Technology, which contains a total of 9146 images divided into 101 object categories (including faces, planes, animals, etc.) and a background category. We chose nine categories of images for this experiment: airplanes, bonsai, cars, dolphins, electric guitars, faces (easy), helicopters, leopards, and motorbikes. SIFT, LBP, and PHOG features are extracted from these images, and their attributes are presented in Table 5.
We randomly selected 80 images from each category to form the dataset, with 64 of them used as training samples and the remaining 16 as test samples. Ten independent experiments were conducted to evaluate the performance of the proposed EMD-SVM. A linear kernel was used, and the parameters were selected in a similar way as in the UCI dataset experiments. For multi-class problems, the one-vs-one strategy is adopted. We compared the average accuracy of all the algorithms, and the results are shown in Table 6.
It is clear that the EMD-SVM achieves better accuracy than the SVM, SRSVM, LDM, ODM and ELM methods in the multi-class classification problem. This indicates that distribution information can help to determine a better discriminant boundary. Moreover, the performance of LDM and ODM on the Caltech101 dataset further shows that characterizing the data distribution with first- and second-order statistics still has some limitations.

4.4. Recognition Performance of EMD-SVM Based on Deep Convolutional Features

In this section, we discuss the performance of EMD-SVM and the other algorithms on deep convolutional features. We adopted the classical AlexNet as the pretrained CNN model, which contains five convolutional layers and three fully connected layers; further details of the network can be found in [40]. The DSLR and Amazon datasets were used to verify the effectiveness of EMD-SVM on deep features. The CNN model was pretrained on the ImageNet dataset and fine-tuned on the DSLR and Amazon datasets. We then extracted the fine-tuned deep features Fc6 and Fc7 as the inputs of the above algorithms for classification. Table 7 shows the details of the four deep features.
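As an illustration of this feature-extraction step (our own sketch, assuming a recent torchvision; layer indices follow the torchvision AlexNet definition, and in the paper the network was additionally fine-tuned before extraction), the fc6 and fc7 activations can be read out as follows:

```python
import torch
import torchvision.models as models

alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
alexnet.eval()  # dropout layers act as identity in eval mode


def deep_features(x):
    """x: (N, 3, 224, 224) preprocessed image batch -> (fc6, fc7) activations."""
    with torch.no_grad():
        f = alexnet.features(x)
        f = alexnet.avgpool(f).flatten(1)                         # (N, 9216)
        fc6 = alexnet.classifier[2](alexnet.classifier[1](f))     # Linear(9216,4096) + ReLU
        fc7 = alexnet.classifier[5](alexnet.classifier[4](fc6))   # Linear(4096,4096) + ReLU
    return fc6, fc7
```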
In the experiment, we randomly chose 50% of the samples as the training set and the rest as the testing set. Ten independent experiments were conducted to achieve a more stable result. A linear kernel was used in the experiment and the parameters were selected in a similar way as in the UCI dataset experiments. Table 8 compares the accuracy results of EMD-SVM and other methods.
As can be seen, the overall performance of EMD-SVM is better than that of SVM and the other methods. In addition, since the large margin algorithms LDM and ODM apply the ideas of maximizing the margin mean and minimizing the margin variance to the SVM model, they are also highly competitive with the SVM. The results demonstrate that considering the data distribution can improve the classification performance on complex data.
Additionally, we compared EMD-SVM with an MLP having two hidden layers of 1024 and 512 neurons, respectively. The accuracies of EMD-SVM and MLP are shown in Table 9. It can be seen that the MLP achieves recognition results comparable to those of the linear EMD-SVM, but still somewhat inferior to the nonlinear EMD-SVM. Compared with the MLP, EMD-SVM is based on the minimization of structural risk rather than empirical risk, which mitigates overfitting. By obtaining a structured description of the data distribution, it reduces the requirements on the size and distribution of the data and has excellent generalization capability.

4.5. Recognition Performance of EMD-SVM for Radar Emitter Recognition

In order to test the effectiveness of EMD-SVM in realistic applications, we conducted experiments on radar emitter recognition. The collected data are signals from radar emitters of the same type and with the same parameters. We extracted the FFT, the Welch power spectrum, an ambiguity function slice, and a cyclic spectrum slice (denoted as Data1, Data2, Data3, and Data4, respectively). The attributes of these datasets are presented in Table 10. The corresponding waveforms of the class1–class4 signals in Data4 are shown in Figure 2.
To reduce the computation time, the PCA algorithm was utilized to retain 90% of the energy. We randomly chose 80% of the samples as the training set and the remaining 20% as the test set. The experiment was repeated 10 times to generate 10 independent results for each dataset, and we compared the average accuracy and standard deviation of all the algorithms. A linear kernel was used, and the parameters were selected in a similar way to the UCI dataset experiments. The results on the four radar datasets are shown in Table 11; it can be seen that EMD-SVM still achieves superior results.
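The 90% energy criterion corresponds to keeping the principal components whose cumulative explained variance reaches 0.9; a minimal sketch with scikit-learn (our illustration, with hypothetical variable names) is:

```python
from sklearn.decomposition import PCA

# Keep the principal components that retain 90% of the energy
pca = PCA(n_components=0.90, svd_solver="full")
X_train_pca = pca.fit_transform(X_train)
X_test_pca = pca.transform(X_test)
```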

5. Conclusions

In this paper, we propose a novel SVM classifier with earth mover’s distance, which can automatically learn the distribution between the classes. The EMD-SVM can be seen as a generalization of the standard SVM by calculating the EMD distance, and we discuss the principle of the EMD-SVM in linear and nonlinear cases, respectively. The experimental results indicate that the proposed EMD-SVM has a superior and robust performance. In the future, we will pay more attention to overcoming the drawbacks of a long training time for large-scale datasets and sensitivity to hyper-parameters of kernel functions. It would also be interesting to generalize the idea of EMD-SVM to other learning settings.

Author Contributions

Conceptualization, R.F. and Z.G.; methodology, R.F. and H.D.; software, R.F.; validation, H.L.; formal analysis, X.L.; investigation, R.F.; data curation, X.L. and Z.G.; writing—original draft preparation, R.F. and R.T.; writing—review and editing, H.D.; funding acquisition, H.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China (Grant No. 62201439) and China Postdoctoral Science Foundation (Grant No. 2022M722493). The authors would like to thank the anonymous reviewers for their valuable comments and suggestions.

Data Availability Statement

Conflicts of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationship that could be construed as a potential conflict of interest.

References

  1. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  2. Joachims, T. A statistical learning learning model of text classification for support vector machines. In Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, New Orleans, LA, USA, 9–13 September 2001; pp. 128–136. [Google Scholar] [CrossRef]
  3. Lilleberg, J.; Zhu, Y.; Zhang, Y. Support vector machines and Word2vec for text classification with semantic features. In Proceedings of the 2015 IEEE 14th International Conference on Cognitive Informatics & Cognitive Computing (ICCI* CC), Beijing, China, 6–8 July 2015; pp. 136–140. [Google Scholar] [CrossRef]
  4. Osuna, E.; Freund, R.; Girosit, F. Training support vector machines: An application to face detection. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, PR, USA, 17–19 June 1997; pp. 130–136. [Google Scholar] [CrossRef]
  5. Ghimire, D.; Jeong, S.; Lee, J.; Park, S.H. Facial expression recognition based on local region specific features and support vector machines. Multimedia Tools Appl. 2017, 76, 7803–7821. [Google Scholar] [CrossRef] [Green Version]
  6. Eryildirim, A.; Onaran, I. Pulse Doppler Radar Target Recognition using a Two-Stage SVM Procedure. IEEE Trans. Aerosp. Electron. Syst. 2011, 47, 1450–1457. [Google Scholar] [CrossRef] [Green Version]
  7. Dong, H.; Wang, H.; Shen, X.; He, K. Parameter matched stochastic resonance with damping for passive sonar detection. J. Sound Vib. 2019, 458, 479–496. [Google Scholar] [CrossRef]
  8. Suykens, J.A.K.; Vandewalle, J. Least Squares Support Vector Machine Classifiers. Neural Process. Lett. 1999, 9, 293–300. [Google Scholar] [CrossRef]
  9. Jayadeva; Khemchandani, R.; Chandra, S. Twin Support Vector Machines for Pattern Classification. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 905–910. [Google Scholar] [CrossRef]
  10. Lin, C.-F.; Wang, S.-D. Fuzzy support vector machines. IEEE Trans. Neural Netw. 2002, 13, 464–471. [Google Scholar] [CrossRef]
  11. Ding, S.; Zhu, Z.; Zhang, X. An overview on semi-supervised support vector machine. Neural Comput. Appl. 2017, 28, 969–978. [Google Scholar] [CrossRef]
  12. Iranmehr, A.; Masnadi-Shirazi, H.; Vasconcelos, N. Cost-sensitive support vector machines. Neurocomputing 2019, 343, 50–64. [Google Scholar] [CrossRef] [Green Version]
  13. Huang, S.; Cai, N.; Pacheco, P.P.; Narandes, S.; Wang, Y.; Xu, W. Applications of Support Vector Machine (SVM) Learning in Cancer Genomics. Cancer Genom. Proteom. 2018, 15, 41–51. [Google Scholar] [CrossRef] [Green Version]
  14. Vapnik, V. The Nature of Statistical Learning Theory; Springer: Berlin/Heidelberg, Germany, 1999. [Google Scholar] [CrossRef]
  15. Breiman, L. Prediction Games and Arcing Algorithms. Neural Comput. 1999, 11, 1493–1517. [Google Scholar] [CrossRef] [PubMed]
  16. Reyzin, L.; Schapire, R.E. How boosting the margin can also boost classifier complexity. In Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA, USA, 25–29 June 2006; pp. 753–760. [Google Scholar] [CrossRef]
  17. Gao, W.; Zhou, Z.-H. On the doubt about margin explanation of boosting. Artif. Intell. 2013, 203, 1–18. [Google Scholar] [CrossRef]
  18. Zhang, T.; Zhou, Z.-H. Large margin distribution machine. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 24–27 August 2014; pp. 313–322. [Google Scholar] [CrossRef] [Green Version]
  19. Teng, Z.; Zhou, Z.-H. Optimal margin distribution machine. IEEE Trans. Knowl. Data Eng. 2020, 32, 1143–1156. [Google Scholar] [CrossRef] [Green Version]
  20. Cheng, F.; Zhang, J.; Li, Z.; Tang, M. Double distribution support vector machine. Pattern Recognit. Lett. 2017, 88, 20–25. [Google Scholar] [CrossRef]
  21. Zhou, Y.-H.; Zhou, Z.-H. Large Margin Distribution Learning with Cost Interval and Unlabeled Data. IEEE Trans. Knowl. Data Eng. 2016, 28, 1749–1763. [Google Scholar] [CrossRef]
  22. Wang, H.; Wang, Y.; Zhou, Z.; Ji, X.; Gong, D.; Zhou, J.; Li, Z.; Liu, W. CosFace: Large Margin Cosine Loss for Deep Face Recognition. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 5265–5274. [Google Scholar] [CrossRef] [Green Version]
  23. Elsayed, G.; Krishnan, D.; Mobahi, H.; Regan, K.; Bengio, S. Large margin deep networks for classification. Adv. Neural Inf. Process. Syst. 2018, 31, 850–860. [Google Scholar]
  24. Cheng, F.; Zhang, J.; Wen, C.; Liu, Z.; Li, Z. Large cost-sensitive margin distribution machine for imbalanced data classification. Neurocomputing 2017, 224, 45–57. [Google Scholar] [CrossRef]
  25. Zhan, K.; Wang, H.; Huang, H.; Xie, Y. Large margin distribution machine for hyperspectral image classification. J. Electron. Imaging 2016, 25, 63024. [Google Scholar] [CrossRef]
  26. Rastogi, R.; Anand, P.; Chandra, S. Large-margin Distribution Machine-based regression. Neural Comput. Appl. 2020, 32, 3633–3648. [Google Scholar] [CrossRef]
  27. Abe, S. Unconstrained large margin distribution machines. Pattern Recognit. Lett. 2017, 98, 96–102. [Google Scholar] [CrossRef] [Green Version]
  28. Belkin, M.; Niyogi, P.; Sindhwani, V. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. J. Mach. Learn. Res. 2006, 7, 2399–2434. [Google Scholar]
  29. Yeung, D.S.; Wang, D.; Ng, W.W.Y.; Tsang, E.C.C.; Wang, X. Structured large margin machines: Sensitive to data distributions. Mach. Learn. 2007, 68, 171–200. [Google Scholar] [CrossRef] [Green Version]
  30. Xue, H.; Chen, S.; Yang, Q. Structural Regularized Support Vector Machine: A Framework for Structural Large Margin Classifier. IEEE Trans. Neural Netw. 2011, 22, 573–587. [Google Scholar] [CrossRef]
  31. An, W.; Liang, M. A new intrusion detection method based on SVM with minimum within-class scatter. Secur. Commun. Netw. 2013, 6, 1064–1074. [Google Scholar] [CrossRef]
  32. An, W.; Liang, M. Fuzzy support vector machine based on within-class scatter for classification problems with outliers or noises. Neurocomputing 2013, 110, 101–110. [Google Scholar] [CrossRef]
  33. Zhang, L.; Zhou, W.-D. Fisher-regularized support vector machine. Inf. Sci. 2016, 343–344, 79–93. [Google Scholar] [CrossRef]
  34. Rubner, Y.; Tomasi, C.; Guibas, L.J. The Earth Mover’s Distance as a Metric for Image Retrieval. Int. J. Comput. Vis. 2000, 40, 99–121. [Google Scholar] [CrossRef]
  35. Kendall, A.; Gal, Y. What uncertainties do we need in bayesian deep learning for computer vision? Adv. Neural Inf. Process. Syst. 2017, 30, 5580–5590. [Google Scholar] [CrossRef]
  36. Zhu, Z.; Yi, Z.; Li, S.; Li, L. Deep Muti-Modal Generic Representation Auxiliary Learning Networks for End-to-End Radar Emitter Classification. Aerospace 2022, 9, 732. [Google Scholar] [CrossRef]
  37. Li, L.; Dong, Z.; Zhu, Z.; Jiang, Q. Deep-learning Hopping Capture Model for Automatic Modulation Classification of Wireless Communication Signals. IEEE Trans. Aerosp. Electron. Syst. 2022, 1–12. [Google Scholar] [CrossRef]
  38. Ribeiro, F.D.S.; Calivá, F.; Swainson, M.; Gudmundsson, K.; Leontidis, G.; Kollias, S. Deep Bayesian Self-Training. Neural Comput. Appl. 2020, 32, 4275–4291. [Google Scholar] [CrossRef] [Green Version]
  39. Zhang, X.; Zhou, X.; Lin, M.; Sun, J. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6848–6856. [Google Scholar] [CrossRef] [Green Version]
  40. Zhu, Z.; Ji, H.; Zhang, W.; Li, L.; Ji, T. Complex Convolutional Neural Network for Signal Representation and Its Application to Radar Emitter Recognition. IEEE Commun. Lett. 2023, 1. [Google Scholar] [CrossRef]
  41. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 60, 84–90. [Google Scholar] [CrossRef] [Green Version]
  42. Nagi, J.; Di Caro, G.A.; Giusti, A.; Nagi, F.; Gambardella, L.M. Convolutional Neural Support Vector Machines: Hybrid Visual Pattern Classifiers for Multi-robot Systems. In Proceedings of the 11th International Conference on Machine Learning and Applications, Boca Raton, FL, USA, 12–15 December 2012; pp. 27–32. [Google Scholar] [CrossRef]
  43. Niu, X.-X.; Suen, C.Y. A novel hybrid CNN–SVM classifier for recognizing handwritten digits. Pattern Recognit. 2012, 45, 1318–1325. [Google Scholar] [CrossRef]
  44. Elleuch, M.; Maalej, R.; Kherallah, M. A New Design Based-SVM of the CNN Classifier Architecture with Dropout for Offline Arabic Handwritten Recognition. Procedia Comput. Sci. 2016, 80, 1712–1723. [Google Scholar] [CrossRef] [Green Version]
  45. Tao, Q.-Q.; Zhan, S.; Li, X.-H.; Kurihara, T. Robust face detection using local CNN and SVM based on kernel combination. Neurocomputing 2016, 211, 98–105. [Google Scholar] [CrossRef]
  46. Wu, H.; Huang, Q.; Wang, D.; Gao, L. A CNN-SVM combined model for pattern recognition of knee motion using mechanomyography signals. J. Electromyogr. Kinesiol. 2018, 42, 136–142. [Google Scholar] [CrossRef] [PubMed]
  47. Burges, C.J.C. A Tutorial on Support Vector Machines for Pattern Recognition. Data Min. Knowl. Discov. 1998, 2, 121–167. [Google Scholar] [CrossRef]
  48. Huang, G.-B.; Wang, D.H.; Lan, Y. Extreme learning machines: A survey. Int. J. Mach. Learn. Cybern. 2011, 2, 107–122. [Google Scholar] [CrossRef]
Figure 1. Hyperplanes of SVM and EMD-SVM. (a) SVM; (b) LDM; (c) EMD-SVM.
Figure 2. Waveforms of class1–class4 signals in Data4. (a) class1; (b) class2; (c) class3; (d) class4.
Table 1. The attributes of the synthetic dataset.

Samples  | Gaussian Distribution     | Num | Mean   | Covariance
Positive | Gaussian distribution P   | 200 | [0; 5] | [0.3, 0; 0, 5]
Negative | Gaussian distribution N1  | 180 | [6; 8] | [1, 0; 0, 0.5]
Negative | Gaussian distribution N2  | 20  | [3; 2] | [1.5, 0; 0, 1.5]
Table 2. Attributes of experimental datasets.

Dataset     | Feature | Num | Class
Sonar       | 60      | 208 | 2
Breast      | 9       | 277 | 2
Cryotherapy | 6       | 90  | 2
Fertility   | 9       | 100 | 2
Wdbc        | 30      | 569 | 2
Ionosphere  | 34      | 351 | 2
Hepatitis   | 19      | 155 | 2
Spectf      | 44      | 267 | 2
Pima        | 8       | 768 | 2
Heart       | 13      | 303 | 2
Tae         | 5       | 151 | 3
Iris        | 4       | 150 | 3
Table 3. Accuracy comparisons with linear kernel.

Dataset     | SVM          | SRSVM        | LDM          | ODM          | ELM          | EMD-SVM
Sonar       | 77.11 ± 3.84 | 77.40 ± 1.50 | 76.83 ± 3.80 | 76.35 ± 4.30 | 78.26 ± 3.79 | 77.69 ± 4.12
Breast      | 70.86 ± 2.78 | 71.80 ± 2.20 | 70.72 ± 2.06 | 70.58 ± 2.26 | 57.62 ± 6.64 | 71.80 ± 2.65
Cryotherapy | 83.91 ± 5.10 | 85.35 ± 5.92 | 70.39 ± 4.99 | 68.77 ± 7.98 | 78.77 ± 7.33 | 85.64 ± 6.24
Fertility   | 86.80 ± 3.55 | 86.80 ± 3.55 | 86.80 ± 3.55 | 86.80 ± 3.55 | 74.2 ± 4.36  | 87.00 ± 4.02
Wdbc        | 95.12 ± 1.13 | 96.63 ± 0.99 | 94.88 ± 1.11 | 91.68 ± 1.83 | 87.75 ± 2.09 | 96.04 ± 1.14
Ionosphere  | 87.10 ± 1.69 | 87.84 ± 1.76 | 84.03 ± 2.74 | 84.49 ± 2.40 | 80.85 ± 2.05 | 87.73 ± 2.3
Hepatitis   | 80.75 ± 7.64 | 83.75 ± 4.28 | 83.75 ± 3.58 | 85.50 ± 4.68 | 76.25 ± 2.71 | 83.75 ± 4.28
Spectf      | 78.81 ± 3.75 | 79.85 ± 3.50 | 79.18 ± 2.32 | 79.10 ± 2.25 | 63.88 ± 5.34 | 79.78 ± 2.84
Pima        | 76.07 ± 1.88 | 76.11 ± 1.84 | 66.90 ± 1.94 | 67.14 ± 2.09 | 60.91 ± 2.48 | 76.30 ± 1.84
Heart       | 83.09 ± 2.63 | 83.28 ± 3.15 | 83.36 ± 2.15 | 82.76 ± 2.14 | 68.94 ± 4.48 | 83.42 ± 3.05
Tae         | 48.68 ± 3.59 | 51.04 ± 4.94 | 44.73 ± 5.10 | 44.87 ± 4.51 | 50.52 ± 4.85 | 51.45 ± 3.89
Iris        | 97.60 ± 2.41 | 97.80 ± 1.56 | 97.07 ± 1.24 | 97.07 ± 1.39 | 77.06 ± 6.85 | 97.87 ± 1.05

Note: the bold value indicates the best accuracy on each dataset.
Table 4. Accuracy comparisons with RBF kernel.

Dataset     | SVM          | SRSVM        | LDM          | ODM          | ELM          | EMD-SVM
Sonar       | 87.88 ± 2.44 | 87.40 ± 1.84 | 87.5 ± 2.26  | 86.54 ± 2.36 | 86.25 ± 2.31 | 87.98 ± 2.36
Breast      | 72.52 ± 3.50 | 74.39 ± 3.00 | 74.60 ± 2.81 | 74.68 ± 3.50 | 74.31 ± 3.01 | 74.46 ± 3.31
Cryotherapy | 84.00 ± 4.29 | 85.33 ± 5.36 | 83.11 ± 8.32 | 82.22 ± 7.33 | 82.66 ± 7.96 | 85.33 ± 6.21
Fertility   | 87.00 ± 4.02 | 86.80 ± 3.55 | 87.00 ± 4.02 | 86.80 ± 3.55 | 86.80 ± 3.55 | 87.00 ± 4.02
Wdbc        | 94.63 ± 1.44 | 94.60 ± 1.15 | 91.61 ± 1.51 | 91.65 ± 1.37 | 91.64 ± 1.37 | 94.35 ± 0.89
Ionosphere  | 94.71 ± 1.56 | 94.89 ± 1.20 | 94.94 ± 1.05 | 94.94 ± 1.05 | 94.94 ± 1.05 | 95.28 ± 1.11
Hepatitis   | 83.75 ± 4.28 | 83.75 ± 4.29 | 78.5 ± 4.74  | 78.25 ± 4.42 | 78.25 ± 4.41 | 84.00 ± 4.74
Spectf      | 80.22 ± 2.31 | 79.78 ± 3.25 | 76.27 ± 2.16 | 76.34 ± 2.33 | 76.26 ± 2.16 | 80.30 ± 2.86
Pima        | 76.33 ± 1.34 | 76.74 ± 1.70 | 76.84 ± 1.88 | 76.77 ± 1.77 | 76.85 ± 1.97 | 76.82 ± 1.47
Heart       | 83.22 ± 2.65 | 83.88 ± 3.44 | 84.14 ± 3.37 | 84.01 ± 3.22 | 84.14 ± 2.76 | 84.41 ± 2.68
Tae         | 47.63 ± 5.71 | 46.32 ± 3.33 | 50.92 ± 7.07 | 50.92 ± 7.07 | 41.57 ± 3.73 | 50.92 ± 7.07
Iris        | 97.73 ± 1.26 | 97.87 ± 1.59 | 97.06 ± 1.64 | 98.00 ± 1.13 | 92.00 ± 3.92 | 98.00 ± 1.13

Note: the bold value indicates the best accuracy on each dataset.
Table 5. Attributes of Caltech101 features.

Caltech101 Feature | Feature | Num | Class
LBP                | 37      | 720 | 9
SIFT               | 300     | 720 | 9
PHOG               | 40      | 720 | 9
Table 6. Accuracy comparisons with linear kernel.

Dataset | SVM          | SRSVM        | LDM          | ODM          | ELM          | EMD-SVM
LBP     | 60.34 ± 3.58 | 60.87 ± 2.18 | 40.90 ± 4.78 | 49.09 ± 2.49 | 60.55 ± 4.93 | 61.11 ± 1.96
SIFT    | 82.36 ± 2.14 | 82.43 ± 2.75 | 74.24 ± 3.16 | 78.12 ± 3.37 | 78.75 ± 1.83 | 83.26 ± 3.14
PHOG    | 50.69 ± 2.99 | 51.59 ± 3.60 | 41.46 ± 3.43 | 45.35 ± 3.63 | 44.03 ± 6.08 | 51.94 ± 3.87

Note: the bold value indicates the best accuracy on each dataset.
Table 7. Details of the four deep features.

Dataset    | CNN Feature | Num | Class
DSLR-Fc6   | 4096        | 157 | 10
DSLR-Fc7   | 4096        | 157 | 10
Amazon-Fc6 | 4096        | 958 | 10
Amazon-Fc7 | 4096        | 958 | 10
Table 8. Accuracy comparisons with linear kernel.

Dataset    | SVM          | SRSVM        | LDM          | ODM          | ELM          | EMD-SVM
DSLR-Fc6   | 93.92 ± 2.31 | 94.05 ± 3.16 | 95.18 ± 2.58 | 95.44 ± 2.54 | 94.55 ± 2.31 | 96.45 ± 2.04
DSLR-Fc7   | 94.93 ± 2.05 | 94.93 ± 2.60 | 96.20 ± 2.06 | 96.32 ± 2.26 | 94.81 ± 3.29 | 96.96 ± 1.99
AMAZON-Fc6 | 94.07 ± 0.75 | 94.17 ± 0.66 | 94.53 ± 0.83 | 94.34 ± 0.66 | 91.19 ± 0.63 | 94.47 ± 0.69
AMAZON-Fc7 | 94.55 ± 0.91 | 94.82 ± 0.80 | 94.90 ± 0.82 | 94.92 ± 0.75 | 92.88 ± 1.04 | 95.01 ± 0.81

Note: the bold value indicates the best accuracy on each dataset.
Table 9. Accuracy comparisons with RBF kernel.

Dataset    | SVM          | MLP          | EMD-SVM (Linear Kernel) | EMD-SVM (RBF Kernel)
DSLR-Fc6   | 93.92 ± 2.31 | 96.71 ± 1.59 | 96.45 ± 2.04            | 96.95 ± 3.16
DSLR-Fc7   | 94.93 ± 2.05 | 96.89 ± 2.21 | 96.96 ± 1.99            | 96.99 ± 1.34
AMAZON-Fc6 | 94.07 ± 0.75 | 94.40 ± 0.88 | 94.47 ± 0.69            | 94.58 ± 0.94
AMAZON-Fc7 | 94.55 ± 0.91 | 95.21 ± 1.21 | 95.01 ± 0.81            | 95.26 ± 1.02

Note: the bold value indicates the best accuracy on each dataset.
Table 10. Attributes of the radar datasets.

Dataset | Feature | Num  | Class
Data1   | 5000    | 8332 | 7
Data2   | 8192    | 8332 | 7
Data3   | 4999    | 8332 | 7
Data4   | 2500    | 8332 | 7
Table 11. Accuracy comparisons with linear kernel.

Dataset | SVM          | SRSVM        | LDM          | ODM          | ELM          | EMD-SVM
Data1   | 54.97 ± 3.65 | 55.44 ± 3.24 | 46.23 ± 3.44 | 50.23 ± 4.05 | 46.59 ± 2.58 | 55.51 ± 3.45
Data2   | 48.02 ± 3.52 | 48.26 ± 3.30 | 40.72 ± 3.65 | 43.29 ± 2.44 | 45.27 ± 3.89 | 48.74 ± 3.36
Data3   | 45.69 ± 3.09 | 45.69 ± 3.09 | 43.47 ± 4.28 | 42.81 ± 1.76 | 46.29 ± 3.38 | 46.41 ± 3.27
Data4   | 50.65 ± 2.43 | 50.71 ± 2.55 | 43.89 ± 2.04 | 45.63 ± 2.59 | 46.28 ± 3.49 | 51.14 ± 1.95

Note: the bold value indicates the best accuracy on each dataset.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Feng, R.; Dong, H.; Li, X.; Gu, Z.; Tian, R.; Li, H. An Improved SVM with Earth Mover’s Distance Regularization and Its Application in Pattern Recognition. Electronics 2023, 12, 645. https://doi.org/10.3390/electronics12030645

