SAR Target Recognition via Supervised Discriminative Dictionary Learning and Sparse Representation of the SAR-HOG Feature
"> Figure 1
<p>Scheme of the ratio of average (ROA). (<b>a</b>) The ratio of the local means for the horizontal direction. (<b>b</b>) The ratio of the local means for the vertical direction.</p> "> Figure 2
<p>Scheme of SAR-HOG.</p> "> Figure 3
<p>The recognition rate versus the average region size and histogram bins. (<b>a</b>) The recognition rate versus the average region size <math display="inline"> <semantics> <mrow> <mi>w</mi> <mi>i</mi> <mi>n</mi> </mrow> </semantics> </math>. <math display="inline"> <semantics> <mrow> <mi>w</mi> <mi>i</mi> <mi>n</mi> </mrow> </semantics> </math> is 1, 3, 5, 7, 9, 11, 13 and 15. (<b>b</b>) The recognition rate versus the histogram bins. Bins are 3, 5, 7, 9, 11, 13 and 17.</p> "> Figure 4
<p>The recognition rate versus <span class="html-italic">b</span> and <math display="inline"> <semantics> <msub> <mi>P</mi> <mi>k</mi> </msub> </semantics> </math> in supervised discriminative dictionary learning (SDDL). (<b>a</b>) The recognition rate versus <span class="html-italic">b</span>. <span class="html-italic">b</span> is 0.01, 0.05, 0.1, 0.5 and 1. (<b>b</b>) The recognition rate versus <math display="inline"> <semantics> <msub> <mi>P</mi> <mi>k</mi> </msub> </semantics> </math>. <math display="inline"> <semantics> <msub> <mi>P</mi> <mi>k</mi> </msub> </semantics> </math> is 32, 48, 64, 80, 96, 112 and 128.</p> "> Figure 5
<p>Recognition rates of different methods versus the training set.</p> ">
Abstract
1. Introduction
2. Related Work
2.1. Work Related to Feature Extraction for SAR ATR
2.2. Work Related to Classification for SAR ATR
3. SAR-HOG
3.1. Gradient Computation
3.2. Orientation Binning
3.3. Normalization and Feature Description
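The SAR-HOG pipeline outlined in Sections 3.1–3.3 (ROA-based gradient computation, orientation binning, normalization) can be sketched numerically. The snippet below is not the paper's implementation: as assumptions, it uses a win×win local mean on each side of the pixel, the log of the ROA ratio as the gradient, unsigned orientations, and L2 normalization of a single-cell histogram; the paper's window geometry, bin count and block layout may differ.

```python
import numpy as np

def roa_gradients(img, win=3, eps=1e-6):
    """Speckle-robust gradients from the ratio of average (ROA).

    For each pixel, compare the mean amplitude of the win x win regions on
    opposite sides (left/right, up/down); the log of the ratio behaves like
    a gradient that is stable under multiplicative speckle noise.
    """
    h, w = img.shape
    pad = np.pad(img.astype(float), win, mode='edge')

    def side_mean(dr, dc):
        # mean of the win x win window offset by (dr, dc) from the pixel
        out = np.empty((h, w))
        for i in range(h):
            for j in range(w):
                r, c = i + win + dr, j + win + dc
                out[i, j] = pad[r:r + win, c:c + win].mean()
        return out

    right, left = side_mean(0, 1), side_mean(0, -win)
    down, up = side_mean(1, 0), side_mean(-win, 0)
    g_h = np.log((right + eps) / (left + eps))   # horizontal ROA gradient
    g_v = np.log((down + eps) / (up + eps))      # vertical ROA gradient
    return np.hypot(g_h, g_v), np.arctan2(g_v, g_h)   # magnitude, orientation

def cell_histogram(mag, ori, bins=9):
    # orientation binning (Section 3.2): unsigned angles in [0, pi),
    # each pixel votes with its gradient magnitude
    hist, _ = np.histogram(np.mod(ori, np.pi), bins=bins,
                           range=(0.0, np.pi), weights=mag)
    # normalization (Section 3.3): L2 norm of the descriptor
    return hist / (np.linalg.norm(hist) + 1e-12)

# toy SAR-like amplitude patch: bright vertical strip on a speckled background
rng = np.random.default_rng(0)
img = rng.gamma(4.0, 1.0, (24, 24))
img[:, 10:14] *= 8.0
mag, ori = roa_gradients(img)
feat = cell_histogram(mag, ori, bins=9)
print(feat.shape, round(float(np.linalg.norm(feat)), 3))  # (9,) 1.0
```

Taking the logarithm of the ratio (rather than a difference of means) is what makes the gradient invariant to the multiplicative speckle model; an additive gradient on SAR amplitude would fire on homogeneous bright regions.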
4. SAR ATR Algorithm via Supervised Discriminative Dictionary Learning of SAR-HOG
4.1. Review of the Sparse Representation Classifier and Dictionary Learning
4.2. Supervised Discriminative Dictionary Learning
4.3. Optimization Procedure
- Updating sparse codes: with all of the other factors in Equation (13) fixed, the k-th sparse-code matrix is obtained by solving the following problem:
- Updating class-specific sub-dictionaries: with the sparse codes, the shared sub-dictionary and the other class-specific sub-dictionaries fixed, the k-th class-specific sub-dictionary is obtained by solving the following problem:
- Updating the shared sub-dictionary: with the sparse codes and the class-specific sub-dictionaries fixed, the shared sub-dictionary is obtained by solving the following problem:
Algorithm 1: Supervised discriminative dictionary learning (SDDL).
Input: feature vectors of the ROIs and their class labels; the regularization parameters λ and b; the sizes of the sub-dictionaries P_k; the stopping thresholds or the maximum number of iterations M.
Initialization: initialize the sub-dictionaries with K-SVD; initialize the sparse codes and the classifier parameter by Equation (12).
Repeat
  for k = 1 to K do
    Update the sparse codes of class k with the LARS algorithm (see Equation (14)).
  end for
  for k = 1 to K do
    Update the k-th class-specific sub-dictionary by gradient descent (see Equation (15)).
  end for
  Update the shared sub-dictionary by gradient descent (see Equation (16)).
Until the stopping criterion is reached (see Equation (17)).
Output: the learned dictionary and the classifier parameter obtained by Equation (18).
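The alternating structure of Algorithm 1 can be illustrated with a minimal numerical sketch. This is loudly not the paper's SDDL: the discriminative and shared-dictionary terms are omitted, ISTA stands in for the LARS sparse-coding step, and a normalized gradient step stands in for the dictionary update, on a plain reconstruction objective.

```python
import numpy as np

def ista(D, y, lam=0.1, n_iter=200):
    """l1 sparse coding min_x 0.5*||y - D x||^2 + lam*||x||_1 via ISTA
    (a simple stand-in for the LARS step of Algorithm 1)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = x - D.T @ (D @ x - y) / L      # gradient step on the quadratic term
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
    return x

def dict_learn(Y, n_atoms=8, lam=0.1, n_outer=15, step=0.5, seed=0):
    """Alternate sparse coding and a gradient step on the dictionary."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_outer):
        X = np.stack([ista(D, y, lam) for y in Y.T], axis=1)  # update codes
        G = (D @ X - Y) @ X.T                                 # gradient of 0.5||Y-DX||^2 in D
        D = D - step * G / max(1.0, np.linalg.norm(G))        # normalized descent step
        D /= np.linalg.norm(D, axis=0) + 1e-12                # keep atoms unit-norm
    X = np.stack([ista(D, y, lam) for y in Y.T], axis=1)      # final codes
    return D, X

rng = np.random.default_rng(1)
Y = rng.standard_normal((20, 40))   # 40 toy feature vectors of dimension 20
D, X = dict_learn(Y)
err = float(np.linalg.norm(Y - D @ X) / np.linalg.norm(Y))
print(D.shape, X.shape, err <= 1.0)
```

Constraining atoms to unit norm after each update mirrors the usual dictionary-learning convention: without it, the l1 penalty can be defeated by scaling atoms up and codes down.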
4.4. SAR ATR Algorithm
Algorithm 2: SAR ATR via SDDLSR.
Input: slice images of the ROIs from the training and testing sets, and the class labels of the training set.
Feature extraction: compute the feature of each slice image; in this paper, the SAR-HOG feature is computed following the description in Section 3.
Classification: learn the dictionary and the classifier parameter using Algorithm 1; then decompose each test sample over the learned dictionary and identify it by the decision rule (see Equation (20)).
Output: the identity of the test sample.
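The classification step of Algorithm 2 can be illustrated with a generic sparse-representation decision rule. This is not the paper's Equation (20) classifier: as an assumption, the toy below codes the test sample over a labeled dictionary and assigns the class whose atoms leave the smallest reconstruction residual, i.e. the classic SRC rule reviewed in Section 4.1.

```python
import numpy as np

def sparse_code(D, y, lam=0.05, n_iter=300):
    # ISTA for min_x 0.5*||y - D x||^2 + lam*||x||_1
    L = np.linalg.norm(D, 2) ** 2
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = x - D.T @ (D @ x - y) / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)
    return x

def src_classify(D, atom_labels, y):
    """Keep only each class's coefficients and pick the class whose
    partial reconstruction leaves the smallest residual."""
    x = sparse_code(D, y)
    best, best_r = None, np.inf
    for k in np.unique(atom_labels):
        xk = np.where(atom_labels == k, x, 0.0)   # zero out other classes
        r = np.linalg.norm(y - D @ xk)
        if r < best_r:
            best, best_r = int(k), r
    return best

# toy two-class dictionary: class-0 atoms near e1, class-1 atoms near e2
rng = np.random.default_rng(2)
D = np.column_stack([np.array(e, dtype=float) + 0.05 * rng.standard_normal(3)
                     for e in ([1, 0, 0], [1, 0, 0], [0, 1, 0], [0, 1, 0])])
D /= np.linalg.norm(D, axis=0)
atom_labels = np.array([0, 0, 1, 1])
pred = src_classify(D, atom_labels, np.array([0.9, 0.1, 0.0]))
print(pred)  # the class-0-like sample is assigned to class 0
```

With a learned discriminative dictionary, the same decomposition additionally feeds a linear classifier on the sparse codes, which is what the SDDL classifier parameter in Algorithm 1 provides.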
5. Experiments and Discussions
5.1. Experiment Setup
5.2. Parameter Setting
5.3. Effectiveness of SAR-HOG
5.4. Ten Targets’ ATR under SOC
5.5. Four Targets’ ATR under EOC
5.6. Experiment on Small Training Set
5.7. Time Consumption
6. Conclusions
Acknowledgments
Author Contributions
Conflicts of Interest
References
- Zhao, Q.; Principe, J.C. Support vector machine for SAR automatic target recognition. IEEE Trans. Aerosp. Electron. Syst. 2001, 2, 643–654. [Google Scholar] [CrossRef]
- Srinivas, U.; Monga, V.; Raj, R.G. SAR automatic target recognition using discriminative graphical models. IEEE Trans. Aerosp. Electron. Syst. 2014, 1, 591–606. [Google Scholar] [CrossRef]
- Dong, G.G.; Kuang, G.Y.; Wang, N.; Zhao, L.J.; Lu, J. SAR target recognition via joint sparse representation of monogenic signal. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 7, 3316–3328. [Google Scholar] [CrossRef]
- Mishra, A.K. Validation of PCA and LDA for SAR ATR. IEEE Tencon 2008, 10, 1–6. [Google Scholar]
- Huang, X.Y.; Qiao, H.; Zhang, B. SAR target configuration recognition using tensor global and local discriminant embedding. IEEE Geosci. Remote Sens. Lett. 2016, 2, 222–226. [Google Scholar] [CrossRef]
- Zhou, J.X.; Shi, Z.G.; Cheng, X.; Fu, Q. Automatic target recognition of SAR images based on global scattering center model. IEEE Trans. Geosci. Remote Sens. 2011, 10, 3713–3729. [Google Scholar]
- Clemente, C.; Pallotta, L.; Proudler, I.; Maio, A.D.; Soraghan, J.J.; Farina, A. Pseudo-Zernike based multi-pass automatic target recognition from multi-channel SAR. IET Radar Sonar Navig. 2015, 9, 457–466. [Google Scholar] [CrossRef]
- Ding, J.; Chen, B.; Liu, H.W.; Huang, M.Y. Convolutional neural network with data augmentation for SAR target recognition. IEEE Geosci. Remote Sens. Lett. 2016, 3, 1–5. [Google Scholar] [CrossRef]
- Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 25 June 2005.
- Wright, J.; Yang, A.; Ganesh, A.; Sastry, S.; Ma, Y. Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 2, 210–227. [Google Scholar] [CrossRef] [PubMed]
- Olshausen, B.A.; Field, D.J. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature 1996, 6, 607–609. [Google Scholar] [CrossRef] [PubMed]
- Mairal, J.; Ponce, J.; Sapiro, G.; Zisserman, A.; Bach, F. Supervised dictionary learning. In Proceedings of the Neural Information Processing Systems 21, Vancouver, BC, Canada, 8–10 December 2008.
- Ramirez, I.; Sprechmann, P.; Sapiro, G. Classification and clustering via dictionary learning with structured incoherence and shared features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010.
- Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 10, 91–110. [Google Scholar] [CrossRef]
- Weinmann, M. Visual features: From early concepts to modern computer vision. In Advanced Topics in Computer Vision, Advances in Computer Vision and Pattern Recognition; Farinella, G.M., Battiato, S., Cipolla, R., Eds.; Springer-Verlag: London, UK, 2013; pp. 1–34. [Google Scholar]
- Tuytelaars, T.; Mikolajczyk, K. Local invariant feature detectors: A survey. Found. Trends Comput. Graph. Vis. 2008, 1, 177–280. [Google Scholar] [CrossRef] [Green Version]
- Dai, D.; Yang, W.; Sun, H. Multilevel local pattern histogram for SAR image classification. IEEE Geosci. Remote Sens. Lett. 2011, 3, 225–229. [Google Scholar] [CrossRef]
- Cui, S.; Dumitru, C.O.; Datcu, M. Ratio-Detector-Based feature extraction for very high resolution SAR image patch indexing. IEEE Geosci. Remote Sens. Lett. 2013, 9, 1175–1179. [Google Scholar]
- Dellinger, F.; Delon, J.; Gousseau, Y.; Michel, J.; Tupin, F. SAR-SIFT: A SIFT-Like algorithm for SAR images. IEEE Trans. Geosci. Remote Sens. 2015, 1, 453–466. [Google Scholar] [CrossRef] [Green Version]
- Yang, F.; Gao, W.; Xu, B.; Yang, J. Multi-Frequency polarimetric SAR classification based on riemannian manifold and simultaneous sparse representation. Remote Sens. 2015, 7, 8469–8488. [Google Scholar] [CrossRef]
- Sun, X.; Nasrabadi, N.M.; Tran, T.D. Task-Driven dictionary learning for hyperspectral image classification with structured sparsity constraints. IEEE Trans. Geosci. Remote Sens. 2015, 8, 4457–4471. [Google Scholar] [CrossRef]
- Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 2006, 11, 4311–4322. [Google Scholar] [CrossRef]
- Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G. Online dictionary learning for sparse coding. Proc. ICML 2009, 6, 689–696. [Google Scholar]
- Mairal, J.; Bach, F.; Ponce, J. Task-driven dictionary learning. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 4, 791–804. [Google Scholar] [CrossRef] [PubMed]
- Gao, S.; Tsang, I.W.H.; Ma, Y. Learning category-specific dictionary and shared dictionary for fine-grained image categorization. IEEE Trans. Image Process. 2014, 2, 623–634. [Google Scholar]
- Zhang, Q.; Li, B. Discriminative K-SVD for dictionary learning in face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, 13–18 June 2010.
- Jiang, Z.L.; Lin, Z.; Davis, L.S. Label consistent K-SVD: Learning a discriminative dictionary for recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 11, 2651–2664. [Google Scholar] [CrossRef] [PubMed]
- Gangeh, M.J.; Farahat, A.K.; Ghodsi, A.; Kamel, M.S. Supervised dictionary learning and sparse representation–A Review. Comput. Sci. 2015, 2, 1–56. [Google Scholar]
- Keydel, E.; Lee, S.; Moore, J. MSTAR extended operating conditions: A tutorial. Proc. SPIE 1997, 4, 228–242. [Google Scholar]
- Touzi, R.; Lopes, A.; Bousquet, P. A statistical and geometrical edge detector for SAR images. IEEE Trans. Geosci. Remote Sens. 1988, 11, 764–773. [Google Scholar] [CrossRef]
- Mairal, J.; Bach, F.; Ponce, J. Sparse modeling for image and vision processing. Found. Trends Comput. Graph. Vis. 2014, 12, 85–283. [Google Scholar] [CrossRef] [Green Version]
- Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 3, 1–27. [Google Scholar] [CrossRef]
Target | 2S1 | BRDM2 | BTR60 | D7 | T62 | ZIL131 | ZSU23/4 | BTR70 | T72 | BMP2 |
---|---|---|---|---|---|---|---|---|---|---|
Training () | 299 | 298 | 256 | 299 | 299 | 299 | 299 | 233 | (SN_132) 232 | (SN_9563) 233 |
Testing () | 274 | 274 | 195 | 274 | 273 | 274 | 274 | 196 | (SN_132) 196 (SN_812) 195 (SN_s7) 191 | (SN_9563) 195 (SN_9566) 196 (SN_c21) 196 |
Target | 2S1 | BRDM2 | ZSU23/4 | T72(SN_A64) |
---|---|---|---|---|
Training () | 299 | 298 | 299 | 299 |
Testing () | 288 | 287 | 288 | 288 |
Testing () | 303 | 303 | 303 | 303 |
Parameter | Bins | Cell (Pixels) | Block (Cells) | Stride (Pixels) | λ | b | M | ||
---|---|---|---|---|---|---|---|---|---|
Value | 11 | 11 | 16 | 96 | 20 |
Cell (Pixels) \ Block (Cells) | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
---|---|---|---|---|---|---|---|---|---
4 | 0.9661 | 0.9783 | 0.9783 | 0.9739 | 0.9704 | 0.9652 | 0.9513 | 0.9235 | 0.9139
6 | 0.9609 | 0.9713 | 0.9731 | 0.9661 | 0.9383 | 0.9261 | 0.9304 | 0.9348 | 0.9287
8 | 0.9661 | 0.9714 | 0.9661 | 0.9522 | 0.9357 | 0.9453 | 0.9470 | - | -
10 | 0.9505 | 0.9392 | 0.9383 | 0.9427 | 0.9505 | - | - | - | -
12 | 0.9392 | 0.9340 | 0.9314 | 0.9349 | - | - | - | - | -
Cell (Pixels) \ Block (Cells) | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
---|---|---|---|---|---|---|---|---|---
4 | 36.5591 | 24.3237 | 28.3787 | 19.0616 | 16.4551 | 9.5129 | 16.6214 | 5.3417 | 8.0049
6 | 4.8953 | 5.0265 | 3.2544 | 2.5986 | 1.1269 | 2.0317 | 0.2095 | 0.3557 | 0.6282
8 | 2.0408 | 1.1931 | 1.1706 | 0.5677 | 0.0744 | 0.1299 | 0.2157 | - | -
10 | 0.5514 | 0.3604 | 0.2184 | 0.0379 | 0.0747 | - | - | - | -
12 | 0.2355 | 0.0817 | 0.0255 | 0.0404 | - | - | - | - | -
Stride (Pixels) | 0 | 8 | 16 | 24
---|---|---|---|---
Recognition rate | 0.9505 | 0.9505 | 0.9661 | 0.9653
2S1 | BRDM2 | BTR60 | D7 | T62 | ZIL131 | ZSU23/4 | BTR70 | T72 | BMP2 |
---|---|---|---|---|---|---|---|---|---|---|
2S1 | 0.9380 | 0 | 0 | 0 | 0.0073 | 0 | 0.0547 | 0 | 0 | 0 |
BRDM2 | 0.0051 | 0.9590 | 0 | 0 | 0 | 0 | 0 | 0.0308 | 0 | 0.0051 |
BTR60 | 0 | 0.0154 | 0.8769 | 0 | 0 | 0.0103 | 0 | 0.0872 | 0.0051 | 0.0051 |
D7 | 0 | 0 | 0 | 0.9818 | 0.0109 | 0.0073 | 0 | 0 | 0 | 0 |
T62 | 0 | 0 | 0 | 0.0440 | 0.9413 | 0.0037 | 0.0110 | 0 | 0 | 0 |
ZIL131 | 0 | 0 | 0.0036 | 0.0073 | 0 | 0.9891 | 0 | 0 | 0 | 0 |
ZSU23/4 | 0.0146 | 0 | 0 | 0 | 0 | 0 | 0.9854 | 0 | 0 | 0 |
BTR70 | 0 | 0 | 0.0051 | 0 | 0 | 0 | 0 | 0.9898 | 0 | 0.0051
T72 | 0 | 0.0086 | 0.0120 | 0 | 0 | 0 | 0 | 0.0258 | 0.8746 | 0.0790 |
BMP2 | 0 | 0.0221 | 0.0017 | 0 | 0 | 0 | 0 | 0.0358 | 0.0801 | 0.8603 |
recognition rate | 0.9406 |
Training()—Testing() | Training()—Testing() | |||||||
---|---|---|---|---|---|---|---|---|
2S1 | BRDM2 | ZSU23/4 | T72(SN_A64) | 2S1 | BRDM2 | ZSU23/4 | T72(SN_A64) | |
2S1 | 0.9757 | 0.0243 | 0 | 0 | 0.7459 | 0.2013 | 0.0462 | 0.0066 |
BRDM2 | 0.0523 | 0.8014 | 0.0070 | 0.1394 | 0.0528 | 0.8218 | 0.0066 | 0.1188 |
ZSU23/4 | 0.0382 | 0.0069 | 0.9514 | 0.0035 | 0.1254 | 0.0792 | 0.7195 | 0.0759 |
T72(SN_A64) | 0.0104 | 0 | 0.0521 | 0.9375 | 0.1155 | 0.0495 | 0.2475 | 0.5875 |
recognition rate | 0.9165 | 0.7186 |
2S1 | BRDM2 | BTR60 | D7 | T62 | ZIL131 | ZSU23/4 | BTR70 | T72 | BMP2 |
---|---|---|---|---|---|---|---|---|---|---|
2S1 | 0.9782 | 0 | 0 | 0 | 0.0036 | 0 | 0.0182 | 0 | 0 | 0 |
BRDM2 | 0.0036 | 0.9307 | 0.0109 | 0 | 0 | 0 | 0 | 0.0109 | 0.0365 | 0.0073 |
BTR60 | 0 | 0.0154 | 0.9538 | 0 | 0 | 0 | 0 | 0.0256 | 0 | 0.0051 |
D7 | 0 | 0 | 0 | 0.9854 | 0 | 0.0146 | 0 | 0 | 0 | 0 |
T62 | 0 | 0 | 0 | 0.0256 | 0.9634 | 0.0110 | 0 | 0 | 0 | 0 |
ZIL131 | 0 | 0 | 0 | 0.0109 | 0 | 0.9891 | 0 | 0 | 0 | 0 |
ZSU23/4 | 0.0073 | 0 | 0 | 0 | 0 | 0 | 0.9927 | 0 | 0 | 0 |
BTR70 | 0 | 0.0051 | 0.0204 | 0 | 0 | 0 | 0 | 0.9745 | 0 | 0
T72 | 0 | 0.0309 | 0.0241 | 0 | 0 | 0 | 0 | 0.0241 | 0.8986 | 0.0223 |
BMP2 | 0 | 0.0716 | 0.0153 | 0 | 0 | 0 | 0 | 0.0937 | 0.1056 | 0.7138 |
recognition rate | 0.9380 |
Training()—Testing() | Training()—Testing() | |||||||
---|---|---|---|---|---|---|---|---|
2S1 | BRDM2 | ZSU23/4 | T72(SN_A64) | 2S1 | BRDM2 | ZSU23/4 | T72(SN_A64) | |
2S1 | 0.9618 | 0.0208 | 0.0069 | 0.0104 | 0.8548 | 0.1221 | 0.0099 | 0.0132 |
BRDM2 | 0.0244 | 0.9686 | 0.0035 | 0.0035 | 0.0594 | 0.9340 | 0.0033 | 0.0033 |
ZSU23/4 | 0.0347 | 0.0208 | 0.9271 | 0.0174 | 0.1023 | 0.1584 | 0.6634 | 0.0759 |
T72(SN_A64) | 0.0938 | 0.0243 | 0.1285 | 0.7535 | 0.2739 | 0.2409 | 0.1386 | 0.3465 |
recognition rate | 0.9028 | 0.6997 |
2S1 | BRDM2 | BTR60 | D7 | T62 | ZIL131 | ZSU23/4 | BTR70 | T72 | BMP2 |
---|---|---|---|---|---|---|---|---|---|---|
2S1 | 0.9891 | 0 | 0 | 0 | 0 | 0 | 0.0109 | 0 | 0 | 0 |
BRDM2 | 0 | 0.9635 | 0.0073 | 0 | 0 | 0 | 0 | 0.0182 | 0.0073 | 0.0036 |
BTR60 | 0.0051 | 0 | 0.9333 | 0 | 0 | 0 | 0 | 0.0410 | 0.0103 | 0.0103 |
D7 | 0 | 0 | 0 | 0.9927 | 0 | 0.0073 | 0 | 0 | 0 | 0 |
T62 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
ZIL131 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
ZSU23/4 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
BTR70 | 0 | 0.0051 | 0.0051 | 0 | 0 | 0 | 0 | 0.9694 | 0 | 0.0204
T72 | 0 | 0.0172 | 0.0155 | 0 | 0 | 0 | 0 | 0.0086 | 0.8883 | 0.0704 |
BMP2 | 0 | 0.0290 | 0.0051 | 0 | 0 | 0 | 0 | 0.0239 | 0.0545 | 0.8876 |
recognition rate | 0.9624 |
Training()—Testing() | Training()—Testing() | |||||||
---|---|---|---|---|---|---|---|---|
2S1 | BRDM2 | ZSU23/4 | T72(SN_A64) | 2S1 | BRDM2 | ZSU23/4 | T72(SN_A64) | |
2S1 | 0.9792 | 0.0035 | 0.0104 | 0.0069 | 0.8515 | 0.0495 | 0.0660 | 0.0330 |
BRDM2 | 0.0070 | 0.9930 | 0 | 0 | 0.0165 | 0.9571 | 0 | 0.0264 |
ZSU23/4 | 0.0035 | 0.0069 | 0.9722 | 0.0174 | 0.0297 | 0.0165 | 0.7954 | 0.1584 |
T72(SN_A64) | 0.0417 | 0 | 0.0382 | 0.9201 | 0.1452 | 0.0495 | 0.1749 | 0.6304 |
recognition rate | 0.9661 | 0.8086 |
2S1 | BRDM2 | BTR60 | D7 | T62 | ZIL131 | ZSU23/4 | BTR70 | T72 | BMP2 |
---|---|---|---|---|---|---|---|---|---|---|
2S1 | 0.9854 | 0 | 0 | 0 | 0 | 0 | 0.0146 | 0 | 0 | 0 |
BRDM2 | 0 | 0.9526 | 0.0109 | 0 | 0 | 0 | 0 | 0.0219 | 0.0036 | 0.0109 |
BTR60 | 0.0051 | 0.0205 | 0.8769 | 0 | 0 | 0 | 0 | 0.0410 | 0.0410 | 0.0154 |
D7 | 0 | 0 | 0 | 0.9854 | 0.0073 | 0.0073 | 0 | 0 | 0 | 0 |
T62 | 0 | 0 | 0 | 0.0037 | 0.9963 | 0 | 0 | 0 | 0 | 0 |
ZIL131 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
ZSU23/4 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
BTR70 | 0 | 0.0051 | 0.0306 | 0 | 0 | 0 | 0 | 0.9490 | 0 | 0.0153
T72 | 0 | 0.0103 | 0.0137 | 0 | 0 | 0 | 0 | 0.0120 | 0.8986 | 0.0653 |
BMP2 | 0 | 0.0494 | 0.0307 | 0 | 0 | 0 | 0 | 0.0477 | 0.1295 | 0.7428 |
recognition rate | 0.9387 |
2S1 | BRDM2 | BTR60 | D7 | T62 | ZIL131 | ZSU23/4 | BTR70 | T72 | BMP2 |
---|---|---|---|---|---|---|---|---|---|---|
2S1 | 0.9927 | 0 | 0 | 0 | 0 | 0 | 0.0073 | 0 | 0 | 0 |
BRDM2 | 0 | 0.9051 | 0.0255 | 0 | 0 | 0 | 0 | 0.0292 | 0.0182 | 0.0219 |
BTR60 | 0 | 0.0051 | 0.9128 | 0 | 0 | 0 | 0 | 0.0462 | 0.0154 | 0.0205 |
D7 | 0 | 0 | 0 | 0.9927 | 0 | 0.0073 | 0 | 0 | 0 | 0 |
T62 | 0 | 0 | 0 | 0.0037 | 0.9963 | 0 | 0 | 0 | 0 | 0 |
ZIL131 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
ZSU23/4 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
BTR70 | 0 | 0.0153 | 0.0204 | 0 | 0 | 0 | 0 | 0.9439 | 0 | 0.0204
T72 | 0 | 0.0275 | 0.0395 | 0 | 0 | 0 | 0 | 0.0052 | 0.8162 | 0.1117 |
BMP2 | 0 | 0.0324 | 0.0136 | 0 | 0 | 0 | 0 | 0.0273 | 0.0596 | 0.8671 |
recognition rate | 0.9427 |
2S1 | BRDM2 | BTR60 | D7 | T62 | ZIL131 | ZSU23/4 | BTR70 | T72 | BMP2 |
---|---|---|---|---|---|---|---|---|---|---|
2S1 | 0.9927 | 0 | 0 | 0 | 0 | 0 | 0.0073 | 0 | 0 | 0 |
BRDM2 | 0 | 0.9124 | 0.0109 | 0 | 0 | 0 | 0 | 0.0438 | 0.0255 | 0.0073 |
BTR60 | 0 | 0.0103 | 0.9385 | 0 | 0 | 0 | 0 | 0.0359 | 0.0051 | 0.0103 |
D7 | 0 | 0 | 0 | 0.9818 | 0.0109 | 0.0073 | 0 | 0 | 0 | 0 |
T62 | 0 | 0 | 0 | 0.0037 | 0.9963 | 0 | 0 | 0 | 0 | 0 |
ZIL131 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
ZSU23/4 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
BTR70 | 0 | 0.0051 | 0.0102 | 0 | 0 | 0 | 0 | 0.9643 | 0 | 0.0204
T72 | 0 | 0.0258 | 0.0378 | 0 | 0 | 0 | 0 | 0.0120 | 0.8333 | 0.0911 |
BMP2 | 0 | 0.0290 | 0.0102 | 0 | 0 | 0 | 0 | 0.0392 | 0.0596 | 0.8620 |
recognition rate | 0.9481 |
2S1 | BRDM2 | BTR60 | D7 | T62 | ZIL131 | ZSU23/4 | BTR70 | T72 | BMP2 |
---|---|---|---|---|---|---|---|---|---|---|
2S1 | 0.9781 | 0 | 0 | 0 | 0 | 0 | 0.0219 | 0 | 0 | 0 |
BRDM2 | 0 | 0.9453 | 0.0146 | 0 | 0 | 0 | 0 | 0.0219 | 0.0146 | 0.0036 |
BTR60 | 0 | 0.0051 | 0.9436 | 0 | 0 | 0 | 0 | 0.0103 | 0.0256 | 0.0154 |
D7 | 0 | 0 | 0 | 0.9854 | 0.0036 | 0.0109 | 0 | 0 | 0 | 0 |
T62 | 0 | 0 | 0 | 0.0073 | 0.9927 | 0 | 0 | 0 | 0 | 0 |
ZIL131 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
ZSU23/4 | 0.0036 | 0 | 0 | 0 | 0.0036 | 0 | 0.9927 | 0 | 0 | 0 |
BTR70 | 0 | 0 | 0.0204 | 0 | 0 | 0 | 0 | 0.9694 | 0 | 0.0102
T72 | 0 | 0.0086 | 0.0275 | 0 | 0 | 0 | 0 | 0.0086 | 0.8814 | 0.0739 |
BMP2 | 0 | 0.0290 | 0.0375 | 0 | 0 | 0 | 0 | 0.0426 | 0.0784 | 0.8126 |
recognition rate | 0.9501 |
Training()—Testing() | Training()—Testing() | |||||||
---|---|---|---|---|---|---|---|---|
2S1 | BRDM2 | ZSU23/4 | T72(SN_A64) | 2S1 | BRDM2 | ZSU23/4 | T72(SN_A64) | |
2S1 | 0.9549 | 0 | 0.0243 | 0.0208 | 0.7723 | 0.1089 | 0.0264 | 0.0924 |
BRDM2 | 0.0105 | 0.9895 | 0 | 0 | 0.0363 | 0.9571 | 0 | 0.0066 |
ZSU23/4 | 0 | 0 | 0.8993 | 0.1007 | 0.0099 | 0.0759 | 0.5578 | 0.3564 |
T72(SN_A64) | 0.0625 | 0 | 0.1285 | 0.8090 | 0.1815 | 0.1056 | 0.0957 | 0.6172 |
recognition rate | 0.9132 | 0.7261 |
Training()—Testing() | Training()—Testing() | |||||||
---|---|---|---|---|---|---|---|---|
2S1 | BRDM2 | ZSU23/4 | T72(SN_A64) | 2S1 | BRDM2 | ZSU23/4 | T72(SN_A64) | |
2S1 | 0.9653 | 0 | 0.0174 | 0.0174 | 0.8152 | 0.0528 | 0.0759 | 0.0561 |
BRDM2 | 0.0732 | 0.8955 | 0 | 0.0314 | 0.0627 | 0.8383 | 0.0066 | 0.0924 |
ZSU23/4 | 0.0139 | 0 | 0.9583 | 0.0278 | 0.0858 | 0.0594 | 0.6568 | 0.1980 |
T72(SN_A64) | 0.0417 | 0 | 0.0764 | 0.8819 | 0.0891 | 0.0627 | 0.2640 | 0.5842 |
recognition rate | 0.9253 | 0.7236 |
Training()—Testing() | Training()—Testing() | |||||||
---|---|---|---|---|---|---|---|---|
2S1 | BRDM2 | ZSU23/4 | T72(SN_A64) | 2S1 | BRDM2 | ZSU23/4 | T72(SN_A64) | |
2S1 | 0.9514 | 0 | 0.0139 | 0.0347 | 0.8647 | 0.0726 | 0.0297 | 0.0330 |
BRDM2 | 0.0488 | 0.9408 | 0.0035 | 0.0070 | 0.0990 | 0.8845 | 0.0033 | 0.0132 |
ZSU23/4 | 0.0035 | 0.0139 | 0.9444 | 0.0382 | 0.0627 | 0.0891 | 0.6007 | 0.2475 |
T72(SN_A64) | 0.0451 | 0 | 0.0694 | 0.8854 | 0.0924 | 0.0660 | 0.2706 | 0.5710 |
recognition rate | 0.9305 | 0.7302 |
Training()—Testing() | Training()—Testing() | |||||||
---|---|---|---|---|---|---|---|---|
2S1 | BRDM2 | ZSU23/4 | T72(SN_A64) | 2S1 | BRDM2 | ZSU23/4 | T72(SN_A64) | |
2S1 | 0.9549 | 0.0104 | 0.0139 | 0.0208 | 0.7360 | 0.1221 | 0.0528 | 0.0891 |
BRDM2 | 0.0174 | 0.9756 | 0 | 0.0070 | 0.0462 | 0.8581 | 0.0066 | 0.0891 |
ZSU23/4 | 0 | 0.0069 | 0.9375 | 0.0556 | 0.0627 | 0.0264 | 0.6865 | 0.2244 |
T72(SN_A64) | 0.0208 | 0.0035 | 0.1389 | 0.8368 | 0.1320 | 0.0660 | 0.1320 | 0.6700 |
recognition rate | 0.9262 | 0.7376 |
Method | Computing Time of SAR-HOG per Image | Training Time | Testing Time
---|---|---|---
SVM | | 5.7660 | 11.2620
kNN | | 0.6180 | 1.0440
SRC | 0.9660 | 0.0362 | 1.1820
LCK-SVD | | 113.76 | 0.8160
proposed | | 1404.72 | 0.8280
© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
Song, S.; Xu, B.; Yang, J. SAR Target Recognition via Supervised Discriminative Dictionary Learning and Sparse Representation of the SAR-HOG Feature. Remote Sens. 2016, 8, 683. https://doi.org/10.3390/rs8080683