Double Weight-Based SAR and Infrared Sensor Fusion for Automatic Ground Target Recognition with Deep Learning
"> Figure 1
<p>SAR and IR sensor fusion-based target recognition schemes: (<b>a</b>) pixel-level fusion; (<b>b</b>) feature-level fusion; and (<b>c</b>) decision-level fusion.</p> "> Figure 2
<p>Necessity for decision-level SAR and IR sensor fusion: (<b>a</b>) operational concept of automatic target recognition; (<b>b</b>) pros and cons for each fusion level.</p> "> Figure 3
<p>Proposed DW-SIF system for ground target recognition.</p> "> Figure 4
<p>Simultaneous SAR and IR image generation flow using the OKTAL simulation environment (OKTAL-SE).</p> "> Figure 5
<p>Examples of SAR and IR target generation using OKTAL-SE.</p> "> Figure 6
<p>Details of 14-layered deep convolutional network: (<b>a</b>) 14 layered SAR-CNN; (<b>b</b>) 14 layered IR-CNN.</p> "> Figure 7
<p>Estimated SAR/IR sensor offline weights.</p> "> Figure 8
<p>Offline confidence and online confidence-based SAR and IR fusion flow.</p> "> Figure 9
<p>Effect of the offline weights (<math display="inline"> <semantics> <mrow> <msub> <mi mathvariant="bold-italic">α</mi> <mrow> <mi>S</mi> <mi>A</mi> <mi>R</mi> </mrow> </msub> <mo>,</mo> <msub> <mi mathvariant="bold-italic">α</mi> <mrow> <mi>I</mi> <mi>R</mi> </mrow> </msub> </mrow> </semantics> </math>) in SAR-IR sensor fusion.</p> "> Figure 10
<p>Effect of the online weights (<math display="inline"> <semantics> <mrow> <msub> <mi>β</mi> <mrow> <mi>S</mi> <mi>A</mi> <mi>R</mi> </mrow> </msub> <mo>,</mo> <msub> <mi>β</mi> <mrow> <mi>I</mi> <mi>R</mi> </mrow> </msub> </mrow> </semantics> </math>) in SAR-IR sensor fusion (linear fusion scheme).</p> "> Figure 11
<p>Composition of the SAR-IR target database: (<b>a</b>) 16 SAR targets (<b>top</b>) and 72 aspect views of T72 at the depression angle <math display="inline"> <semantics> <msup> <mn>20</mn> <mo>∘</mo> </msup> </semantics> </math> (<b>bottom</b>); (<b>b</b>) corresponding 16 IR targets (<b>top</b>) and 72 aspect views of T72 at the depression angle <math display="inline"> <semantics> <msup> <mn>75</mn> <mo>∘</mo> </msup> </semantics> </math> (<b>bottom</b>).</p> "> Figure 12
<p>Performance comparison between IR-CNN, transfer learning, and HOG-SVM on the IR database.</p> "> Figure 13
<p>Training parameter analysis: (<b>a</b>) recognition rate vs. training DB size; (<b>b</b>) training time vs. training DB size.</p> "> Figure 14
<p>Training results: (<b>a</b>) SAR-CNN; (<b>b</b>) IR-CNN.</p> "> Figure 15
<p>Analysis of SAR noise: (<b>a</b>) K-distribution of a MSTAR SAR image; (<b>b</b>) K-distribution of synthesized SAR image.</p> "> Figure 16
<p>Performance evaluation results for noise variations: (<b>a</b>) recognition rate vs signal-to-noise rate [PSNR]; (<b>b</b>) test examples of the SAR and IR images at different PSNRs.</p> "> Figure 17
<p>Confusion matrices at PSNR 16.3 dB: (<b>a</b>) SC; (<b>b</b>) IC; (<b>c</b>) <math display="inline"> <semantics> <mi>α</mi> </semantics> </math>-sum; (<b>d</b>) <math display="inline"> <semantics> <mi>β</mi> </semantics> </math>-sum; (<b>e</b>) <math display="inline"> <semantics> <mrow> <mi>α</mi> <mi>β</mi> </mrow> </semantics> </math>-sum (proposed linear fusion scheme I); (<b>f</b>) BF [<a href="#B57-remotesensing-10-00072" class="html-bibr">57</a>]; (<b>g</b>) NN; and (<b>h</b>) <math display="inline"> <semantics> <mrow> <mi>α</mi> <mi>β</mi> </mrow> </semantics> </math>-NN (proposed nonlinear fusion scheme II).</p> "> Figure 18
<p>Performance evaluation results for blur variation: (<b>a</b>) recognition rate vs blurring level [<math display="inline"> <semantics> <mi>σ</mi> </semantics> </math>]; (<b>b</b>) test examples of SAR and IR images at different blur levels.</p> "> Figure 19
<p>Confusion matrices at a blur level <math display="inline"> <semantics> <mrow> <mi>σ</mi> <mo>=</mo> <mn>2.7</mn> </mrow> </semantics> </math>: (<b>a</b>) SC; (<b>b</b>) IC; (<b>c</b>) <math display="inline"> <semantics> <mi>α</mi> </semantics> </math>-sum; (<b>d</b>) <math display="inline"> <semantics> <mi>β</mi> </semantics> </math>-sum; (<b>e</b>) <math display="inline"> <semantics> <mrow> <mi>α</mi> <mi>β</mi> </mrow> </semantics> </math>-sum (the proposed linear fusion scheme I); (<b>f</b>) BF [<a href="#B57-remotesensing-10-00072" class="html-bibr">57</a>]; (<b>g</b>) NN; and (<b>h</b>) <math display="inline"> <semantics> <mrow> <mi>α</mi> <mi>β</mi> </mrow> </semantics> </math>-NN (the proposed nonlinear fusion scheme II).</p> "> Figure 20
<p>Performance evaluation results for the rotational variation: (<b>a</b>) recognition rate vs. rotation angle [<math display="inline"> <semantics> <msup> <mrow/> <mo>∘</mo> </msup> </semantics> </math>]; (<b>b</b>) test examples of SAR and IR images at different rotation levels.</p> "> Figure 21
<p>Confusion matrices at the rotation angle <math display="inline"> <semantics> <mrow> <mi>k</mi> <mo>=</mo> <msup> <mn>3.5</mn> <mo>∘</mo> </msup> </mrow> </semantics> </math>: (<b>a</b>) SC; (<b>b</b>) IC; (<b>c</b>) <math display="inline"> <semantics> <mi>α</mi> </semantics> </math>-sum; (<b>d</b>) <math display="inline"> <semantics> <mi>β</mi> </semantics> </math>-sum; (<b>e</b>) <math display="inline"> <semantics> <mrow> <mi>α</mi> <mi>β</mi> </mrow> </semantics> </math>-sum (proposed linear fusion scheme I); (<b>f</b>) BF [<a href="#B57-remotesensing-10-00072" class="html-bibr">57</a>]; (<b>g</b>) NN; and (<b>h</b>) <math display="inline"> <semantics> <mrow> <mi>α</mi> <mi>β</mi> </mrow> </semantics> </math>-NN (proposed nonlinear fusion scheme II).</p> "> Figure 22
<p>Performance evaluation results for the translational variation: (<b>a</b>) recognition rate vs. the translation level [pixel]; (<b>b</b>) test examples of SAR and IR images at different translation levels.</p> "> Figure 23
<p>Confusion matrices at the translation level of 1.5 pixels: (<b>a</b>) SC; (<b>b</b>) IC; (<b>c</b>) <math display="inline"> <semantics> <mi>α</mi> </semantics> </math>-sum; (<b>d</b>) <math display="inline"> <semantics> <mi>β</mi> </semantics> </math>-sum; (<b>e</b>) <math display="inline"> <semantics> <mrow> <mi>α</mi> <mi>β</mi> </mrow> </semantics> </math>-sum (proposed linear fusion scheme I); (<b>f</b>) BF [<a href="#B57-remotesensing-10-00072" class="html-bibr">57</a>]; (<b>g</b>) NN; and (<b>h</b>) <math display="inline"> <semantics> <mrow> <mi>α</mi> <mi>β</mi> </mrow> </semantics> </math>-NN (proposed nonlinear fusion scheme II).</p> ">
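Figure 6 specifies the two branch classifiers only graphically, so the exact layer configuration is not recoverable from this page. The sketch below is a minimal, non-authoritative reading of a 14-stage branch in Keras: the filter counts, kernel sizes, and dropout rate are placeholder assumptions, and only the 64 × 64 SAR input, the 96 × 96 IR input, and the 16-class output are taken from the database tables later on this page.

```python
import tensorflow as tf

def make_branch_cnn(input_shape=(64, 64, 1), n_classes=16):
    """Illustrative 14-stage CNN branch (SAR-CNN or IR-CNN).
    Layer widths are placeholders, not the Figure 6 specification."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 5, padding="same", activation="relu",
                               input_shape=input_shape),                    # 1
        tf.keras.layers.MaxPooling2D(2),                                    # 2
        tf.keras.layers.Conv2D(64, 5, padding="same", activation="relu"),   # 3
        tf.keras.layers.MaxPooling2D(2),                                    # 4
        tf.keras.layers.Conv2D(128, 3, padding="same", activation="relu"),  # 5
        tf.keras.layers.MaxPooling2D(2),                                    # 6
        tf.keras.layers.Conv2D(128, 3, padding="same", activation="relu"),  # 7
        tf.keras.layers.MaxPooling2D(2),                                    # 8
        tf.keras.layers.Conv2D(256, 3, padding="same", activation="relu"),  # 9
        tf.keras.layers.Flatten(),                                          # 10
        tf.keras.layers.Dense(512, activation="relu"),                      # 11
        tf.keras.layers.Dropout(0.5),                                       # 12
        tf.keras.layers.Dense(n_classes),                                   # 13
        tf.keras.layers.Softmax(),                                          # 14
    ])

sar_cnn = make_branch_cnn((64, 64, 1), 16)  # SAR chips, per the database table
ir_cnn = make_branch_cnn((96, 96, 1), 16)   # IR chips, per the database table
```

Each branch outputs a 16-class posterior; these two posteriors are what the double-weight fusion stage consumes.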
Abstract
1. Introduction
2. Background of SAR-IR Fusion Level and Fusion Method
3. Proposed Double Weight-Based SAR-IR Fusion for ATR
3.1. SAR-IR Database Construction
3.2. 14-Layered Deep Convolutional Neural Network Classifier
3.3. Proposed Double Weight-Based SAR-IR Fusion Method
4. Preparation of DB and Classifiers
4.1. Composition of the SAR-IR Target Database
4.2. Comparison of Base Classifiers
4.3. Analysis of CNN Training
5. Experimental Results
6. Conclusions
Acknowledgments
Author Contributions
Conflicts of Interest
References
1. Bhanu, B. Automatic target recognition: State of the art survey. IEEE Trans. Aerosp. Electron. Syst. 1986, 22, 364–379.
2. Ratches, J.A. Review of current aided/automatic target acquisition technology for military target acquisition tasks. Opt. Eng. 2011, 50, 072001.
3. Kim, S. High-speed incoming infrared target detection by fusion of spatial and temporal detectors. Sensors 2015, 15, 7267–7293.
4. Yang, D.; Li, X.; Xiao, S. Ground targets detection and tracking based on integrated information in infrared images. In Proceedings of the IEEE 10th International Conference on Signal Processing (ICSP), Beijing, China, 24–28 October 2010; pp. 910–915.
5. Khan, J.F.; Alam, M.S. Target detection in cluttered forward-looking infrared imagery. Opt. Eng. 2005, 44, 076404.
6. Ye, W.; Paulson, C.; Wu, D. Target detection for very high-frequency synthetic aperture radar ground surveillance. IET Comput. Vis. 2012, 6, 101–110.
7. Kaplan, L. Improved SAR target detection via extended fractal feature. IEEE Trans. Aerosp. Electron. Syst. 2001, 37, 436–451.
8. Cooke, T.; Redding, N.; Schroeder, J.; Zhang, J. Comparison of selected features for target detection in synthetic aperture radar imagery. Digit. Signal Process. 2000, 10, 286–296.
9. Kim, S.; Song, W.J.; Kim, S.H. Robust ground target detection by SAR and IR sensor fusion using AdaBoost-based feature selection. Sensors 2016, 16, 1117.
10. Wegner, J.D.; Inglada, J.; Tison, C. Automatic fusion of SAR and optical imagery based on line features. In Proceedings of the 7th European Conference on Synthetic Aperture Radar, Friedrichshafen, Germany, 2–5 June 2008; pp. 1–4.
11. Jaeger, U.; Maier-Herburger, H.; Stahl, C.; Heinze, N.; Willersinn, D. IR and SAR automatic target detection benchmarks. Proc. SPIE 2004, 5426, 400–408.
12. Lamdan, Y.; Wolfson, H.J. Geometric hashing: A general and efficient model-based recognition scheme. In Proceedings of the 2nd International Conference on Computer Vision, Tampa, FL, USA, 5–8 December 1988; pp. 238–249.
13. Bharadwaj, P.; Carin, L. Infrared-image classification using hidden Markov trees. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 1394–1398.
14. Quan, L.; Jianjun, Z. Wavelet-based feature extraction and recognition of infrared target. In Proceedings of the First International Conference on Innovative Computing, Information and Control, Beijing, China, 30 August–1 September 2006; pp. 1–4.
15. Gray, G.J.; Aouf, N.; Richardson, M.A.; Butters, B.; Walmsley, R.; Nicholls, E. Feature-based target recognition in infrared images for future unmanned aerial vehicles. J. Battlefield Technol. 2011, 14, 27–36.
16. Zhan, T.; Sang, N. Forward-looking infrared target recognition based on histograms of oriented gradients. Proc. SPIE 2011, 8003, 80030S.
17. Sadjadi, F.A.; Mahalanobis, A. Robust automatic target recognition in FLIR imagery. Proc. SPIE 2012, 8391, 839105.
18. Zhang, F.; Liu, S.; Wang, D.; Guan, W. Aircraft recognition in infrared image using wavelet moment invariants. Image Vis. Comput. 2009, 27, 313–318.
19. Li, L.; Ren, Y. Infrared target recognition based on combined feature and improved AdaBoost algorithm. Adv. Intell. Soft Comput. 2011, 105, 707–712.
20. Zhou, J.; Shi, Z.; Cheng, X.; Fu, Q. Automatic target recognition of SAR images based on global scattering center model. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3713–3729.
21. Freeman, A. The effects of noise on polarimetric SAR data. In Proceedings of the IEEE Geoscience and Remote Sensing Symposium (IGARSS), 1993; pp. 799–802.
22. Kreithen, D.E.; Halversen, S.D.; Owirka, G.J. Discriminating targets from the clutter. Linc. Lab. J. 1993, 6, 11–24.
23. Han, P.; Wu, J.; Wu, R. SAR target feature extraction and recognition based on 2D-DLPP. Phys. Procedia 2012, 24, 1431–1436.
24. Vasuki, P.; Roomi, S.M.M. Automatic target recognition for SAR images by discrete wavelet features. Eur. J. Sci. Res. 2012, 80, 133–139.
25. Haddadi, A.; Sahebi, M.R.; Mansourian, A. Polarimetric SAR feature selection using a genetic algorithm. Can. J. Remote Sens. 2011, 37, 27–36.
26. Verbout, S.M.; Irving, W.W.; Hanes, A.S. Improving a template-based classifier in a SAR automatic target recognition system by using 3-D target information. Linc. Lab. J. 1993, 5, 53–76.
27. Zhang, H.; Nasrabadi, N.M.; Zhang, Y.; Huang, T.S. Multi-view automatic target recognition using joint sparse representation. IEEE Trans. Aerosp. Electron. Syst. 2012, 48, 2481–2497.
28. Potter, L.C.; Moses, R.L. Attributed scattering centers for SAR ATR. IEEE Trans. Image Process. 1997, 6, 79–91.
29. Kahler, B.; Blasch, E. Predicted radar/optical feature fusion gains for target identification. In Proceedings of the IEEE 2010 National Aerospace and Electronics Conference (NAECON), Fairborn, OH, USA, 14–16 July 2010; pp. 405–412.
30. Stephan, L.; Childs, M.; Pujara, N. Portable, scalable architecture for model-based FLIR ATR and SAR/FLIR fusion. Proc. SPIE 1999, 3718, 79–87.
31. Childs, M.B.; Carlson, K.M.; Pujara, N. Transition from lab to flight demo for model-based FLIR ATR and SAR-FLIR fusion. Proc. SPIE 2000, 4050, 294–305.
32. Latger, J.; Cathala, T.; Douchin, N.; Le Goff, A. Simulation of active and passive infrared images using the SE-WORKBENCH. Proc. SPIE 2007, 6543, 654302.
33. Schwenger, F.; Grossmann, P.; Malaplate, A. Validation of the thermal code of RadTherm-IR, IR-Workbench, and F-TOM. Proc. SPIE 2009, 7300, 73000J.
34. Le Goff, A.; Cathala, T.; Latger, J. New impressive capabilities of SE-Workbench for EO/IR real-time rendering of animated scenarios including flares. Proc. SPIE 2015, 9653, 965307.
35. Beaven, S.G.; Yu, X.; Hoff, L.E.; Chen, A.M.; Winter, E.M. Analysis of hyperspectral infrared and low frequency SAR data for target classification. Proc. SPIE 1996, 2759, 121–130.
36. Brooks, R.R.; Iyengar, S.S. Multi-Sensor Fusion: Fundamentals and Applications with Software; Prentice Hall: Upper Saddle River, NJ, USA, 1998.
37. Klein, L.A. Sensor and Data Fusion: A Tool for Information Assessment and Decision Making, 2nd ed.; SPIE Press: Bellingham, WA, USA, 1998.
38. Bai, Q.; Jin, C. Image fusion and recognition based on compressed sensing theory. Int. J. Smart Sens. Intell. Syst. 2015, 8, 159–180.
39. Li, H.; Zhou, Y.T.; Chellappa, R. SAR/IR sensor image fusion and real-time implementation. In Proceedings of ASILOMAR-29, Pacific Grove, CA, USA, 30 October–1 November 1996; pp. 1121–1125.
40. Zhou, Y. Multi-sensor image fusion. In Proceedings of the International Conference on Image Processing, Austin, TX, USA, 13–16 November 1994; Volume I, pp. 193–197.
41. Novak, L.M.; Owirka, G.J.; Brower, W.S.; Weaver, A.L. The automatic target recognition system in SAIP. Linc. Lab. J. 1997, 10, 187–202.
42. Kim, J.C.; Lee, Y.R.; Kwak, S.H. The fusion of SAR images and optical images based on the use of wavelet transform: To improve classification accuracy. Proc. SPIE 2005, 5980, 59800K.
43. Amarsaikhan, D.; Blotevogel, H.H.; van Genderen, J.L.; Ganzorig, M.; Gantuya, R.; Nergui, B. Fusing high-resolution SAR and optical imagery for improved urban land cover study and classification. Int. J. Image Data Fusion 2010, 1, 83–97.
44. Lehureau, G.; Campedel, M.; Tupin, F.; Tison, C.; Oller, G. Combining SAR and optical features in a SVM classifier for man-made structures detection. In Proceedings of the IEEE Geoscience and Remote Sensing Symposium (IGARSS), Cape Town, South Africa, 12–17 July 2009; Volume III, pp. 873–876.
45. Lei, L.; Su, Y.; Jiang, Y. Feature-based classification fusion of vehicles in high-resolution SAR and optical imagery. Proc. SPIE 2005, 6043, 604323.
46. Yitayew, T.G.; Brekke, C.; Doulgeris, A.P. Multisensor data fusion and feature extraction for forestry applications. In Proceedings of the IEEE Geoscience and Remote Sensing Symposium (IGARSS), Munich, Germany, 22–27 July 2012; pp. 4982–4985.
47. Camps-Valls, G.; Matasci, G.; Kanevski, M. Learning relevant image features with multiple-kernel classification. IEEE Trans. Geosci. Remote Sens. 2010, 48, 3780–3791.
48. Yang, J.; Lu, Z.G.; Guo, Y.K. Target recognition and tracking based on data fusion of radar and infrared image sensors. In Proceedings of the 2nd International Conference on Information Fusion (FUSION'99), Sunnyvale, CA, USA, 6–8 July 1999; pp. 1–6.
49. Foucher, S.; Germain, M.; Boucher, J.M.; Benie, G.B. Multisource classification using ICM and Dempster-Shafer theory. IEEE Trans. Instrum. Meas. 2002, 51, 277–281.
50. Khoshelham, K.; Nedkov, S.; Nardinocchi, C. A comparison of Bayesian and evidence-based fusion methods for automated building detection in aerial data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2009, 37, 1183–1188.
51. Waske, B.; Benediktsson, J.A. Fusion of support vector machines for classification of multisensor data. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3858–3866.
52. Kasapoglu, N.G.; Eltoft, T. Decision fusion of classifiers for multifrequency PolSAR and optical data classification. In Proceedings of the 2013 6th International Conference on Recent Advances in Space Technologies (RAST), Istanbul, Turkey, 12–14 June 2013; pp. 411–416.
53. Fard, T.A.; Hasanlou, M.; Arefi, H. Classifier fusion of high-resolution optical and synthetic aperture radar (SAR) satellite imagery for classification in urban area. In Proceedings of the 1st ISPRS International Conference on Geospatial Information Research, Tehran, Iran, 15–17 November 2014; Volume XL-2/W3, pp. 25–29.
54. Ma, L.; Liu, X.; Song, L.; Zhou, C.; Zhao, X.; Zhao, Y. A new classifier fusion method based on historical and on-line classification reliability for recognizing common CT imaging signs of lung diseases. Comput. Med. Imaging Graph. 2015, 40, 39–48.
55. Chureesampant, K.; Susaki, J. Multi-temporal SAR and optical data fusion with texture measures for land cover classification based on the Bayesian theory. ISPRS SC Newsl. 2008, 5, 1183–1188.
56. Soda, P.; Iannello, G. Aggregation of classifiers for staining pattern recognition in antinuclear autoantibodies analysis. IEEE Trans. Inf. Technol. Biomed. 2009, 13, 322–329.
57. Gupta, L.; Chung, B.; Srinath, M.D.; Molfese, D.L.; Kook, H. Multichannel fusion models for the parametric classification of differential brain activity. IEEE Trans. Biomed. Eng. 2005, 52, 1869–1881.
58. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1–9.
59. Snoek, J.; Rippel, O.; Swersky, K.; Kiros, R.; Satish, N.; Sundaram, N.; Patwary, M.M.A.; Prabhat, M.; Adams, R.P. Scalable Bayesian optimization using deep neural networks. In Proceedings of the International Conference on Machine Learning, Lille, France, 7–9 July 2015; Volume 37.
60. Zhao, F.; Liu, Y.; Huo, K. Radar target recognition based on stacked denoising sparse autoencoder. J. Radars 2017, 6, 149–156.
61. Gottimukkula, V.C.R. Object Classification Using Stacked Autoencoder and Convolutional Neural Network. Master's Thesis, North Dakota State University, Fargo, ND, USA, 2016.
62. Wang, X. Deep learning in object recognition, detection, and segmentation. Found. Trends Signal Process. 2016, 8, 217–382.
63. Chen, S.; Wang, H.; Xu, F.; Jin, Y.Q. Target classification using the deep convolutional networks for SAR images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4806–4817.
64. Vedaldi, A.; Lenc, K. MatConvNet: Convolutional neural networks for MATLAB. In Proceedings of the 23rd ACM International Conference on Multimedia, Brisbane, Australia, 26–30 October 2015; pp. 689–692.
65. Protopapadakis, E.; Voulodimos, A.; Doulamis, A.; Doulamis, N.; Dres, D.; Bimpas, M. Stacked autoencoders for outlier detection in over-the-horizon radar signals. Comput. Intell. Neurosci. 2017, 2017.
66. Feng, X.; Haipeng, W.; Yaqiu, J. Deep learning as applied in SAR target recognition and terrain classification. J. Radars 2017, 6, 136–148.
67. Akcay, S.; Kundegorski, M.E.; Devereux, M.; Breckon, T.P. Transfer learning using convolutional neural networks for object classification within X-ray baggage security imagery. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 1057–1061.
68. Huang, Z.; Pan, Z.; Lei, B. Transfer learning with deep convolutional neural network for SAR target classification with limited labeled data. Remote Sens. 2017, 9, 907.
Per-target offline weights and normalized values (the weight columns sum to one; the normalized rows each sum to one):

| ID | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| α_SAR | 0.3094 | 0.3372 | 0.4274 | 0.3650 | 0.4974 | 0.3962 | 0.4929 | 0.5194 | 0.5692 | 0.4615 | 0.4991 | 0.4461 | 0.9006 | 0.4022 | 0.6446 | 0.4021 |
| α_IR | 0.6906 | 0.6628 | 0.5726 | 0.6350 | 0.5026 | 0.6038 | 0.5071 | 0.4806 | 0.4308 | 0.5385 | 0.5009 | 0.5539 | 0.0994 | 0.5978 | 0.3554 | 0.5979 |
| SAR (normalized) | 0.0645 | 0.0649 | 0.0679 | 0.0629 | 0.0501 | 0.0692 | 0.0512 | 0.0669 | 0.0718 | 0.0592 | 0.0612 | 0.0615 | 0.0629 | 0.0605 | 0.0636 | 0.0617 |
| IR (normalized) | 0.0722 | 0.0758 | 0.0911 | 0.0622 | 0.0602 | 0.0620 | 0.0620 | 0.0819 | 0.0903 | 0.0322 | 0.0661 | 0.0563 | 0.0553 | 0.0289 | 0.0528 | 0.0506 |
| Mean (normalized) | 0.0675 | 0.0696 | 0.0783 | 0.0621 | 0.0546 | 0.0655 | 0.0560 | 0.0736 | 0.0801 | 0.0470 | 0.0634 | 0.0590 | 0.0618 | 0.0456 | 0.0598 | 0.0562 |

Sensor-level summary:

| | SAR | IR |
|---|---|---|
| Entropy | 2.7686 | 2.7335 |
| Offline weight α | 0.4968 | 0.5032 |
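The sensor-level summary is numerically consistent with inverse-entropy weighting: 2.7335/(2.7686 + 2.7335) ≈ 0.4968 and 2.7686/(2.7686 + 2.7335) ≈ 0.5032. The sketch below is a minimal reading of that relationship; deriving the entropies from mean row entropies of each sensor's validation confusion matrix is my assumption, not the paper's verbatim formula.

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy (bits) of a discrete distribution."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def offline_weights(conf_sar, conf_ir):
    """Offline weights from row-stochastic validation confusion matrices:
    the sensor whose matrix has LOWER mean row entropy (i.e., is more
    decisive) receives the larger weight. The inverse-entropy rule is an
    assumption consistent with the summary table above."""
    h_sar = np.mean([entropy_bits(row) for row in conf_sar])  # e.g., 2.7686
    h_ir = np.mean([entropy_bits(row) for row in conf_ir])    # e.g., 2.7335
    total = h_sar + h_ir
    return h_ir / total, h_sar / total  # -> (0.4968, 0.5032) for the values above
```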
| Mode | Type | SAR | IR |
|---|---|---|---|
| Train | Depression angle | 10°, 15°, 20°, 25° | 65°, 70°, 75°, 80° |
| Train | Aspect angle | 5° | 5° |
| Train | Image size | 64 × 64 | 96 × 96 |
| Train | Total no. of DB | 4608 | 4608 |
| Evaluation | Composite | 4608 @ PSNR = 34.2 dB, σ = 1, rot. jitter: ±1°, trans. jitter: ±1 pixel | 4608 @ PSNR = 17.3 dB, σ = 1, rot. jitter: ±2°, trans. jitter: ±2 pixel |
| Test | Noise | PSNR = 14–18.5 dB | PSNR = 14–18.5 dB |
| Test | Blur | σ = 1.0–3.4 | σ = 1.0–3.4 |
| Test | Rotation jitter | uniform: ±1°–±5° | uniform: ±1°–±5° |
| Test | Translation jitter | uniform: ±0–±3 pixel | uniform: ±0–±3 pixel |
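The noise, blur, and jitter conditions above are straightforward to reproduce. Below is a hedged sketch of one way to generate a composite test sample: the parameter ranges come from the table, while the composition order and the use of scipy.ndimage are my assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, rotate, shift

def add_noise_at_psnr(img, psnr_db, peak=255.0):
    """Additive white Gaussian noise scaled to hit a target PSNR.
    PSNR = 10*log10(peak^2 / MSE); clipping shifts the realized
    PSNR slightly from the target."""
    mse = peak**2 / 10**(psnr_db / 10.0)
    noisy = img + np.random.normal(0.0, np.sqrt(mse), img.shape)
    return np.clip(noisy, 0.0, peak)

def perturb(img, psnr_db=16.3, sigma=1.0, rot_deg=1.0, tr_px=1.0):
    """One composite test sample: noise, blur, then rotation and
    translation jitter drawn uniformly within the table's ranges."""
    out = add_noise_at_psnr(img, psnr_db)
    out = gaussian_filter(out, sigma)
    out = rotate(out, np.random.uniform(-rot_deg, rot_deg), reshape=False)
    out = shift(out, np.random.uniform(-tr_px, tr_px, size=2))
    return out
```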
| Scheme | Method | Noise Test [%] | Blur Test [%] | Rot. Test [%] | Trans. Test [%] |
|---|---|---|---|---|---|
| Linear | SAR-CNN (SC) | 57.81 | 51.43 | 67.14 | 61.32 |
| Linear | IR-CNN (IC) | 75.15 | 69.11 | 73.41 | 74.95 |
| Linear | Fusion: α-sum | 82.20 | 78.52 | 85.13 | 84.22 |
| Linear | Fusion: β-sum | 79.43 | 78.67 | 84.04 | 83.76 |
| Linear | Fusion: αβ-sum (proposed scheme I) | 88.89 | 89.49 | 91.08 | 91.36 |
| Linear | Bayesian fusion (BF) [57] | 81.92 | 78.71 | 84.07 | 83.76 |
| Nonlinear | Fusion: NN | 96.70 | 92.31 | 97.61 | 98.11 |
| Nonlinear | Fusion: αβ-NN (proposed scheme II) | 97.83 | 93.35 | 97.80 | 98.29 |
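Reading the table: α-sum uses only the offline weights, β-sum only the online weights, and the proposed αβ-sum applies both to the SAR-CNN and IR-CNN posteriors before summing. The sketch below is a minimal reading of linear scheme I; the multiplicative α·β form and the top-1-confidence online weight are assumptions about the exact formulas, not the paper's verbatim definitions.

```python
import numpy as np

def online_weights(p_sar, p_ir):
    """Hypothetical online weights from the current top-1 confidences
    of the two CNN posteriors."""
    c_sar, c_ir = p_sar.max(), p_ir.max()
    total = c_sar + c_ir
    return c_sar / total, c_ir / total

def alpha_beta_sum(p_sar, p_ir, a_sar=0.4968, a_ir=0.5032):
    """Linear fusion scheme I (alpha-beta-sum): a double-weighted sum of
    the SAR-CNN and IR-CNN class posteriors; the defaults are the
    sensor-level offline weights from the table above."""
    b_sar, b_ir = online_weights(p_sar, p_ir)
    fused = a_sar * b_sar * p_sar + a_ir * b_ir * p_ir
    return int(np.argmax(fused))  # predicted class index
```

For nonlinear scheme II (αβ-NN), the same double-weighted posteriors would instead be concatenated and classified by a small fully connected network rather than summed.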
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Kim, S.; Song, W.-J.; Kim, S.-H. Double Weight-Based SAR and Infrared Sensor Fusion for Automatic Ground Target Recognition with Deep Learning. Remote Sens. 2018, 10, 72. https://doi.org/10.3390/rs10010072