Feature Channel Expansion and Background Suppression as the Enhancement for Infrared Pedestrian Detection
Figure 1. Introduction of the proposed approach: (a) process of our model; (b) structure of the model.
Figure 2. Channels of the proposed feature expansion algorithm: intensity channel, gradient magnitude channel, and multi-scale block local binary pattern (MB-LBP) texture channel.
Figure 3. Flowchart of feature channel expansion.
Figure 4. Basic LBP operator.
Figure 5. The 9 × 9 MB-LBP operator.
Figure 6. Images in the process of saliency fusion: (a) the original image; (b) heat map of the GBVS saliency map; (c) heat map of the Itti-Koch saliency map; (d) intensity distribution of the GBVS saliency map; (e) intensity distribution of the Itti-Koch saliency map; (f) fusion of the saliency maps.
Figure 7. Examples from the CUSTFIR pedestrian dataset.
Figure 8. Example of the model's performance under the best conditions: (a) original infrared (IR) image; (b) background suppression result; (c) channel expansion result.
Figure 9. Example of the model's performance at the edge of the image: (a) original IR image; (b) background suppression result; (c) channel expansion result.
Figure 10. Example of the model's performance in a low-contrast scene: (a) original IR image; (b) background suppression result; (c) channel expansion result.
Figure 11. Example of the model's performance near the edge of the image: (a) original IR image; (b) background suppression result; (c) channel expansion result.
Figure 12. Example of the model's performance outdoors on a sunny day: (a) original IR image; (b) background suppression result; (c) channel expansion result.
Figure 13. Examples of the model's performance on the CVC-14 dataset: (a) original IR image; (b) background suppression result on feature maps; (c) channel expansion result.
Figure 14. Precision–Recall (PR) curves of pedestrian detection on the LSI dataset for our approach (background suppression and channel expansion), the model with channel expansion only, the model with Faster R-CNN only, and other baselines.
Figure 15. PR curves of pedestrian detection on the CVC-14 infrared pedestrian dataset for our approach, the model with channel expansion only, the model with Faster R-CNN only, and other baselines.
Figure 16. PR curves of pedestrian detection on the CUST infrared pedestrian dataset for our approach, the model with channel expansion only, the model with Faster R-CNN only, and other baselines.
Figure 17. PR curves of pedestrian detection on the SCUT dataset for our approach (background suppression and channel expansion), the model with channel expansion only, the model with Faster R-CNN only, and other baselines.
Figure 18. mAP versus epoch curves of pedestrian detection on the LSIFIR dataset (a) and the CVC-14 dataset (b) for our approach (background suppression and channel expansion), the model with channel expansion only, and the model with Faster R-CNN only.
Abstract
1. Introduction
The main contributions of this work are summarized as follows:
- To the best of our knowledge, this is the first work on IR pedestrian detection that uses an IR image enhancement unit to extract hand-crafted features as the input of the CNN, together with a feature optimization network that outputs a further optimized feature map.
- Carefully designed experiments support the assumption that appropriate expert-driven features can assist the extraction of CNN features and accelerate the training of the model.
- The experiments show that the proposed model improves detection performance over the baseline methods and the original region proposal network.
- A new saliency fusion method is designed to suppress the background. The experiments show that this background suppression improves pedestrian detection performance by reducing false positives.
2. Related Work
2.1. Features in Expert-Driven Approaches
2.2. Visual Saliency Methods
2.3. Regions with CNN Features Method
3. Background Suppression and Channel Expansion Enhancement Approach
3.1. Feature Channel Expansion
3.1.1. Image Gradient Channel Computation
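As a concrete illustration of the gradient channel, the sketch below computes a per-pixel gradient magnitude map. The Sobel kernels and the `gradient_magnitude_channel` helper are assumptions for illustration, not necessarily the exact operator used in the paper.

```python
# A minimal sketch of a gradient magnitude channel, assuming a Sobel-style
# operator; the paper's exact gradient kernels may differ.
import numpy as np
from scipy.ndimage import sobel

def gradient_magnitude_channel(image):
    """Return the per-pixel gradient magnitude of a grayscale IR image."""
    img = image.astype(np.float64)
    gx = sobel(img, axis=1)   # horizontal derivative
    gy = sobel(img, axis=0)   # vertical derivative
    return np.hypot(gx, gy)   # magnitude sqrt(gx^2 + gy^2)
```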
3.1.2. Texture Channel of MB-LBP Computation
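The sketch below illustrates one common way to compute an MB-LBP texture channel consistent with the 9 × 9 operator of Figure 5 (3 × 3 blocks of 3 × 3 pixels each). The block-mean formulation follows the standard MB-LBP definition; the wrap-around border handling and the `mb_lbp_channel` helper are illustrative assumptions.

```python
# A minimal sketch of a 9x9 MB-LBP texture channel (3x3 blocks of 3x3 pixels,
# cf. Figure 5). Border handling uses wrap-around for brevity; a real
# implementation would likely pad or crop instead.
import numpy as np
from scipy.ndimage import uniform_filter

def mb_lbp_channel(image, block=3):
    """Compute an 8-bit MB-LBP code at every pixel of a grayscale image."""
    avg = uniform_filter(image.astype(np.float64), size=block)  # per-block means
    s = block
    # 8 neighbor-block displacements, clockwise from top-left, one bit each
    offsets = [(-s, -s), (-s, 0), (-s, s), (0, s),
               (s, s), (s, 0), (s, -s), (0, -s)]
    code = np.zeros_like(avg, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        shifted = np.roll(avg, (-dy, -dx), axis=(0, 1))  # shifted[y, x] == avg[y+dy, x+dx]
        code += (shifted >= avg).astype(np.uint8) * np.uint8(2 ** bit)
    return code
```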
3.2. Background Suppression
Algorithm 1 Background Suppression According to the GBVS and Itti-Koch Maps
Input: an original infrared image I0
Output: a background-suppressed image
1: standardized image I ← standardizeImage(I0) // histogram equalization (HE) and size normalization
// Step 1: GBVS map and Itti-Koch map generation
2: GBVS map ← GBVS(I)
3: Itti-Koch map ← IttiKoch(I)
4: normalize the GBVS map and the Itti-Koch map
// Step 2: GBVS map and Itti-Koch map fusion
5: for each pixel do
6:   for each map do
7:     calculate the average saliency in a 5 × 5 neighborhood
8:   end for
9:   take the largest average saliency as the fused saliency value
10:  new pixel value ← k × original pixel value × fused saliency value
11: end for
12: assemble all pixels into the new image
13: return the new image
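The following is a minimal sketch of the fusion step of Algorithm 1 (steps 4 to 13), assuming the GBVS and Itti-Koch saliency maps have already been generated by external implementations; `suppress_background` and the default gain `k` are illustrative stand-ins for the paper's symbols.

```python
# A minimal sketch of Algorithm 1's fusion step, assuming precomputed
# GBVS and Itti-Koch saliency maps of the same shape as the image.
import numpy as np
from scipy.ndimage import uniform_filter

def suppress_background(image, gbvs_map, itti_map, k=1.0):
    """Fuse two saliency maps and reweight the image (Algorithm 1, steps 4-13)."""
    maps = []
    for m in (gbvs_map, itti_map):
        m = m.astype(np.float64)
        m = (m - m.min()) / (m.max() - m.min() + 1e-12)  # step 4: normalization
        maps.append(uniform_filter(m, size=5))           # step 7: 5x5 average saliency
    fused = np.maximum(maps[0], maps[1])                 # step 9: largest average saliency
    return k * image.astype(np.float64) * fused          # step 10: reweight each pixel
```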
4. Experiments
4.1. Introduction of Experimental Datasets and Environment
4.2. Introduction of Quantitative Evaluation
- A true negative is a case where the background area was correctly recognized as a background region.
- A true positive is a case where the human area was correctly recognized as a human region.
- A false negative is a case where the human area was recognized as a background region.
- A false positive is a case where the background area was recognized as a human region.
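These four counts determine the metrics reported in Section 4.4. A minimal sketch of their computation is given below; the IoU-based matching of detections to ground truth that produces the counts is assumed to happen upstream.

```python
# A minimal sketch of the evaluation metrics in Sections 4.2 and 4.4,
# computed from true/false positive/negative counts.
def detection_metrics(tp, fp, fn, tn):
    """Return precision, recall, accuracy, and F-measure from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, accuracy, f_measure
```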
4.3. Visual Comparison
4.4. Quantitative Comparison of Precision, Recall, Accuracy, and F-Measure
4.5. Comprehensive Comparison of mAPs
4.6. The Requirement for the Test Data
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Wang, G.; Liu, Q. Far-Infrared Based Pedestrian Detection for Driver-Assistance Systems Based on Candidate Filters, Gradient-Based Feature and Multi-Frame Approval Matching. Sensors 2015, 15, 32188–32212.
- Liu, Q.; Zhuang, J.; Ma, J. Robust and fast pedestrian detection method for far-infrared automotive driving assistance systems. Infrared Phys. Technol. 2013, 60, 288–299.
- Davis, J.W.; Sharma, V. Background-subtraction in thermal imagery using contour saliency. Int. J. Comput. Vis. 2007, 71, 161–181.
- Tewary, S.; Akula, A.; Ghosh, R.; Kumar, S.; Sardana, H.K. Hybrid multi-resolution detection of moving targets in infrared imagery. Infrared Phys. Technol. 2014, 67, 173–183.
- Olmeda, D.; Premebida, C.; Nunes, U.; Armingol, J.M.; de la Escalera, A. Pedestrian detection in far infrared images. Integr. Comput. Aided Eng. 2013, 20, 347–360.
- Kim, J.H.; Hong, H.G.; Park, K.R. Convolutional Neural Network-Based Human Detection in Nighttime Images Using Visible Light Camera Sensors. Sensors 2017, 17, 1065.
- Zhang, Y.; Shen, C.; Hartley, R.; Huang, X. Effective Pedestrian Detection Using Center-Symmetric Local Binary/Trinary Patterns. arXiv 2010, arXiv:1009.0892. Available online: https://arxiv.org/abs/1009.0892 (accessed on 4 September 2020).
- Van de Sande, K.E.; Uijlings, J.R.; Gevers, T.; Smeulders, A.W. Segmentation as selective search for object recognition. In Proceedings of the IEEE International Conference on Computer Vision (ICCV 2011), Barcelona, Spain, 6–13 November 2011; pp. 1879–1886.
- Ko, B.C.; Kim, D.-Y.; Jung, J.-H.; Nam, J.-Y. Three-level cascade of random forests for rapid human detection. Opt. Eng. 2013, 52.
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 24–27 June 2014.
- Jeon, E.S.; Choi, J.S.; Lee, J.H.; Shin, K.Y.; Kim, Y.G.; Le, T.T.; Park, K.R. Human detection based on the generation of a background image by using a far-infrared light camera. Sensors 2015, 15, 6763–6788.
- Ghiass, R.S.; Arandjelović, O.; Bendada, H.; Maldague, X. Infrared face recognition: A literature review. In Proceedings of the 2013 International Joint Conference on Neural Networks (IJCNN), Dallas, TX, USA, 4–9 August 2013; pp. 1–10.
- Hwang, S.; Park, J.; Kim, N. Multispectral pedestrian detection: Benchmark data set and baseline. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015.
- Ouyang, W.; Wang, X. Joint Deep Learning for Pedestrian Detection. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013.
- Liang, Y.; Huang, H.; Cai, Z.; Hao, Z.; Tan, K.C. Deep infrared pedestrian classification based on automatic image matting. Appl. Soft Comput. 2019, 77, 484–496.
- Besbes, B.; Rogozan, A.; Rus, A.-M.; Bensrhair, A.; Broggi, A. Pedestrian Detection in Far-Infrared Daytime Images Using a Hierarchical Codebook of SURF. Sensors 2015, 15, 8570–8594.
- Kwak, J.; Ko, B.C.; Nam, J.Y. Pedestrian Tracking Using Online Boosted Random Ferns Learning in Far-Infrared Imagery for Safe Driving at Night. IEEE Trans. Intell. Transp. Syst. 2017, 18, 69–81.
- LeCun, Y.; Boser, B.; Denker, J.S.; Henderson, D.; Howard, R.E.; Hubbard, W.; Jackel, L.D. Backpropagation Applied to Handwritten Zip Code Recognition. Neural Comput. 1989, 1, 541–551.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016.
- Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), San Diego, CA, USA, 20–25 June 2005.
- Suard, F.; Rakotomamonjy, A.; Bensrhair, A.; Broggi, A. Pedestrian Detection using Infrared images and Histograms of Oriented Gradients. In Proceedings of the 2006 IEEE Intelligent Vehicles Symposium, Tokyo, Japan, 13–15 June 2006.
- Ojala, T.; Pietikäinen, M.; Mäenpää, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987.
- Heikkilä, M.; Pietikäinen, M.; Schmid, C. Description of interest regions with center-symmetric local binary patterns. In Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP), Madurai, India, 13–16 December 2006.
- Liao, S.; Zhu, X.; Lei, Z.; Zhang, L.; Li, S.Z. Learning Multi-scale Block Local Binary Patterns for Face Recognition. In Proceedings of the International Conference on Advances in Biometrics (ICB), Seoul, Korea, 27–29 August 2007.
- Wu, B.; Nevatia, R. Detection of Multiple, Partially Occluded Humans in a Single Image by Bayesian Combination of Edgelet Part Detectors. In Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV'05), Beijing, China, 17–21 October 2005.
- Mikolajczyk, K.; Schmid, C.; Zisserman, A. Human detection based on a probabilistic assembly of robust part detectors. In Computer Vision—ECCV 2004, Part 1; Pajdla, T., Matas, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2004; pp. 69–82.
- Mohan, A.; Papageorgiou, C.; Poggio, T. Example-based object detection in images by components. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 349–361.
- Viola, P.; Jones, M.; Snow, D. Detecting Pedestrians Using Patterns of Motion and Appearance. In Proceedings of the IEEE International Conference on Computer Vision, Nice, France, 13–16 October 2003.
- Itti, L.; Koch, C.; Niebur, E. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 1254–1259.
- Harel, J.; Koch, C.; Perona, P. Graph-Based Visual Saliency. In Proceedings of the Conference on Advances in Neural Information Processing Systems (NIPS), Vancouver, BC, Canada, 4–7 December 2006. Available online: http://papers.nips.cc/paper/3095-graph-based-visual-saliency.pdf (accessed on 3 September 2020).
- Hou, X.; Harel, J.; Koch, C. Image Signature: Highlighting Sparse Salient Regions. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 194–201.
- Zhao, J.; Chen, Y.; Feng, H.; Xu, Z.; Li, Q. Infrared image enhancement through saliency feature analysis based on multi-scale decomposition. Infrared Phys. Technol. 2014, 62, 86–93.
- Shen, X.; Wu, Y. A Unified Approach to Salient Object Detection via Low Rank Matrix Recovery. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012.
- Zheng, Q.; Yu, S.; You, X. Coarse-to-Fine Salient Object Detection with Low-Rank Matrix Recovery. Neurocomputing 2020, 376, 232–243.
- Li, J.; Yang, J.; Gong, C.; Liu, Q. Saliency fusion via sparse and double low rank decomposition. Pattern Recognit. Lett. 2018, 107, 114–122.
- Zhang, D.; Han, J.; Zhang, Y.; Xu, D. Synthesizing Supervision for Learning Deep Saliency Network without Human Annotation. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 42, 1755–1769.
- Felzenszwalb, P.F.; Huttenlocher, D.P. Efficient graph-based image segmentation. Int. J. Comput. Vis. 2004, 59, 167–181.
- Girshick, R. Fast R-CNN. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015.
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149.
- Hussin, R.; Vijayan, A. Multifeature fusion for robust human detection in thermal infrared imagery. Opt. Eng. 2019, 58, 043101.
- Li, P.; Chen, B.; Ouyang, W.; Wang, D.; Yang, X.; Lu, H. GradNet: Gradient-Guided Network for Visual Object Tracking. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019.
- Hameed, Z.; Wang, C. Edge detection using histogram equalization and multi-filtering process. In Proceedings of the 2011 IEEE International Symposium on Circuits and Systems (ISCAS), Rio de Janeiro, Brazil, 15–18 May 2011; pp. 1077–1080.
- Khellal, A.; Ma, H.; Fei, Q. Pedestrian classification and detection in far infrared images. In Proceedings of the International Conference on Intelligent Robotics and Applications (ICIRA), Portsmouth, UK, 24–27 August 2015.
- Miron, A.D. Multi-Modal, Multi-Domain Pedestrian Detection and Classification: Proposals and Explorations in Visible over Stereovision, FIR and SWIR. Ph.D. Thesis, INSA de Rouen, Saint-Étienne-du-Rouvray, France, 2014.
- Xu, Z.; Zhuang, J.; Liu, Q.; Zhou, J.; Peng, S. Benchmarking a large-scale FIR dataset for on-road pedestrian detection. Infrared Phys. Technol. 2019, 96, 199–208.
- Gonzalez, A.; Fang, Z.; Socarras, Y.; Serrat, J.; Vazquez, D.; Xu, J.; Lopez, A.M. Pedestrian Detection at Day/Night Time with Visible and FIR Cameras: A Comparison. Sensors 2016, 16, 820.
Layer | Number of Filters | Size of Feature Map | Size of Kernel | Stride | Padding |
---|---|---|---|---|---|
Convolution 1 | 64 | 960 × 640 × 64 | 3 × 3 × 64 | 1 | 1 |
ReLU 1 | | 960 × 640 × 64 | | | |
Convolution 2 | 64 | 960 × 640 × 64 | 3 × 3 × 64 | 1 | 1 |
ReLU 2 | | 960 × 640 × 64 | | | |
Max pooling 1 | 1 | 480 × 320 × 64 | 2 × 2 | 2 | 0 |
Convolution 3 | 128 | 480 × 320 × 128 | 3 × 3 × 128 | 1 | 1 |
ReLU 3 | | 480 × 320 × 128 | | | |
Convolution 4 | 128 | 480 × 320 × 128 | 3 × 3 × 128 | 1 | 1 |
ReLU 4 | | 480 × 320 × 128 | | | |
Max pooling 2 | 1 | 240 × 160 × 128 | 2 × 2 | 2 | 0 |
Convolution 5 | 256 | 240 × 160 × 256 | 3 × 3 × 256 | 1 | 1 |
ReLU 5 | | 240 × 160 × 256 | | | |
Convolution 6 | 256 | 240 × 160 × 256 | 3 × 3 × 256 | 1 | 1 |
ReLU 6 | | 240 × 160 × 256 | | | |
Convolution 7 | 256 | 240 × 160 × 256 | 3 × 3 × 256 | 1 | 1 |
ReLU 7 | | 240 × 160 × 256 | | | |
Max pooling 3 | 1 | 120 × 80 × 256 | 2 × 2 | 2 | 0 |
Convolution 8 | 512 | 120 × 80 × 512 | 3 × 3 × 512 | 1 | 1 |
ReLU 8 | | 120 × 80 × 512 | | | |
Convolution 9 | 512 | 120 × 80 × 512 | 3 × 3 × 512 | 1 | 1 |
ReLU 9 | | 120 × 80 × 512 | | | |
Convolution 10 | 512 | 120 × 80 × 512 | 3 × 3 × 512 | 1 | 1 |
ReLU 10 | | 120 × 80 × 512 | | | |
Max pooling 4 | 1 | 60 × 40 × 512 | 2 × 2 | 2 | 0 |
Convolution 11 | 512 | 60 × 40 × 512 | 3 × 3 × 512 | 1 | 1 |
ReLU 11 | | 60 × 40 × 512 | | | |
Convolution 12 | 512 | 60 × 40 × 512 | 3 × 3 × 512 | 1 | 1 |
ReLU 12 | | 60 × 40 × 512 | | | |
Convolution 13 | 512 | 60 × 40 × 512 | 3 × 3 × 512 | 1 | 1 |
ReLU 13 | | 60 × 40 × 512 | | | |
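For reference, below is a minimal PyTorch sketch that reproduces the 13-convolution, 4-pooling stack tabulated above (a VGG-16-style layout). The 3-channel expanded input follows Figure 2 (intensity, gradient, and MB-LBP channels); the `make_backbone` helper and layer grouping are assumptions rather than the authors' released code.

```python
# A minimal PyTorch sketch of the backbone tabulated above; mirrors the
# table's layout (13 convs, 4 max-pools), not an official implementation.
import torch.nn as nn

def make_backbone(in_channels=3):
    """Build the VGG-16-style feature extractor described in the table."""
    cfg = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M',
           512, 512, 512, 'M', 512, 512, 512]
    layers = []
    for v in cfg:
        if v == 'M':  # 2x2 max pooling, stride 2
            layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
        else:         # 3x3 convolution, stride 1, padding 1, followed by ReLU
            layers.append(nn.Conv2d(in_channels, v, kernel_size=3, stride=1, padding=1))
            layers.append(nn.ReLU(inplace=True))
            in_channels = v
    return nn.Sequential(*layers)

# For a 960 x 640 input, the output feature map is 60 x 40 x 512, matching the table.
```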
Dataset | Size of Images | Number of Pedestrians | Number of Images | Sampling Frequency | Images without Pedestrians |
---|---|---|---|---|---|
LSI | 164 × 129 | 7624 | 6054 | 30 Hz | 1424 |
CVC-14 | 640 × 471 | 8242 | 2614 | 30 Hz | 632 |
SCUT | 720 × 576 | 8679 | 4153 | 25 Hz | 1322 |
CUSTFIR | 352 × 288 | 6408 | 2836 | 25 Hz | 621 |
Ground Truth \ Prediction | Object | Background |
---|---|---|
Object | True Positive | False Negative |
Background | False Positive | True Negative |
Metric (Dataset) | Channel Expansion and Background Suppression (Ours) 1 | Channel Expansion Only (Ours) | Faster R-CNN Only [40] | ResNet with Sliding Window [20] | VGGNet with Sliding Window [19] |
---|---|---|---|---|---|
Precision (LSIFIR) 2 | 82.66% (81.62 ± 5.41%) | 78.30% | 75.39% (75.79 ± 3.65%) | 75.70% | 62.60% |
Recall (LSIFIR) | 82.15% (77.55 ± 3.23%) | 78.36% | 81.12% (79.97 ± 4.81%) | 70.55% | 70.45% |
F-measure (LSIFIR) | 82.40% (79.39 ± 2.17%) | 78.33% | 78.15% (77.78 ± 3.02%) | 73.13% | 66.29% |
Precision (CVC-14) | 67.41% | 59.57% | 56.55% | 55.42% | 52.90% |
Recall (CVC-14) | 69.70% | 67.74% | 69.23% | 67.18% | 63.95% |
F-measure (CVC-14) | 68.54% | 63.39% | 62.25% | 60.73% | 57.90% |
Precision (SCUT) | 66.73% | 65.39% | 62.17% | 60.41% | 59.57% |
Recall (SCUT) | 74.28% | 71.86% | 70.91% | 69.70% | 67.74% |
F-measure (SCUT) | 70.30% | 68.47% | 66.25% | 64.72% | 63.39% |
Precision (CUSTFIR) 3 | 92.90% | 92.93% | 87.39% | 84.78% | 80.95% |
Recall (CUSTFIR) | 77.87% | 76.98% | 78.64% | 77.90% | 74.49% |
F-measure (CUSTFIR) | 84.72% | 84.20% | 82.79% | 81.20% | 77.58% |
Dataset | Channel Expansion and Background Suppression (Ours) 1 | Channel Expansion Only (Ours) | Faster R-CNN Only [40] | ResNet with Sliding Window [20] | VGGNet with Sliding Window [19] |
---|---|---|---|---|---|
LSIFIR | 82.72% (81.67 ± 1.54%) | 80.80% | 76.12% (78.69 ± 0.37%) | 65.48% | 60.23% |
CVC-14 | 68.39% | 61.86% | 59.87% | 57.34% | 56.94% |
SCUT | 69.72% | 66.81% | 63.54% | 59.72% | 58.97% |
CUSTFIR 2 | 85.45% | 83.82% | 82.25% | 79.41% | 78.10% |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).