Determination of Vehicle Trajectory through Optimization of Vehicle Bounding Boxes using a Convolutional Neural Network
Figure 1. Camera location and installation in the study area: (a) installation of the camera; (b) XNO-6010R sensor; and (c) location and coverage of the camera at an intersection.
Figure 2. Example of lens distortion: (a) lens-distorted image; (b) image calibrated through camera calibration.
Figure 3. Reference dataset for the determination of vehicle trajectory: (a) orthophoto by unmanned aerial vehicle (UAV); (b) reference dataset.
Figure 4. The framework of the proposed methodology.
Figure 5. The framework of YOLO in our methodology.
Figure 6. The architecture of YOLOv2.
Figure 7. The center position of the moving vehicle resulting from the geometric displacement of the camera.
Figure 8. The relationship between the geometric parameters of the moving vehicle.
Figure 9. The relationships between the geometric parameters of a moving vehicle.
Figure 10. Example of the training dataset: (a) daytime images; (b) nighttime images.
Figure 11. Example of the extracted bounding boxes of moving vehicles.
Figure 12. The projected center points of moving vehicles in closed-circuit television (CCTV) camera images and orthophotos: (a) CCTV camera images; (b) orthophoto images (green point: ground truth; blue point: results of the proposed algorithm; red point: results of the conventional algorithm).
Figure 13. Results of the vehicle trajectory extraction corresponding to the different algorithms: (a) driving straight (up and down); (b) driving straight (left and right); (c) right turn; and (d) left turn.
Abstract
1. Introduction
2. Sensor Platforms for Tracking Vehicle Location
2.1. Sensor Specification
2.2. Preprocessing for the Integration of Sensor Platforms and Spatial Information
- Step (1): Select four corresponding points in the image and in the orthophoto captured by the UAV.
- Step (2): Calculate the transformation matrix based on the perspective transformation [31].
- Step (3): Transform the image into the coordinates of the orthophoto using the transformation matrix.
- Step (4): Determine the length of the front of the vehicle from its horizontal and vertical movement while the vehicle is inside the intersection.
3. Proposed Methodology for the Determination of Vehicle Trajectory
3.1. Vehicle Detection Using YOLO
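YOLO emits many overlapping candidate boxes per vehicle, and duplicates are conventionally pruned by non-maximum suppression (Neubeck and Van Gool, cited in the references). A minimal greedy sketch; the (x1, y1, x2, y2) box format and the 0.5 overlap threshold are illustrative choices, not taken from the paper:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    then drop remaining boxes that overlap it by more than `thresh`."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [j for j in order if iou(boxes[best], boxes[j]) <= thresh]
    return keep
```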
3.2. Trajectory Estimation by Kalman Filtering and IOU Tracker
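As a generic illustration of Kalman filtering over a vehicle's image-plane center point, a constant-velocity model can be sketched as follows; the state layout and the noise values `q` and `r` are illustrative assumptions, not the paper's tuning:

```python
import numpy as np

class CVKalman:
    """Constant-velocity Kalman filter over a 2-D center point
    (state: x, y, vx, vy), advanced one frame per predict/update cycle."""

    def __init__(self, x0, y0, q=1.0, r=10.0):
        self.x = np.array([x0, y0, 0.0, 0.0])
        self.P = np.eye(4) * 100.0                 # large initial uncertainty
        self.F = np.array([[1, 0, 1, 0],           # position += velocity
                           [0, 1, 0, 1],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0],           # we observe position only
                           [0, 1, 0, 0]], float)
        self.Q = np.eye(4) * q                     # process noise
        self.R = np.eye(2) * r                     # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        y = np.asarray(z, float) - self.H @ self.x     # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

In an IOU-tracker setting such as Bochinski et al. (cited in the references), each track's predicted position is then associated with the current-frame detection whose bounding box overlaps it most.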
3.3. Trajectory Correction using Determination of Optimal Bounding Box
4. Results and Discussion
4.1. Training and Test Datasets
4.2. Experimental Results and Analysis
5. Conclusions
Author Contributions
Conflicts of Interest
References
- Liu, Y. Big Data Technology and its Analysis of Application in Urban Intelligent Transportation System. In Proceedings of the International Conference on Intelligent Transportation—Big Data Smart City, Xiamen, China, 25–26 January 2018; pp. 17–19.
- Luvizon, D.C.; Nassu, B.T.; Minetto, R. A video-based system for vehicle speed measurement in urban roadways. IEEE Trans. Intell. Transp. Syst. 2017, 18, 1393–1404.
- Nishibe, Y.; Ohta, N.; Tsukada, K.; Yamadera, H.; Nonomura, Y.; Mohri, K.; Uchiyama, T. Sensing of passing vehicles using a lane marker on road with a built-in thin film MI sensor and power source. IEEE Trans. Veh. Technol. 2004, 53, 1827–1834.
- Nishibe, Y.; Yamadera, H.; Ohta, N.; Tsukada, K.; Ohmura, Y. Magneto-impedance effect of a layered CoNbZr amorphous film formed on a polyimide substrate. IEEE Trans. Magn. 2003, 39, 571–575.
- Atkinson, D.; Squire, P.T.; Maylin, M.G.; Goreb, J. An integrating magnetic sensor based on the giant magneto-impedance effect. Sens. Actuators A Phys. 2000, 81, 82–85.
- Jogschies, L.; Klaas, D.; Kruppe, R.; Rittinger, J.; Taptimthong, P.; Wienecke, A.; Rissing, L.; Wurz, M.C. Recent developments of magnetoresistive sensors for industrial applications. Sensors 2015, 15, 28665–28689.
- Lu, C.C.; Huang, J.; Chiu, P.K.; Chiu, S.L.; Jeng, J.T. High-sensitivity low-noise miniature fluxgate magnetometers using a flip chip conceptual design. Sensors 2014, 14, 13815–13829.
- Dong, H.; Wang, X.; Zhang, C.; He, R.; Jia, L.; Qin, Y. Improved robust vehicle detection and identification based on single magnetic sensor. IEEE Access 2018, 6, 5247–5255.
- Marszalek, Z.; Zeglen, T.; Sroka, R.; Gajda, J. Inductive loop axle detector based on resistance and reactance vehicle magnetic profiles. Sensors 2018, 18, 2376.
- Ki, Y.; Lee, D. A traffic accident recording and reporting model at intersections. IEEE Trans. Intell. Transp. Syst. 2007, 8, 188–194.
- Wang, Y.; Zou, Y.; Shi, H.; Zhao, H. Video Image Vehicle Detection System for Signaled Traffic Intersection. In Proceedings of the Ninth International Conference on Hybrid Intelligent Systems, Shenyang, China, 12–14 August 2009; pp. 222–227.
- Kato, J.; Watanabe, T.; Joga, S.; Rittscher, J.; Blake, A. An HMM-based segmentation method for traffic monitoring movies. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 1291–1296.
- Cucchiara, R.; Piccardi, M.; Mello, P. Image analysis and rule-based reasoning for a traffic monitoring system. IEEE Trans. Intell. Transp. Syst. 2000, 1, 119–130.
- Zhou, J.; Gao, D.; Zhang, D. Moving vehicle detection for automatic traffic monitoring. IEEE Trans. Veh. Technol. 2007, 56, 51–58.
- Lin, J.; Sun, M. A YOLO-based Traffic Counting System. In Proceedings of the 2018 Conference on Technologies and Applications of Artificial Intelligence (TAAI), Taichung, Taiwan, 30 November–2 December 2018; pp. 82–85.
- Kim, K.; Kim, P.; Chung, Y.; Choi, D. Multi-scale detector for accurate vehicle detection in traffic surveillance data. IEEE Access 2019, 7, 2169–3536.
- Forero, A.; Calderon, F. Vehicle and Pedestrian Video-Tracking with Classification Based on Deep Convolutional Neural Networks. In Proceedings of the 2019 XXII Symposium on Image, Signal Processing and Artificial Vision (STSIVA), Bucaramanga, Colombia, 24–26 April 2019.
- Asha, C.S.; Narasimhadhan, A.V. Vehicle Counting for Traffic Management System Using YOLO and Correlation Filter. In Proceedings of the 2018 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT), Bangalore, India, 16–17 March 2018; pp. 1–6.
- Zhang, F.; Li, C.; Yang, F. Vehicle detection in urban traffic surveillance images based on convolutional neural networks with feature concatenation. Sensors 2019, 19, 594.
- Xu, Z.; Shi, H.; Li, N.; Xiang, C.; Zhou, H. Vehicle Detection Under UAV Based on Optimal Dense YOLO Method. In Proceedings of the 2018 5th International Conference on Systems and Informatics (ICSAI), Nanjing, China, 10–12 November 2018.
- Zhang, J.; Wang, W.; Lu, C.; Wang, J.; Sangaiah, A.K. Lightweight deep network for traffic sign classification. Ann. Telecommun. 2019, 74, 1–11.
- Zhang, J.; Jin, X.; Sun, J.; Wang, J.; Sangaiah, A.K. Spatial and semantic convolutional features for robust visual object tracking. Multimedia Tools Appl. 2018, 1–21.
- Zhang, J.; Jin, X.; Sun, J.; Wang, J.; Li, K. Dual model learning combined with multiple feature selection for accurate visual tracking. IEEE Access 2019, 7, 43956–43969.
- Zhang, J.; Wu, Y.; Feng, W.; Wang, J. Spatially attentive visual tracking using multi-model adaptive response fusion. IEEE Access 2019, 7, 83873–83887.
- Koller, D.; Weber, J.; Huang, T.; Malik, J.; Ogasawara, G.; Rao, B.; Russel, S. Towards Robust Automatic Traffic Scene Analysis in Real-Time. In Proceedings of the 33rd Conference on Decision and Control, Lake Buena Vista, FL, USA, 14–16 December 1994; pp. 3776–3781.
- Wang, Y.; Deng, W.; Liu, Z.; Wang, J. Deep learning-based vehicle detection with synthetic image data. IET Intell. Transp. Syst. 2019, 13, 1097–1105.
- Sang, J.; Wu, Z.; Guo, P.; Hu, H.; Xiang, H.; Zhang, Q.; Cai, B. An improved YOLOv2 for vehicle detection. Sensors 2018, 18, 4272.
- Li, J.; Chen, S.; Zhang, F.; Li, E.; Yang, T.; Lu, Z. An adaptive framework for multi-vehicle ground speed estimation in airborne videos. Remote Sens. 2019, 11, 1241.
- Wang, L.; Zhang, L.; Yi, Z. Trajectory predictor by using recurrent neural networks in visual tracking. IEEE Trans. Cybern. 2017, 47, 3172–3183.
- Brown, D.C. Close-Range Camera Calibration. In Proceedings of the Symposium on Close-Range Photogrammetry System, ISPRS, Chicago, IL, USA, 28 July–1 August 1971; pp. 855–866.
- Horaud, R. New methods for matching 3-D objects with single perspective view. IEEE Trans. Pattern Anal. Mach. Intell. 1987, 9, 401–412.
- Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6517–6525.
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
- Neubeck, A.; Van Gool, L. Efficient Non-Maximum Suppression. In Proceedings of the International Conference on Pattern Recognition (ICPR), Hong Kong, China, 20–24 August 2006; pp. 850–855.
- Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 448–456.
- Faragher, R. Understanding the basis of the Kalman filter via a simple and intuitive derivation. IEEE Signal Process. Mag. 2012, 29, 128–132.
- Peterfreund, N. Robust tracking of position and velocity with Kalman snakes. IEEE Trans. Pattern Anal. Mach. Intell. 1999, 21, 564–569.
- Kalman, R.E. A new approach to linear filtering and prediction problems. Trans. ASME J. Basic Eng. 1960, 82, 35–45.
- Bochinski, E.; Eiselein, V.; Sikora, T. High-Speed Tracking-by-Detection without Using Image Information. In Proceedings of the IEEE International Conference on Advanced Video and Signal Based Surveillance, Lecce, Italy, 29 August–1 September 2017; pp. 1–6.
| Sensor | XNO-6010R |
| --- | --- |
| Imaging devices | 1/2.8″ 2-megapixel CMOS |
| Effective pixels | 1945 (H) × 1097 (V), 2.13 megapixels |
| Signal-to-noise ratio | 50 dB |
| Focal length | 2.4 mm fixed |
| Field of view | Horizontal: 139.0°; Vertical: 73.0°; Diagonal: 167.0° |
| Weight | 1.22 kg |
| Installation height | 9.5 m |
| Num | Type | Input | Filters | Size/Stride | Output |
| --- | --- | --- | --- | --- | --- |
| 0 | conv | 416 × 416 × 3 | 32 | 3 × 3/1 | 416 × 416 × 32 |
| 1 | max | 416 × 416 × 32 | | 2 × 2/2 | 208 × 208 × 32 |
| 2 | conv | 208 × 208 × 32 | 64 | 3 × 3/1 | 208 × 208 × 64 |
| 3 | max | 208 × 208 × 64 | | 2 × 2/2 | 104 × 104 × 64 |
| 4 | conv | 104 × 104 × 64 | 128 | 3 × 3/1 | 104 × 104 × 128 |
| 5 | conv | 104 × 104 × 128 | 64 | 1 × 1/1 | 104 × 104 × 64 |
| 6 | conv | 104 × 104 × 64 | 128 | 3 × 3/1 | 104 × 104 × 128 |
| 7 | max | 104 × 104 × 128 | | 2 × 2/2 | 52 × 52 × 128 |
| 8 | conv | 52 × 52 × 128 | 256 | 3 × 3/1 | 52 × 52 × 256 |
| 9 | conv | 52 × 52 × 256 | 128 | 1 × 1/1 | 52 × 52 × 128 |
| 10 | conv | 52 × 52 × 128 | 256 | 3 × 3/1 | 52 × 52 × 256 |
| 11 | max | 52 × 52 × 256 | | 2 × 2/2 | 26 × 26 × 256 |
| 12 | conv | 26 × 26 × 256 | 512 | 3 × 3/1 | 26 × 26 × 512 |
| 13 | conv | 26 × 26 × 512 | 256 | 1 × 1/1 | 26 × 26 × 256 |
| 14 | conv | 26 × 26 × 256 | 512 | 3 × 3/1 | 26 × 26 × 512 |
| 15 | conv | 26 × 26 × 512 | 256 | 1 × 1/1 | 26 × 26 × 256 |
| 16 | conv | 26 × 26 × 256 | 512 | 3 × 3/1 | 26 × 26 × 512 |
| 17 | max | 26 × 26 × 512 | | 2 × 2/2 | 13 × 13 × 512 |
| 18 | conv | 13 × 13 × 512 | 1024 | 3 × 3/1 | 13 × 13 × 1024 |
| 19 | conv | 13 × 13 × 1024 | 512 | 1 × 1/1 | 13 × 13 × 512 |
| 20 | conv | 13 × 13 × 512 | 1024 | 3 × 3/1 | 13 × 13 × 1024 |
| 21 | conv | 13 × 13 × 1024 | 512 | 1 × 1/1 | 13 × 13 × 512 |
| 22 | conv | 13 × 13 × 512 | 1024 | 3 × 3/1 | 13 × 13 × 1024 |
| 23 | conv | 13 × 13 × 1024 | 1024 | 3 × 3/1 | 13 × 13 × 1024 |
| 24 | conv | 13 × 13 × 1024 | 1024 | 3 × 3/1 | 13 × 13 × 1024 |
| 25 | route | 16th | | | 26 × 26 × 512 |
| 26 | reorg | 26 × 26 × 512 | | /1 | 13 × 13 × 2048 |
| 27 | route | 26th and 24th | | | 13 × 13 × 3072 |
| 28 | conv | 13 × 13 × 3072 | 1024 | 3 × 3/1 | 13 × 13 × 1024 |
| 29 | conv | 13 × 13 × 1024 | 40 | 1 × 1/1 | 13 × 13 × 40 |
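A quick consistency check on the architecture table: five stride-2 max-pooling layers take the 416-pixel input down to a 13 × 13 grid; the reorg layer stacks each 2 × 2 spatial block of the 26 × 26 × 512 route into channels (512 × 4 = 2048); and the final route concatenates 2048 + 1024 = 3072 channels. The 40 output channels would match 5 anchor boxes × (5 box terms + 3 classes), though that breakdown is our inference rather than a statement from the table:

```python
# Five stride-2 max-pooling layers (rows 1, 3, 7, 11, 17) halve the grid:
size = 416
for _ in range(5):
    size //= 2          # 416 -> 208 -> 104 -> 52 -> 26 -> 13
assert size == 13

# reorg (row 26) stacks each 2 x 2 spatial block into channels:
reorg_channels = 512 * 2 * 2        # 26 x 26 x 512 -> 13 x 13 x 2048
assert reorg_channels == 2048

# route (row 27) concatenates the reorg output with row 24's 1024 channels:
assert reorg_channels + 1024 == 3072

# Final 1 x 1 conv emits 40 channels: consistent with 5 anchors x (5 + 3 classes),
# but the class count here is an assumption.
assert 5 * (5 + 3) == 40
```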
| | Conventional Algorithm | Proposed Algorithm |
| --- | --- | --- |
| RMSE (X) | 14.35 | 7.41 |
| RMSE (Y) | 30.00 | 14.75 |
| RMSE | 33.25 | 16.51 |
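The combined RMSE rows are consistent with adding the per-axis errors in quadrature, RMSE = sqrt(RMSE_X^2 + RMSE_Y^2), for both algorithms:

```python
import math

# (RMSE_X, RMSE_Y, combined RMSE) for the conventional and proposed algorithms
for rx, ry, total in [(14.35, 30.00, 33.25), (7.41, 14.75, 16.51)]:
    combined = math.hypot(rx, ry)   # sqrt(rx**2 + ry**2)
    # Agrees with the tabulated value up to rounding of the reported figures.
    assert abs(combined - total) < 0.02
```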
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Seong, S.; Song, J.; Yoon, D.; Kim, J.; Choi, J. Determination of Vehicle Trajectory through Optimization of Vehicle Bounding Boxes using a Convolutional Neural Network. Sensors 2019, 19, 4263. https://doi.org/10.3390/s19194263