

A Highly Efficient Vehicle Taillight Detection Approach Based on Deep Learning

Published: 01 July 2021

Abstract

Vehicle taillight detection is essential for analyzing and predicting driver intention in collision avoidance systems. In this article, we propose an end-to-end framework that locates rear brake and turn signals in a video stream in real time. The system adopts the fast YOLOv3-tiny as its backbone, and three improvements are made to increase detection accuracy on taillight semantics: an additional output layer for multi-scale detection, a spatial pyramid pooling (SPP) module for richer deep features, and focal loss to alleviate class imbalance and improve hard-sample classification. Experimental results demonstrate that integrating multi-scale features and hard-example mining greatly improves turn-signal detection: accuracy increases by 7.36%, 32.04%, and 21.65% (absolute gain) for brake, left-turn, and right-turn signals, respectively. In addition, we construct a taillight detection dataset in which brake and turn signals are annotated with bounding boxes, which may help nourish the development of this field.
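
As a rough illustration of two of these additions, the sketch below shows a spatial pyramid pooling block and a binary focal-loss term in PyTorch. It is a minimal sketch under assumed settings (pooling kernels 5/9/13, gamma = 2.0, alpha = 0.25), not the authors' implementation; the class and function names are hypothetical.

```python
# Minimal PyTorch sketch of two of the additions described in the abstract:
# a spatial pyramid pooling (SPP) block and a binary focal loss.
# Kernel sizes, gamma and alpha are illustrative assumptions, not the
# authors' reported configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SPP(nn.Module):
    """Concatenate the input with max-pooled copies at several kernel sizes."""

    def __init__(self, kernel_sizes=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            [nn.MaxPool2d(k, stride=1, padding=k // 2) for k in kernel_sizes]
        )

    def forward(self, x):
        # Spatial size is preserved; channels grow by a factor of len(pools) + 1.
        return torch.cat([x] + [pool(x) for pool in self.pools], dim=1)


def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Focal loss for binary targets: down-weights easy, well-classified examples."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                              # probability of the true class
    alpha_t = alpha * targets + (1.0 - alpha) * (1.0 - targets)
    return (alpha_t * (1.0 - p_t) ** gamma * bce).mean()


# Example: an SPP block on a 256-channel feature map and a focal loss on
# per-anchor objectness logits.
feat = torch.randn(1, 256, 13, 13)
spp_out = SPP()(feat)                                  # shape (1, 1024, 13, 13)
obj_logits = torch.randn(8, 1)
obj_targets = torch.randint(0, 2, (8, 1)).float()
loss = focal_loss(obj_logits, obj_targets)
```

In YOLOv3-style detectors that use these pieces, the SPP block typically sits just before the detection head on the coarsest feature map, and the focal loss replaces the standard cross-entropy terms for objectness and class prediction.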


Cited By

  • (2024) Double Domain Guided Real-Time Low-Light Image Enhancement for Ultra-High-Definition Transportation Surveillance. IEEE Transactions on Intelligent Transportation Systems, vol. 25, no. 8, pp. 9550–9562. DOI: 10.1109/TITS.2024.3359755. Online publication date: 1 Aug. 2024.
  • (2023) Taillight Signal Recognition via Sequential Learning. Proceedings of the 52nd International Conference on Parallel Processing Workshops, pp. 1–7. DOI: 10.1145/3605731.3605872. Online publication date: 7 Aug. 2023.
  • (2023) P3SNet: Parallel Pyramid Pooling Stereo Network. IEEE Transactions on Intelligent Transportation Systems, vol. 24, no. 10, pp. 10433–10444. DOI: 10.1109/TITS.2023.3276328. Online publication date: 1 Oct. 2023.
  • (2023) Brake light detection of vehicles using differential evolution based neural architecture search. Applied Soft Computing, vol. 147, no. C. DOI: 10.1016/j.asoc.2023.110839. Online publication date: 1 Nov. 2023.


    Published In

IEEE Transactions on Intelligent Transportation Systems, Volume 22, Issue 7
July 2021
867 pages

    Publisher

    IEEE Press

    Publication History

    Published: 01 July 2021

    Qualifiers

    • Research-article
