
Ground Truth Data Generator in Automotive Infrared Sensor Vision Problems Using a Minimum Set of Operations

  • Conference paper
Advances in Computational Collective Intelligence (ICCCI 2023)

Abstract

In image vision we call a ground truth data generator any software tool or algorithm that contributes, in a semi- or fully automatic way, to the extraction of ground truth labels from a data set. The main purpose of such automation is to reduce as much as possible the manual effort of labeling a large number of frames. Above all, such a generator must be precise and avoid false positives, because its results are used as training data for neural networks. In this paper we present a minimum set of operations required for fully automatic generation of labels from existing grayscale images in automotive image vision problems such as eye detection or traffic sign recognition. Multiple configurations based on these operations have been created to fit various desired features. We shifted the focus from developing algorithms for ground truth data generation to understanding the particularities of an object or sign in the grayscale spectrum and defining the correct configurations to detect it. We present these configurations and the results obtained in the ground truth data generation process.
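The abstract describes the generator as a small set of elementary operations that are chained into per-object configurations, but it does not list the operations themselves. The Python sketch below is therefore only an assumed illustration of such a configuration-driven pipeline: the names threshold, bounding_box, generate_label and DARK_BLOB_CONFIG are hypothetical, and intensity thresholding stands in for whatever grayscale operations the paper actually defines.

    # Illustrative sketch only: the paper's concrete operation set and
    # configurations are not given on this page, so the operations below
    # (intensity thresholding and bounding-box extraction) and all names
    # are assumptions chosen for demonstration.
    import numpy as np

    def threshold(img, lo, hi):
        # Keep pixels whose grayscale value lies in the interval [lo, hi].
        return ((img >= lo) & (img <= hi)).astype(np.uint8)

    def bounding_box(mask):
        # Return (x_min, y_min, x_max, y_max) of the non-zero region, or None.
        ys, xs = np.nonzero(mask)
        if xs.size == 0:
            return None
        return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

    # A "configuration" is modeled here as an ordered list of (operation, kwargs)
    # pairs applied to a grayscale frame; different objects or signs would use
    # different lists tuned to their appearance in the grayscale spectrum.
    DARK_BLOB_CONFIG = [(threshold, {"lo": 0, "hi": 40})]

    def generate_label(frame, config):
        mask = frame
        for op, kwargs in config:
            mask = op(mask, **kwargs)
        return bounding_box(mask)

    if __name__ == "__main__":
        # Synthetic 8-bit grayscale frame with a dark blob standing in for
        # a dark object such as a pupil in an infrared driver recording.
        frame = np.full((120, 160), 200, dtype=np.uint8)
        frame[50:70, 80:100] = 10
        print(generate_label(frame, DARK_BLOB_CONFIG))  # (80, 50, 99, 69)

Running the script on the synthetic frame prints the bounding box of the dark region, which is the kind of label such a generator would emit for each frame.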

Author information

Corresponding author

Correspondence to Sorin Valcan.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Valcan, S., Gaianu, M. (2023). Ground Truth Data Generator in Automotive Infrared Sensor Vision Problems Using a Minimum Set of Operations. In: Nguyen, N.T., et al. Advances in Computational Collective Intelligence. ICCCI 2023. Communications in Computer and Information Science, vol 1864. Springer, Cham. https://doi.org/10.1007/978-3-031-41774-0_50

  • DOI: https://doi.org/10.1007/978-3-031-41774-0_50

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-41773-3

  • Online ISBN: 978-3-031-41774-0

  • eBook Packages: Computer Science, Computer Science (R0)
