DeepPBM: Deep Probabilistic Background Model Estimation from Video Sequences

  • Conference paper
  • First Online:
Pattern Recognition. ICPR International Workshops and Challenges (ICPR 2021)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12662)

Abstract

This paper presents a novel unsupervised probabilistic approach to estimating the visual background of video sequences using a variational autoencoder framework. Because the background of a surveillance video is largely redundant across frames, its visual information can be compressed into a low-dimensional subspace by the encoder of the variational autoencoder, while the highly variable information of the moving foreground is filtered out during the encoding-decoding process. Our deep probabilistic background model (DeepPBM) estimation approach builds on the ability of deep neural networks to learn compressed representations of video frames and reconstruct them in the original domain. We evaluated the background subtraction performance of DeepPBM on 9 surveillance videos from the Background Models Challenge (BMC2012) dataset and compared it with a standard subspace learning technique, robust principal component analysis (RPCA), which similarly estimates a deterministic low-dimensional representation of the video background and is widely used for this application. Our method outperforms RPCA on the BMC2012 dataset by 23% on average in F-measure score, while background subtraction with the trained model runs more than 10 times faster (the source code is available at: https://github.com/ostadabbas/DeepPBM).

R. Behnaz and F. Amirreza—Equal contribution.
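
The abstract describes the core mechanism: a variational autoencoder compresses each frame into a low-dimensional latent code from which the redundant background can be reconstructed, while the fast-changing foreground does not survive the bottleneck. Below is a minimal PyTorch sketch of such a background VAE. It is not the authors' implementation (that is available at https://github.com/ostadabbas/DeepPBM); the frame size, latent dimension, layer widths, loss, and training hyperparameters here are illustrative assumptions only.

import torch
import torch.nn as nn
import torch.nn.functional as F

FRAME_DIM = 64 * 48   # flattened grayscale frame size (assumed, not from the paper)
LATENT_DIM = 30       # dimensionality of the background subspace (assumed)

class BackgroundVAE(nn.Module):
    """Fully connected VAE that learns a compressed background representation."""
    def __init__(self, frame_dim=FRAME_DIM, latent_dim=LATENT_DIM):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(frame_dim, 512), nn.ReLU())
        self.fc_mu = nn.Linear(512, latent_dim)       # mean of q(z | x)
        self.fc_logvar = nn.Linear(512, latent_dim)   # log-variance of q(z | x)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, frame_dim), nn.Sigmoid())  # pixel intensities in [0, 1]

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.dec(z), mu, logvar

def elbo_loss(recon, x, mu, logvar):
    # Negative ELBO: reconstruction error plus KL(q(z | x) || N(0, I)).
    rec = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

def train_background_vae(frames, epochs=20, batch_size=64, lr=1e-3):
    """frames: an (N, FRAME_DIM) tensor of flattened frames scaled to [0, 1]."""
    model = BackgroundVAE()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loader = torch.utils.data.DataLoader(frames, batch_size=batch_size, shuffle=True)
    for _ in range(epochs):
        for x in loader:
            opt.zero_grad()
            recon, mu, logvar = model(x)
            elbo_loss(recon, x, mu, logvar).backward()
            opt.step()
    return model

def background_estimate(model, frame):
    """Reconstruct the background of a single flattened frame with the trained VAE."""
    with torch.no_grad():
        recon, _, _ = model(frame.unsqueeze(0))
    return recon.squeeze(0)

Given a trained model, one plausible way to obtain a foreground mask, consistent with the background subtraction evaluation described in the abstract, is to threshold the absolute difference between a frame and its reconstructed background (the threshold value would be an additional assumption not specified here).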

Author information

Corresponding author

Correspondence to Sarah Ostadabbas.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Behnaz, R., Amirreza, F., Ostadabbas, S. (2021). DeepPBM: Deep Probabilistic Background Model Estimation from Video Sequences. In: Del Bimbo, A., et al. Pattern Recognition. ICPR International Workshops and Challenges. ICPR 2021. Lecture Notes in Computer Science, vol 12662. Springer, Cham. https://doi.org/10.1007/978-3-030-68790-8_47

  • DOI: https://doi.org/10.1007/978-3-030-68790-8_47

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-68789-2

  • Online ISBN: 978-3-030-68790-8

  • eBook Packages: Computer Science, Computer Science (R0)
