EVBS-CAT: enhanced video background subtraction with a controlled adaptive threshold for constrained wireless video surveillance

Research article published in the Journal of Real-Time Image Processing.

Abstract

Moving object detection (MOD) has gained significant attention for its application in advanced video surveillance tasks. Region-of-Interest (ROI) detection algorithms are essential prerequisites for various applications, ranging from video surveillance to adaptive video coding. The simplicity and efficiency of MOD methods are critical when targeting energy-constrained systems, such as Wireless Multimedia Sensor Networks (WMSN). The challenge is always to reduce computational costs while preserving high detection accuracy. In this article, we present EVBS-CAT, an Enhanced Video Background Subtraction with a Controlled Adaptive Threshold selection method for low-cost surveillance systems. The proposed moving object detection method utilizes background subtraction (BS) with morphological operations and adaptive thresholding. We evaluate the algorithm using the Change Detection 2012 dataset. Through a computational complexity analysis of each step, we demonstrate the efficiency of the proposed MOD technique for embedded WMSN. The algorithm yields promising results compared to state-of-the-art MOD techniques in the context of embedded wireless surveillance.
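The full text is not reproduced here, but the pipeline the abstract outlines (background subtraction, an adaptive threshold, then morphological clean-up) can be illustrated with a minimal sketch. The snippet below is a generic OpenCV/NumPy rendering of that kind of pipeline, not the authors' EVBS-CAT code: an Otsu-based threshold stands in for the paper's controlled adaptive threshold, and the function name, kernel size, and file paths are illustrative assumptions.

```python
# Minimal background-subtraction sketch (NOT the authors' EVBS-CAT implementation):
# frame differencing + adaptive (Otsu) threshold + morphological clean-up.
import cv2
import numpy as np

def detect_moving_objects(background_gray, frame_gray, kernel_size=5):
    """Return a binary foreground mask for one frame.

    background_gray, frame_gray: uint8 grayscale images of equal size.
    The threshold is chosen adaptively per frame with Otsu's method,
    standing in here for the paper's controlled adaptive threshold.
    """
    # 1. Background subtraction: absolute difference against the background model.
    diff = cv2.absdiff(frame_gray, background_gray)

    # 2. Adaptive threshold: Otsu picks the cut-off from the difference histogram.
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # 3. Morphological clean-up: opening removes isolated noise pixels,
    #    closing fills small holes inside detected objects.
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask

# Example usage with two frames from a CDnet-style sequence (paths are illustrative).
if __name__ == "__main__":
    bg = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)
    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
    cv2.imwrite("foreground_mask.png", detect_moving_objects(bg, frame))
```

The actual EVBS-CAT threshold-selection rule differs from plain Otsu thresholding; the sketch only conveys the overall structure of a subtraction, thresholding, and morphology pipeline of the kind targeted at low-cost WMSN nodes.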


Data availability

The datasets supporting the conclusions of this article are available in the following online repositories:

CDnet2012 Mask Results: available in the GitHub repository https://github.com/ahcen23/Data_Aliouat_2024_EVBS_CAT_JRTIP.

Associated Mask Videos: viewable at https://www.youtube.com/playlist?list=PLMil6W99Iz1BeAAnLshN56ijpoWLMfaHT.

These resources provide the data and visual masks pertinent to our study on the CDnet 2012 dataset.
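For readers comparing the released masks against the CDnet 2012 ground truth, the sketch below computes per-frame precision, recall, and F-measure. It is not the official CDnet evaluation utility; the file names and the handling of non-binary ground-truth labels are assumptions made for illustration.

```python
# Minimal sketch for scoring one released mask against one CDnet ground-truth mask
# (file names are illustrative; this is not the official CDnet evaluation tool).
import cv2
import numpy as np

def f_measure(result_path, groundtruth_path):
    """Compute precision, recall and F-measure for a single frame pair."""
    result = cv2.imread(result_path, cv2.IMREAD_GRAYSCALE)
    gt = cv2.imread(groundtruth_path, cv2.IMREAD_GRAYSCALE)

    # Only pixels labelled clearly background (0) or foreground (255) are scored;
    # intermediate CDnet labels (shadow, unknown, outside ROI) are left out here.
    valid = (gt == 0) | (gt == 255)
    pred_fg = (result > 127) & valid
    true_fg = (gt == 255) & valid

    tp = np.count_nonzero(pred_fg & true_fg)
    fp = np.count_nonzero(pred_fg & ~true_fg)
    fn = np.count_nonzero(~pred_fg & true_fg)

    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example call with hypothetical CDnet-style file names.
print(f_measure("bin000500.png", "gt000500.png"))
```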


Acknowledgements

This work was supported in part by the PHC TASSILI 21MDU323.

Author information

Corresponding author

Correspondence to Ahcen Aliouat.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work was partially supported by Campus France 46082TB and PHC TASSILI program 21MDU323.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Aliouat, A., Kouadria, N., Maimour, M. et al. EVBS-CAT: enhanced video background subtraction with a controlled adaptive threshold for constrained wireless video surveillance. J Real-Time Image Proc 21, 9 (2024). https://doi.org/10.1007/s11554-023-01388-3


  • DOI: https://doi.org/10.1007/s11554-023-01388-3
