
Towards exploiting believe function theory for object based scene classification problem

Published in Multimedia Tools and Applications

Abstract

Scene classification is one of the active research domains of artificial intelligence (AI), with many real-world applications. This paper presents a new scene classification approach based on Belief Function Theory, which handles uncertain information more effectively than traditional probability-based methods. Unlike previous methods that rely on probabilities, whose limitations are well documented, the main contribution of our approach is the use of belief degrees to classify unknown scenes from their object labels. We conduct experiments on three well-known datasets (SUN397, MIT Indoor, and LabelMe) and compare our results with state-of-the-art methods. Our approach achieves competitive results with a simple and robust framework that outperforms previous methods in some cases. We also provide insights into the strengths and limitations of our approach and discuss potential directions for future research. Overall, our work demonstrates the effectiveness of Belief Function Theory for scene classification and opens new avenues for further research and innovation in this area.
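For readers unfamiliar with the Belief Function (Dempster-Shafer) framework, the sketch below illustrates the general idea of turning detected object labels into mass functions and fusing them into scene-level belief degrees. This is a minimal illustration, not the authors' implementation: the scene classes, the object-to-scene evidence table, the mass values, and the function names are all hypothetical, and Dempster's rule is used only as one standard combination rule.

```python
# Minimal illustrative sketch (not the authors' implementation): fusing belief
# masses induced by detected object labels and ranking scene classes with
# Dempster's rule of combination. The scene classes, the object-to-scene
# evidence table, and all mass values below are hypothetical.

from itertools import product

FRAME = frozenset({"kitchen", "office", "bedroom"})  # hypothetical frame of discernment

# Hypothetical evidence table: object label -> (plausible scenes, belief degree)
EVIDENCE = {
    "oven":    (frozenset({"kitchen"}), 0.8),
    "monitor": (frozenset({"office"}), 0.7),
    "bed":     (frozenset({"bedroom"}), 0.9),
    "chair":   (frozenset({"office", "bedroom"}), 0.3),
}

def mass_from_object(label):
    """Turn one detected object label into a mass function (focal set -> mass).
    Mass not committed to a focal set is assigned to the whole frame (ignorance)."""
    focal, m = EVIDENCE.get(label, (FRAME, 0.0))
    return {FRAME: 1.0} if focal == FRAME or m == 0.0 else {focal: m, FRAME: 1.0 - m}

def dempster_combine(m1, m2):
    """Dempster's rule: conjunctive combination followed by conflict normalization."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources are fully contradictory")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

def classify(detected_objects):
    """Fuse the evidence of all detected objects and score each scene by its belief."""
    m = {FRAME: 1.0}  # vacuous mass function (total ignorance)
    for obj in detected_objects:
        m = dempster_combine(m, mass_from_object(obj))
    belief = {s: sum(v for focal, v in m.items() if focal <= frozenset({s})) for s in FRAME}
    return max(belief, key=belief.get), belief

print(classify(["oven", "chair"]))  # the toy masses above favour "kitchen"
```

Note the design choice that distinguishes this from a probabilistic model: mass not committed by an object's evidence is assigned to the entire frame rather than spread uniformly over classes, which is how belief functions represent ignorance explicitly.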



Data availability

The datasets generated and/or analyzed during the current study are available in the following repositories:

MIT Indoor dataset: https://web.mit.edu/torralba/www/indoor.html

SUN397 dataset: https://vision.princeton.edu/projects/2010/SUN/

LabelMe dataset: http://labelme2.csail.mit.edu/Release3.0/browserTools/php/publications.php


Acknowledgements

We would like to thank M. Mohamed Rahal and M. Mouad Oubouchou for their contribution to this work through their Bachelor thesis entitled "Contribution à l'élaboration d'une plateforme de collaboration en ligne de génération automatique de scènes" ("Contribution to the development of an online collaborative platform for automatic scene generation"), 2018.

Funding

The authors affirm that no financial or personal relationships could impact or bias the objectivity, interpretations, or conclusions presented in this paper. The research was conducted without external influence or vested interests.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Anfel Amirat.

Ethics declarations

Conflicts of interest

The authors declare no conflicts of interest regarding this research paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Amirat, A., Benrais, L. & Baha, N. Towards exploiting believe function theory for object based scene classification problem. Multimed Tools Appl 83, 39235–39253 (2024). https://doi.org/10.1007/s11042-023-17120-z

