Selective Multi-cotraining for Video Concept Detection

Published: 01 April 2014

Abstract

Research interest in cotraining, which combines information from (usually two) classifiers to iteratively enlarge the training resources and strengthen the classifiers, is increasing. We select classifiers for cotraining when more than two representations of the data are available. The classifier based on the selected representation, or data descriptor, is expected to provide the most complementary information as new labels for the target classifier; these labels are critical for the next learning iteration. We present two criteria for selecting the complementary classifier, in which classification results on a validation set are used to compute statistics for all available classifiers. These statistics are used not only to pick the best classifier but also to determine the number of new labels to add for the target classifier. We demonstrate the effectiveness of classifier selection on the semantic indexing task of the TRECVID 2013 dataset and compare it to self-training.
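The selection step described above can be sketched in a few lines. This is a minimal illustration, not the paper's method: the hypothetical `select_source` function scores every non-target classifier by validation accuracy, picks the best one as the label source, and derives a pseudo-label budget from that score. The paper's two selection criteria and its rule for setting the number of new labels are not reproduced here; the proportional rule below is an invented placeholder.

```python
def validation_accuracy(classifier, val_set):
    """Fraction of validation examples (x, y) the classifier labels correctly."""
    correct = sum(1 for x, y in val_set if classifier(x) == y)
    return correct / len(val_set)

def select_source(classifiers, target_idx, val_set, max_new=10):
    """One selection round of multi-cotraining (illustrative sketch).

    Scores every classifier except the target on the validation set,
    returns the index of the best-scoring one (whose predictions would
    serve as new labels for the target) and how many pseudo-labels to
    transfer. The budget rule -- proportional to validation accuracy --
    is an assumption for illustration, not the paper's criterion.
    """
    scored = [
        (validation_accuracy(c, val_set), i)
        for i, c in enumerate(classifiers)
        if i != target_idx
    ]
    best_acc, best_idx = max(scored)
    n_new = int(round(best_acc * max_new))
    return best_idx, n_new
```

In a full loop, the selected classifier would label its most confident unlabeled examples, those labels would be added to the target classifier's training set, and all classifiers would be retrained before the next iteration.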



Published In

ICMR '14: Proceedings of International Conference on Multimedia Retrieval
April 2014, 564 pages
ISBN: 9781450327824
DOI: 10.1145/2578726

Publisher

Association for Computing Machinery, New York, NY, United States


      Author Tags

1. cotraining
2. bootstrapping
3. feature selection

      Qualifiers

      • Tutorial
      • Research
      • Refereed limited

      Conference

      ICMR '14
      ICMR '14: International Conference on Multimedia Retrieval
      April 1 - 4, 2014
      Glasgow, United Kingdom

      Acceptance Rates

ICMR '14 paper acceptance rate: 21 of 111 submissions (19%)
Overall acceptance rate: 254 of 830 submissions (31%)
