DOI: 10.1145/2757513.2757516
Video Highlight Shot Extraction with Time-Sync Comment

Published: 22 June 2015

Abstract

Benefiting from the abundance of mobile applications, the portability of large-screen mobile devices, and the accessibility of media resources, users now strongly prefer to watch videos on their mobile devices, whether at home or on the go. Constrained by limited time and network data, however, users often watch only a few popular video segments that have been manually annotated by video editors. In this paper, we aim to automatically extract video highlight shots using the sentiment features carried by time-sync comments. We first analyze the statistical features of real data, then model the generation process of time-sync comments, and finally propose a shot boundary detection method for extracting highlight shots, which proves more effective than traditional methods based on comment density. Our experiments show that time-sync comments are particularly well suited to sentiment-based video segment extraction for two reasons: 1) text-based similarity calculation is much faster than image-based processing, which depends on every frame of the video; and 2) time-sync comments reflect users' subjective emotions and are therefore useful for personalized video recommendation.
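To illustrate the comment-density baseline that the paper compares against (this is a minimal sketch, not the authors' implementation; the function name, window size, and synthetic data are hypothetical), one can bucket time-sync comment timestamps into fixed-length windows and take the densest windows as candidate highlight segments:

```python
from collections import Counter

def density_highlights(timestamps, window=10.0, top_k=1):
    """Bucket time-sync comment timestamps (in seconds) into fixed-length
    windows and return the start times of the top_k densest windows."""
    buckets = Counter(int(t // window) for t in timestamps)
    # Rank windows by descending comment count; break ties by earlier time.
    ranked = sorted(buckets.items(), key=lambda kv: (-kv[1], kv[0]))
    return [b * window for b, _ in ranked[:top_k]]

# Synthetic comment stream: a burst around t ≈ 30 s amid sparse chatter.
comments = [1.0, 5.0, 28.0, 29.5, 30.2, 31.0, 33.8, 55.0, 80.0]
print(density_highlights(comments, window=10.0))  # → [30.0]
```

A purely density-based detector like this picks up any burst of activity regardless of what the comments say; the paper's point is that adding sentiment features from the comment text lets the detector distinguish emotionally charged highlight shots from mere activity spikes.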




    Published In

    HOTPOST '15: Proceedings of the 7th International Workshop on Hot Topics in Planet-scale mObile computing and online Social neTworking
    June 2015
    62 pages
    ISBN:9781450335171
    DOI:10.1145/2757513

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. time-sync comment
    2. topic model
    3. video highlight extraction

    Qualifiers

    • Research-article

    Funding Sources

    • Youth Science and Technology Foundation of Shanghai

    Conference

    MobiHoc'15

    Acceptance Rates

    HOTPOST '15 paper acceptance rate: 5 of 10 submissions (50%); overall acceptance rate: 5 of 10 submissions (50%)

    Article Metrics

    • Downloads (Last 12 months)26
    • Downloads (Last 6 weeks)2
    Reflects downloads up to 26 Sep 2024

    Cited By

    • (2023) Comprehending the Gossips: Meme Explanation in Time-Sync Video Comment via Multimodal Cues. ACM Transactions on Asian and Low-Resource Language Information Processing, 22(8):1-17. DOI: 10.1145/3612920. Online publication date: 24-Aug-2023.
    • (2023) Visual-audio correspondence and its effect on video tipping. Information Processing and Management, 60(3). DOI: 10.1016/j.ipm.2023.103347. Online publication date: 1-May-2023.
    • (2022) CoEvo-Net: Coevolution Network for Video Highlight Detection. IEEE Transactions on Circuits and Systems for Video Technology, 32(6):3788-3797. DOI: 10.1109/TCSVT.2021.3113505. Online publication date: Jun-2022.
    • (2022) Video Content Classification Using Time-Sync Comments and Titles. 2022 7th International Conference on Cloud Computing and Big Data Analytics (ICCCBDA), 252-258. DOI: 10.1109/ICCCBDA55098.2022.9778285. Online publication date: 22-Apr-2022.
    • (2022) An Autonomous Data Collection Pipeline for Online Time-Sync Comments. 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC), 327-336. DOI: 10.1109/COMPSAC54236.2022.00053. Online publication date: Jun-2022.
    • (2022) Multimodal learning model based on video-audio-chat feature fusion for detecting e-sports highlights. Applied Soft Computing, 126:109285. DOI: 10.1016/j.asoc.2022.109285. Online publication date: Sep-2022.
    • (2022) Video emotion analysis enhanced by recognizing emotion in video comments. International Journal of Data Science and Analytics, 14(2):175-189. DOI: 10.1007/s41060-022-00317-0. Online publication date: 19-Mar-2022.
    • (2021) Video Episode Boundary Detection with Joint Episode-Topic Model. 2020 25th International Conference on Pattern Recognition (ICPR), 2049-2056. DOI: 10.1109/ICPR48806.2021.9412630. Online publication date: 10-Jan-2021.
    • (2021) Aligned variational autoencoder for matching danmaku and video storylines. Neurocomputing, 454:228-237. DOI: 10.1016/j.neucom.2021.04.118. Online publication date: Sep-2021.
    • (2021) Entity-level sentiment prediction in Danmaku video interaction. The Journal of Supercomputing. DOI: 10.1007/s11227-021-03652-4. Online publication date: 9-Feb-2021.
