
Video clip recommendation model by sentiment analysis of time-sync comments

Published: 01 December 2020

Abstract

With the advent of video time-sync comments, users can not only comment on videos on the Internet but also share their feelings with others. However, the number of videos on the Internet is so large that users do not have the time and energy to watch them all, so recommending videos that suit each user has become an important problem. Traditional video sentiment analysis methods do not work effectively in this setting, and their results are difficult to interpret. In this paper, an emotion recognition algorithm based on time-sync comments is proposed as a basis for recommending video clips. First, we give a formal description of video clip recommendation based on sentiment analysis. Second, by constructing a classification of time-sync comments based on the Latent Dirichlet Allocation (LDA) topic model, we evaluate the emotion vectors of the words in time-sync comments. Video clips are then recommended according to the emotion relationships among them. The experimental results show that the proposed model is effective in analyzing the complex sentiment of different kinds of text information.
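Illustrative sketch (Python, assuming an environment with scikit-learn and NumPy): the pipeline described above, LDA topics over time-sync comments, word-level emotion vectors, and emotion-based clip matching, could look roughly like the following. The clips, comments, emotion lexicon, topic count, and the cosine-similarity recommendation rule are hypothetical placeholders for illustration, not the authors' implementation.

    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Hypothetical time-sync comments grouped by video clip.
    clips = {
        "clip_01": ["so funny lol", "best scene ever", "laughing so hard"],
        "clip_02": ["this is sad", "crying now", "heartbreaking moment"],
    }

    # Flatten comments and remember which clip each one belongs to.
    docs, clip_ids = [], []
    for cid, comments in clips.items():
        docs.extend(comments)
        clip_ids.extend([cid] * len(comments))

    # Bag-of-words features for the comments.
    X = CountVectorizer().fit_transform(docs)

    # Fit an LDA topic model; the number of topics is an arbitrary choice here.
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    doc_topics = lda.fit_transform(X)  # (n_comments, n_topics) topic mixtures

    # Topic mixture of each comment; the paper builds its comment
    # classification on top of such distributions, here we only inspect them.
    print(np.round(doc_topics, 2))

    # Toy word-level emotion lexicon (word -> [joy, sadness]); a real system
    # would use an affect resource such as a WordNet-Affect style lexicon.
    lexicon = {"funny": [1, 0], "laughing": [1, 0], "sad": [0, 1], "crying": [0, 1]}

    def clip_emotion_vector(cid):
        """Average the emotion vectors of lexicon words in a clip's comments."""
        vecs = [lexicon[w] for c in clips[cid] for w in c.split() if w in lexicon]
        return np.mean(vecs, axis=0) if vecs else np.zeros(2)

    def recommend(query_cid):
        """Return the other clip whose emotion vector is most cosine-similar."""
        q = clip_emotion_vector(query_cid)
        best, best_sim = None, -1.0
        for cid in clips:
            if cid == query_cid:
                continue
            v = clip_emotion_vector(cid)
            denom = np.linalg.norm(q) * np.linalg.norm(v)
            sim = float(q @ v) / denom if denom else 0.0
            if sim > best_sim:
                best, best_sim = cid, sim
        return best, best_sim

    print(recommend("clip_01"))

The flat average over a toy lexicon stands in for the paper's word emotion vectors, and cosine similarity stands in for its emotion relationships among clips; both are simplifications made only to show the shape of the pipeline.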


Cited By

  • (2024) Personalized time-sync comment generation based on a multimodal transformer. Multimedia Systems 30(2). https://doi.org/10.1007/s00530-024-01301-3. Online publication date: 30 March 2024.
  • (2023) A survey on sentiment analysis and its applications. Neural Computing and Applications 35(29):21567-21601. https://doi.org/10.1007/s00521-023-08941-y. Online publication date: 1 October 2023.


Published In

Multimedia Tools and Applications, Volume 79, Issue 45-46
December 2020
1330 pages

Publisher

Kluwer Academic Publishers

United States

Publication History

Published: 01 December 2020
Accepted: 03 April 2019
Revision received: 03 March 2019
Received: 18 June 2018

Author Tags

  1. Video clip recommendation
  2. Time-sync comment sentiment
  3. Topic modeling
  4. Sentiment analysis
  5. Emotion vector

Qualifiers

  • Research-article
