

Automatic music emotion classification using hashtag graph

  • S.I.: Emotion Recognition in Speech

International Journal of Speech Technology


Abstract

Music is seamlessly integrated into the routine life of human beings. It conveys emotion and shapes the listener's mood, and a person's present state of mind is closely tied to the music they hear. Distinguishing human emotions on the basis of pitch, rhythm, harmony, melody, and interval is a tedious process, and it is typically handled by machine learning approaches. However, the classification models currently used for predicting emotions are not very efficient. To address this problem, a novel approach based on hashtag graph generation is proposed for automatic emotion detection. The proposed method consists of two steps: a training process and a testing process. In this paper, the proposed method is compared with support vector machines, the k-nearest neighbour approach, and a convolutional neural network in terms of accuracy, precision, recall, specificity, F-measure, geometric mean, root mean square error, and computational cost. The proposed technique achieves the best performance on all of these evaluation measures.
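
The abstract names the evaluation measures without giving their formulas. As a point of reference, the sketch below computes them from a per-class (one-vs-rest) confusion matrix. This is a minimal illustration in Python, not the paper's implementation; the function names and example counts are hypothetical.

```python
import math

def binary_metrics(tp, fp, fn, tn):
    """Evaluation measures listed in the abstract, computed from a 2x2
    confusion matrix (one emotion class vs. the rest)."""
    accuracy    = (tp + tn) / (tp + fp + fn + tn)
    precision   = tp / (tp + fp) if (tp + fp) else 0.0
    recall      = tp / (tp + fn) if (tp + fn) else 0.0   # sensitivity
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    f_measure   = (2 * precision * recall / (precision + recall)
                   if (precision + recall) else 0.0)
    g_mean      = math.sqrt(recall * specificity)  # geometric mean
    return {
        "accuracy": accuracy,
        "precision": precision,
        "recall": recall,
        "specificity": specificity,
        "f_measure": f_measure,
        "geometric_mean": g_mean,
    }

def rmse(predicted, actual):
    """Root mean square error between predicted and true emotion scores."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                     / len(actual))

# Hypothetical counts for one emotion class (e.g. "happy"):
print(binary_metrics(tp=80, fp=10, fn=15, tn=95))
```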



Author information

Correspondence to Deepti Chaudhary.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Chaudhary, D., Singh, N.P. & Singh, S. Automatic music emotion classification using hashtag graph. Int J Speech Technol 22, 551–561 (2019). https://doi.org/10.1007/s10772-019-09629-2
