DOI: 10.1145/2393347.2393384

On shape and the computability of emotions

Published: 29 October 2012

Abstract

We investigated how shape features in natural images influence the emotions they arouse in human beings. Shapes and their characteristics, such as roundness, angularity, simplicity, and complexity, have been postulated in the visual arts and in psychology to affect human emotional responses. However, no prior research has modeled the dimensionality of emotions aroused by roundness and angularity. Our contributions include an in-depth statistical analysis of the relationship between shapes and emotions. Through experimental results on the International Affective Picture System (IAPS) dataset, we provide evidence for the significance of roundness-angularity and simplicity-complexity in predicting emotional content in images. We combine our shape features with other state-of-the-art features to show a gain in prediction and classification accuracy. We model emotions from a dimensional perspective in order to predict valence and arousal ratings, an approach that has advantages over modeling the traditional discrete emotional categories. Finally, we distinguish images with strong emotional content from emotionally neutral images with high accuracy.
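
To make the modeling pipeline concrete, here is a minimal illustrative sketch, not the authors' implementation, of how shape statistics could be extracted from an image and regressed onto a valence rating: contour circularity serves as a rough roundness cue, corner density as a rough angularity cue, and a support vector regressor maps the features to the continuous rating (an arousal model would be fit the same way). The helper names shape_features and fit_valence_model, the specific feature definitions, and the assumed lists of image paths and mean IAPS valence ratings are all hypothetical; the sketch assumes OpenCV (4.x API), NumPy, and scikit-learn.

# Illustrative sketch only -- not the authors' implementation. Assumes OpenCV 4.x,
# NumPy, and scikit-learn; `paths` and `valence` are a hypothetical list of image
# files and their mean IAPS valence ratings.
import cv2
import numpy as np
from sklearn.svm import SVR

def shape_features(image_path):
    """Crude shape statistics: mean contour circularity, corner density, contour count."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise IOError("could not read " + image_path)
    edges = cv2.Canny(gray, 100, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    circularities = []
    for c in contours:
        area = cv2.contourArea(c)
        perimeter = cv2.arcLength(c, True)
        if area > 10 and perimeter > 0:
            # 4*pi*A / P^2 equals 1.0 for a perfect circle and drops for angular shapes.
            circularities.append(4.0 * np.pi * area / (perimeter ** 2))
    corners = cv2.goodFeaturesToTrack(gray, 500, 0.01, 5)
    corner_density = 0.0 if corners is None else len(corners) / float(gray.size)
    mean_circularity = float(np.mean(circularities)) if circularities else 0.0
    return [mean_circularity, corner_density, float(len(contours))]

def fit_valence_model(paths, valence):
    """Fit a support vector regressor mapping shape features to valence ratings."""
    X = np.array([shape_features(p) for p in paths])
    model = SVR(kernel="rbf", C=1.0)
    model.fit(X, np.asarray(valence, dtype=float))  # an arousal model would be fit the same way
    return model

Given held-out images, calling model.predict on their shape feature vectors would yield continuous valence estimates; whether such simple roundness and angularity cues actually carry predictive signal is what the paper's statistical analysis and classification experiments evaluate.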


Published In

MM '12: Proceedings of the 20th ACM international conference on Multimedia
October 2012
1584 pages
ISBN:9781450310895
DOI:10.1145/2393347
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. human emotion
  2. psychology
  3. shape features

Qualifiers

  • Research-article

Conference

MM '12: ACM Multimedia Conference
October 29 - November 2, 2012
Nara, Japan

Acceptance Rates

Overall Acceptance Rate 2,145 of 8,556 submissions, 25%

Article Metrics

  • Downloads (Last 12 months)142
  • Downloads (Last 6 weeks)14
Reflects downloads up to 23 Nov 2024

Cited By

  • (2024) Conveying Emotions through Shape-changing to Children with and without Visual Impairment. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, pp. 1-16. DOI: 10.1145/3613904.3642525. Online publication date: 11-May-2024.
  • (2024) "I'm Not Touching You. It's The Robot!": Inclusion Through A Touch-Based Robot Among Mixed-Visual Ability Children. Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, pp. 511-521. DOI: 10.1145/3610977.3634992. Online publication date: 11-Mar-2024.
  • (2024) Color Enhanced Cross Correlation Net for Image Sentiment Analysis. IEEE Transactions on Multimedia, 26, pp. 4097-4109. DOI: 10.1109/TMM.2021.3118208. Online publication date: 1-Jan-2024.
  • (2024) HICEM: A High-Coverage Emotion Model for Artificial Emotional Intelligence. IEEE Transactions on Affective Computing, 15(3), pp. 1136-1152. DOI: 10.1109/TAFFC.2023.3324902. Online publication date: Jul-2024.
  • (2024) Unmasking Emotions: Deep Neural Networks for Image-Based Emotion Recognition. 2024 2nd International Conference on Intelligent Data Communication Technologies and Internet of Things (IDCIoT), pp. 1045-1051. DOI: 10.1109/IDCIoT59759.2024.10467471. Online publication date: 4-Jan-2024.
  • (2024) How Does Aesthetic Design Affect Continuance Intention in In-Vehicle Infotainment Systems? An Exploratory Study. International Journal of Human–Computer Interaction, pp. 1-16. DOI: 10.1080/10447318.2023.2301253. Online publication date: 12-Jan-2024.
  • (2024) Object aroused emotion analysis network for image sentiment analysis. Knowledge-Based Systems, 286, 111429. DOI: 10.1016/j.knosys.2024.111429. Online publication date: Feb-2024.
  • (2024) ClKI: closed-loop and knowledge iterative via self-distillation for image sentiment analysis. International Journal of Machine Learning and Cybernetics, 15(7), pp. 2843-2862. DOI: 10.1007/s13042-023-02068-1. Online publication date: 16-Jan-2024.
  • (2024) A supervised contrastive learning-based model for image emotion classification. World Wide Web, 27(3). DOI: 10.1007/s11280-024-01260-9. Online publication date: 24-Apr-2024.
  • (2023) A reliable and robust online validation method for creating a novel 3D Affective Virtual Environment and Event Library (AVEL). PLOS ONE, 18(4), e0278065. DOI: 10.1371/journal.pone.0278065. Online publication date: 13-Apr-2023.
