
Which facial profile do humans expect after seeing a frontal view? A comparison with a linear face model

Published: 02 August 2012

Abstract

Manipulated versions of three-dimensional faces that have different profiles but almost the same appearance in frontal views provide a novel way to investigate whether and how humans use class-specific knowledge to infer depth from images of faces. After seeing a frontal view, participants have to select the profile that matches that view. The candidate profiles are the original (ground truth), the average profile, the profile of a randomly chosen other face, and two solutions computed with a linear face model (3D Morphable Model): one based on the 2D vertex positions and one based on the pixel colors in the frontal view. The human responses demonstrate that participants neither guess nor simply choose the average profile. The results also indicate that they actually use the information from the frontal view rather than relying only on the plausibility of the profiles per se. All our findings are fully consistent with a correlation-based inference in a linear face model. The results also verify that the 3D reconstructions from our computational algorithms (stimuli 4 and 5) are similar to what humans expect, because they are selected as the true profile about as often as the ground-truth profiles. Our experiments shed new light on the mechanisms of human face perception and present a new quality measure for 3D reconstruction algorithms.
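To make the vertex-based reconstruction described above more concrete, the sketch below shows how a profile could be inferred from the 2D vertex positions of a frontal view within a linear face model. This is a minimal illustration under stated assumptions, not the authors' implementation: the variable names, the orthographic frontal projection, and the ridge regularisation standing in for the model prior are all assumptions introduced here.

```python
# Minimal sketch (not the authors' code): inferring 3D face shape, and hence a
# profile, from 2D vertex positions of a frontal view with a linear face model.
# All names and the regularisation constant are illustrative assumptions.
import numpy as np

def fit_coefficients_from_frontal_view(mean_shape, basis, observed_xy, reg=0.1):
    """Estimate model coefficients from 2D vertex positions.

    mean_shape : (3n,) average 3D shape, vertices stacked as (x1, y1, z1, ...)
    basis      : (3n, k) principal components of the linear face model
    observed_xy: (2n,) x/y coordinates of the n vertices in the frontal view
    reg        : ridge term standing in for the model's prior on coefficients
    """
    n = mean_shape.size // 3
    # An orthographic frontal projection keeps only the x and y rows.
    keep = np.arange(3 * n).reshape(n, 3)[:, :2].ravel()
    A = basis[keep, :]                    # projected basis, shape (2n, k)
    b = observed_xy - mean_shape[keep]    # deviation of the observed frontal view
    # Regularised least squares: a simple linear (correlation-based) inference.
    coeffs = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ b)
    return coeffs

def reconstruct_profile(mean_shape, basis, coeffs):
    """Full 3D shape implied by the coefficients; its y/z coordinates give the profile."""
    shape = mean_shape + basis @ coeffs
    return shape.reshape(-1, 3)[:, 1:]    # (y, z) of each vertex, i.e. the side view
```

The color-based solution mentioned in the abstract would set up the same kind of regularised linear regression, only with pixel intensities of the frontal view in place of vertex coordinates; in both cases the inferred profile is a linear, correlation-based function of the observed frontal data.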




Published In

ACM Transactions on Applied Perception, Volume 9, Issue 3
July 2012
74 pages
ISSN: 1544-3558
EISSN: 1544-3965
DOI: 10.1145/2325722
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 02 August 2012
Accepted: 01 June 2012
Revised: 01 June 2012
Received: 01 May 2012
Published in TAP Volume 9, Issue 3


Author Tags

  1. Depth
  2. Morphable Model
  3. faces
  4. shape perception

Qualifiers

  • Research-article
  • Research
  • Refereed

Funding Sources

  • INBEKI


Cited By

  • (2023) General lighting can overcome accidental viewing. i-Perception 14, 6. DOI: 10.1177/20416695231215604. Online publication date: 6-Dec-2023.
  • (2020) On the Perception Analysis of User Feedback for Interactive Face Retrieval. ACM Transactions on Applied Perception 17, 3, 1-20. DOI: 10.1145/3403964. Online publication date: 3-Aug-2020.
  • (2020) 3D Morphable Face Models—Past, Present, and Future. ACM Transactions on Graphics 39, 5, 1-38. DOI: 10.1145/3395208. Online publication date: 9-Jun-2020.
  • (2019) What Does 2D Geometric Information Really Tell Us About 3D Face Shape? International Journal of Computer Vision. DOI: 10.1007/s11263-019-01197-x. Online publication date: 25-Jul-2019.
  • (2015) Exploration of the correlations of attributes and features in faces. In 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), 1-8. DOI: 10.1109/FG.2015.7163124. Online publication date: May-2015.
  • (2015) Hallucination of facial details from degraded images using 3D face models. Image and Vision Computing 40, C, 49-64. DOI: 10.1016/j.imavis.2015.06.004. Online publication date: 1-Aug-2015.
