
Noise-based enhancement for foveated rendering

Published: 22 July 2022

Abstract

Human visual sensitivity to spatial details declines towards the periphery. Novel image synthesis techniques, so-called foveated rendering, exploit this observation and reduce the spatial resolution of synthesized images for the periphery, avoiding the synthesis of high-spatial-frequency details that are costly to generate but not perceived by a viewer. However, contemporary techniques do not make a clear distinction between the range of spatial frequencies that must be reproduced and those that can be omitted. For a given eccentricity, there is a range of frequencies that are detectable but not resolvable. While the accurate reproduction of these frequencies is not required, an observer can detect their absence if completely omitted. We use this observation to improve the performance of existing foveated rendering techniques. We demonstrate that this specific range of frequencies can be efficiently replaced with procedural noise whose parameters are carefully tuned to image content and human perception. Consequently, these frequencies do not have to be synthesized during rendering, allowing more aggressive foveation, and they can be replaced by noise generated in a less expensive post-processing step, leading to improved performance of the rendering system. Our main contribution is a perceptually-inspired technique for deriving the parameters of the noise required for the enhancement and its calibration. The method operates on rendering output and runs at rates exceeding 200 FPS at 4K resolution, making it suitable for integration with real-time foveated rendering systems for VR and AR devices. We validate our results and compare them to the existing contrast enhancement technique in user experiments.
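The core idea, replacing detectable-but-not-resolvable peripheral frequencies with cheap noise instead of rendering them, can be sketched as follows. This is an illustrative simplification, not the paper's method: the paper tunes procedural-noise parameters to image content and a perceptual model, whereas this sketch simply ramps the amplitude of white noise with eccentricity (the function name, the 5-degree foveal radius, and the `noise_gain` parameter are all hypothetical).

```python
import numpy as np

def foveated_noise_enhancement(foveated_img, gaze_xy, ppd=30.0,
                               noise_gain=0.1, seed=0):
    """Add band-limited noise to the periphery of a grayscale foveated image.

    foveated_img : 2D array in [0, 1], output of a foveated renderer.
    gaze_xy      : (x, y) gaze position in pixels.
    ppd          : assumed pixels per visual degree.
    """
    h, w = foveated_img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Eccentricity of each pixel in visual degrees.
    ecc = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1]) / ppd
    # High-frequency white noise as a cheap stand-in for the procedural
    # (e.g. Gabor) noise used in the paper.
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((h, w))
    # Amplitude ramp: zero inside a 5-degree fovea, growing linearly
    # toward the periphery (both constants are illustrative).
    amp = noise_gain * np.clip((ecc - 5.0) / 20.0, 0.0, 1.0)
    return np.clip(foveated_img + amp * noise, 0.0, 1.0)
```

In a real system this step would run as a post-process after foveated rendering, which is what allows the renderer itself to foveate more aggressively.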

Supplemental Material

  • MP4 File: supplemental material
  • MP4 File: presentation
  • SRT File: presentation
  • ZIP File: supplemental material



Published In

ACM Transactions on Graphics, Volume 41, Issue 4
July 2022, 1978 pages
ISSN: 0730-0301
EISSN: 1557-7368
DOI: 10.1145/3528223

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. foveated rendering
  2. image enhancement

Qualifiers

  • Research-article
