EyeNeRF: a hybrid representation for photorealistic synthesis, animation and relighting of human eyes

Published: 22 July 2022

Abstract

A unique challenge in creating high-quality animatable and relightable 3D avatars of real people is modeling human eyes, particularly in conjunction with the surrounding periocular face region. The challenge of synthesizing eyes is multi-faceted, as it requires 1) appropriate representations for the various components of the eye and the periocular region for coherent viewpoint synthesis, capable of representing diffuse, refractive and highly reflective surfaces, 2) disentangling skin and eye appearance from environmental illumination so that it may be rendered under novel lighting conditions, and 3) capturing eyeball motion and the deformation of the surrounding skin to enable re-gazing.
These challenges have traditionally necessitated expensive and cumbersome capture setups to obtain high-quality results, and even then, holistic modeling of the full eye region has remained elusive. We present a novel geometry and appearance representation that enables high-fidelity capture and photorealistic animation, view synthesis and relighting of the eye region using only a sparse set of lights and cameras. Our hybrid representation combines an explicit parametric surface model for the eyeball surface with implicit deformable volumetric representations for the periocular region and the interior of the eye. This hybrid model is designed specifically to address the various parts of this exceptionally challenging facial area: the explicit eyeball surface allows modeling refraction and high-frequency specular reflection at the cornea, whereas the implicit representation is well suited to modeling lower-frequency skin reflection via spherical harmonics and can represent non-surface structures such as hair (i.e., eyebrows) or highly diffuse volumetric bodies (i.e., the sclera), both of which are a challenge for explicit surface models. Tightly integrating the two representations in a joint framework allows controlled photoreal image synthesis and joint optimization of both the geometry parameters of the eyeball and the implicit neural network in continuous 3D space. We show that for high-resolution close-ups of the human eye, our model can synthesize high-fidelity animated gaze from novel views under unseen illumination conditions, enabling the generation of visually rich eye imagery.
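
The hybrid rendering idea sketched in the abstract, refracting rays at an explicit eyeball surface and volume-rendering everything else through an implicit density/color field, can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: the spherical eyeball, the toy density field, the sampling scheme, and all function names below are illustrative assumptions, and the corneal refractive index of roughly 1.376 is simply a commonly quoted value.

```python
# Conceptual sketch (not the paper's code): a hybrid renderer that bends a ray at an
# explicit eyeball sphere and then volume-renders it through an implicit field.
import numpy as np

ETA_AIR, ETA_CORNEA = 1.0, 1.376   # commonly quoted corneal refractive index

def refract(d, n, eta_ratio):
    """Snell's law refraction of unit direction d at unit normal n (facing the ray)."""
    cos_i = -np.dot(n, d)
    sin2_t = eta_ratio**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:                       # total internal reflection: reflect instead
        return d - 2.0 * np.dot(d, n) * n
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta_ratio * d + (eta_ratio * cos_i - cos_t) * n

def intersect_sphere(o, d, center, radius):
    """Nearest positive ray/sphere intersection distance, or None if the ray misses."""
    oc = o - center
    b = np.dot(oc, d)
    disc = b**2 - (np.dot(oc, oc) - radius**2)
    if disc < 0.0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 1e-6 else None

def toy_field(x):
    """Stand-in for the learned implicit volume: returns (density, rgb) at point x."""
    density = np.exp(-np.linalg.norm(x)**2)   # soft blob around the origin
    rgb = 0.5 + 0.5 * np.tanh(x)              # arbitrary smooth color
    return density, rgb

def render_ray(o, d, eye_center, eye_radius, n_samples=64, t_far=4.0):
    """Volume-render one ray, bending it at the explicit eyeball surface if it is hit."""
    t_hit = intersect_sphere(o, d, eye_center, eye_radius)
    if t_hit is not None:
        p = o + t_hit * d
        n = (p - eye_center) / eye_radius
        d = refract(d, n, ETA_AIR / ETA_CORNEA)   # continue into the eye interior
        d /= np.linalg.norm(d)
        o = p
    # plain NeRF-style quadrature along the (possibly refracted) ray
    ts = np.linspace(1e-3, t_far, n_samples)
    dt = ts[1] - ts[0]
    color, transmittance = np.zeros(3), 1.0
    for t in ts:
        sigma, rgb = toy_field(o + t * d)
        alpha = 1.0 - np.exp(-sigma * dt)
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
    return color

if __name__ == "__main__":
    c = render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]),
                   eye_center=np.zeros(3), eye_radius=1.2)
    print("rendered color:", c)
```

The ordering here, intersect the explicit surface first and only then fall back to straight-line volume sampling, is the part that mirrors the hybrid split described above; a real implementation would also model the exit interface, Fresnel terms and specular reflection at the cornea, and would query a learned network instead of `toy_field`.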

Supplemental Material

• MP4 File: supplemental material
• MP4 File: presentation
• SRT File: presentation

    Published In

    ACM Transactions on Graphics, Volume 41, Issue 4 (July 2022), 1978 pages
    ISSN: 0730-0301
    EISSN: 1557-7368
    DOI: 10.1145/3528223
    This work is licensed under a Creative Commons Attribution 4.0 International License.

    Publisher

    Association for Computing Machinery, New York, NY, United States

    Publication History

    Published: 22 July 2022
    Published in TOG Volume 41, Issue 4

    Author Tags

    1. HDR rendering
    2. NeRF
    3. differentiable rendering
    4. eye modeling
    5. geometry deformation modeling
    6. model fitting
    7. neural rendering
    8. novel view synthesis
    9. pose optimization
    10. refraction
    11. regazing
    12. relighting
    13. specularity synthesis
    14. volumetric rendering

    Qualifiers

    • Research-article

